International Statistical Review

Dimension Reduction with Linear Discriminant Functions Based on an Odds Ratio Parameterization

Angelika van der Linde


Abstract

The association of two random elements with a positive joint probability density function is given by an odds ratio function. The covariance is an adequate description of this association only in the case of two jointly Gaussian variables. The impact of the association structure on the set-up and solution of problems of linear discrimination is investigated, and the results are related to standard techniques of multivariate analysis, particularly to canonical correlation analysis, the analysis of contingency tables, discriminant analysis and multidimensional scaling.
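For readers without access to the full text, the following sketch records the standard form of the odds ratio parameterization the abstract refers to; the notation (density p, reference point (x_0, y_0)) is the generic one used in the literature and is not quoted from the paper itself. For a joint density p(x, y) > 0 and a fixed reference point (x_0, y_0), the odds ratio function is

  \psi(x, y) = \frac{p(x, y)\, p(x_0, y_0)}{p(x, y_0)\, p(x_0, y)} .

For a pair of jointly Gaussian random variables with covariance matrix \Sigma = (\sigma_{ij}), the log odds ratio function is bilinear,

  \log \psi(x, y) = \frac{\sigma_{12}}{\det \Sigma}\, (x - x_0)(y - y_0) ,

so the association is determined entirely by the covariance, which is the sense in which the covariance is an adequate description only in the Gaussian case.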

Article information

Source
Internat. Statist. Rev., Volume 71, Number 3 (2003), 629-666.

Dates
First available in Project Euclid: 21 October 2003

Permanent link to this document
https://projecteuclid.org/euclid.isr/1066768711

Zentralblatt MATH identifier
1114.62339

Keywords
Association; Odds ratios; Kullback-Leibler distance; Mutual information; Canonical correlation analysis; Contingency tables; Discriminant analysis; Multidimensional scaling; Correspondence analysis; Logistic regression

Citation

van der Linde, Angelika. Dimension Reduction with Linear Discriminant Functions Based on an Odds Ratio Parameterization. Internat. Statist. Rev. 71 (2003), no. 3, 629-666. https://projecteuclid.org/euclid.isr/1066768711


