There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown. The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory and questionable conclusions, and has kept statisticians from working on a large range of interesting current problems. Algorithmic modeling, both in theory and practice, has developed rapidly in fields outside statistics. It can be used both on large complex data sets and as a more accurate and informative alternative to data modeling on smaller data sets. If our goal as a field is to use data to solve problems, then we need to move away from exclusive dependence on data models and adopt a more diverse set of tools.
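The contrast between the two cultures can be made concrete with a toy sketch. The data, model choices, and function names below are all hypothetical illustrations, not taken from the article: a parametric "data model" (simple linear least squares) is fit to data from a nonlinear mechanism, alongside an "algorithmic" predictor (k-nearest neighbors) that assumes no functional form.

```python
# Illustrative only: a toy contrast between the two modeling cultures.
# All data and function names are hypothetical, not from the article.

def linear_fit(xs, ys):
    """Data-modeling culture: assume y = a + b*x and fit by least squares."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return lambda x: a + b * x

def knn_predict(xs, ys, x0, k=2):
    """Algorithmic culture: no assumed form; average the k nearest responses."""
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))[:k]
    return sum(ys[i] for i in nearest) / k

xs = list(range(10))
ys = [x * x for x in xs]   # a nonlinear mechanism the linear model misspecifies

lin_pred = linear_fit(xs, ys)(4.5)     # 28.5: biased, the assumed form is wrong
knn_pred = knn_predict(xs, ys, 4.5)    # 20.5: close to the true value 20.25
```

When the assumed stochastic model is wrong, its conclusions inherit that error; the algorithmic predictor, making no such assumption, tracks the data mechanism more closely here.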
There is a wide array of smoothing methods available for finding structure in data. A general framework is developed which shows that many of these can be viewed as a projection of the data with respect to appropriate norms. The underlying vector space is an unusually large product space, which allows inclusion of a wide range of smoothers in our setup (including many methods not typically considered to be projections). We give several applications of this simple geometric interpretation of smoothing. A major payoff is the natural and computationally frugal incorporation of constraints. Our point of view also motivates new estimates and helps us understand the finite-sample and asymptotic behavior of these estimates.
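A minimal sketch of the projection viewpoint, under simplifying assumptions not from the paper: the bin-means smoother is an orthogonal projection, with respect to the ordinary Euclidean norm, onto the subspace of vectors constant within each bin, and projections are idempotent, so smoothing the smoothed data changes nothing.

```python
# Hedged sketch (not the paper's general framework): the bin-means smoother
# as an orthogonal projection onto vectors constant within each bin.

def bin_smooth(y, bins):
    """Replace each value by the mean of its bin; `bins` lists index groups."""
    out = list(y)
    for idx in bins:
        m = sum(y[i] for i in idx) / len(idx)
        for i in idx:
            out[i] = m
    return out

y = [1.0, 3.0, 2.0, 8.0, 6.0, 4.0]
bins = [[0, 1, 2], [3, 4, 5]]

once = bin_smooth(y, bins)       # [2.0, 2.0, 2.0, 6.0, 6.0, 6.0]
twice = bin_smooth(once, bins)   # identical: P(P(y)) == P(y)
```

Idempotence is the hallmark of a projection; the paper's contribution is showing that many smoothers far less obviously of this form fit the same geometric picture once the right (product) space and norm are chosen.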
A bivariate distribution can sometimes be characterized completely by properties of its conditional distributions. The present article surveys available research in this area. Questions of compatibility of conditional specifications are addressed, as are characterizations of distributions based on their having conditional distributions that are members of prescribed parametric families of distributions. The topics of compatibility and near compatibility of conditional distributions are discussed. Estimation strategies for conditionally specified distributions are summarized. Additionally, certain conditionally specified densities are shown to provide convenient and flexible conjugate prior families in multiparameter Bayesian settings.
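In the finite discrete case, one compatibility idea can be sketched concretely (the numbers below are hypothetical, not from the survey): candidate conditional matrices A[i][j] = P(X=i | Y=j) and B[i][j] = P(Y=j | X=i) can only arise from a common joint distribution if the ratio A[i][j]/B[i][j] factors as a product of a row term and a column term; on a 2x2 support with positive entries this reduces to a single cross-ratio check.

```python
# Hedged sketch of a discrete compatibility check; example numbers are
# hypothetical. A[i][j] = P(X=i|Y=j), B[i][j] = P(Y=j|X=i).

def cross_ratio(A, B):
    """For 2x2 supports, A[i][j]/B[i][j] factors as t_i * s_j
    (a necessary condition for compatibility) iff this equals 1."""
    R = [[A[i][j] / B[i][j] for j in range(2)] for i in range(2)]
    return (R[0][0] * R[1][1]) / (R[0][1] * R[1][0])

# Conditionals derived from the joint J = [[0.1, 0.2], [0.3, 0.4]]:
A = [[0.25, 1 / 3], [0.75, 2 / 3]]   # columns of J divided by column sums
B = [[1 / 3, 2 / 3], [3 / 7, 4 / 7]] # rows of J divided by row sums
compatible = abs(cross_ratio(A, B) - 1.0) < 1e-9        # True

# Perturb A: no joint distribution has these two conditionals.
A_bad = [[0.4, 1 / 3], [0.6, 2 / 3]]
incompatible = abs(cross_ratio(A_bad, B) - 1.0) > 1e-9  # True
```

The check works because a joint J with row sums r_i and column sums c_j gives A[i][j]/B[i][j] = r_i / c_j exactly, which is a rank-one (product) matrix.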
Factor analysis and its extensions are widely used in the social and behavioral sciences, and can be considered useful tools for exploration and model fitting in multivariate analysis. Despite its popularity in applications, factor analysis has attracted rather limited attention from statisticians. Three issues, identification ambiguity, heavy reliance on normality, and limitation to linearity, may have contributed to statisticians' lack of interest in factor analysis. In this paper, the statistical contributions to the first two issues are reviewed, and the third issue is addressed in detail. Linear models can be unrealistic even as an approximation in many applications, and often do not fit the data well without increasing the number of factors beyond the level explainable by the subject-matter theory. As an exploratory model, the conventional factor analysis model fails to address nonlinear structure underlying multivariate data. It is argued here that factor analysis does not need to be restricted to linearity and that nonlinear factor analysis can be formulated and carried out as a useful statistical method. In particular, for a general parametric nonlinear factor analysis model, the errors-in-variables parameterization is suggested as a sensible way to formulate the model, and two procedures for model fitting are introduced and described. Tests for the goodness-of-fit of the model are also proposed. The procedures are evaluated in a simulation study. An example from personality testing is used to illustrate the issues and the methods.
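As background for the linear model being generalized, a small sketch with hypothetical numbers (not the paper's data or method): a single-factor linear model with loadings l implies Cov(Xj, Xk) = lj*lk for j != k, so with three indicators each loading is recoverable from covariance "triads" via the classical Spearman-style formula.

```python
import math

# Illustrative sketch of the linear one-factor structure the paper
# generalizes; covariances are hypothetical population values.
# One factor with loadings (l1, l2, l3) implies Cov(Xj, Xk) = lj*lk, j != k.

def triad_loadings(s12, s13, s23):
    """Recover loadings from the three off-diagonal covariances."""
    l1 = math.sqrt(s12 * s13 / s23)
    l2 = math.sqrt(s12 * s23 / s13)
    l3 = math.sqrt(s13 * s23 / s12)
    return l1, l2, l3

# Covariances generated by loadings (1, 2, 3): s12 = 2, s13 = 3, s23 = 6.
l1, l2, l3 = triad_loadings(2.0, 3.0, 6.0)   # recovers (1.0, 2.0, 3.0)
```

Note that only the magnitudes are recovered; the common sign of the loadings is not identified, a small instance of the identification ambiguity mentioned above.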
Ramanathan Gnanadesikan was born on November 2, 1932 in Madras, India. He received his B.Sc. (Hons.) and M.A. degrees in 1952 and 1953 from the University of Madras and also studied at the Indian Statistical Institute during those same two years. In 1953, he came to the United States to pursue a doctorate in statistics at the University of North Carolina in Chapel Hill. He studied with Professor S. N. Roy and received his degree in 1957. He then began a 34-year industrial career at Procter & Gamble, Bell Laboratories and Bellcore (now Telcordia Technologies). His time in industry was interspersed with teaching assignments at the Courant Institute, Princeton University and Imperial College. He served as professor of statistics at Rutgers University from 1991 until his retirement in 1998. In 1965, Ram married his statistician wife, Mrudulla, who is well known for her work in statistical education. They have two sons, Anand, a researcher in oceanography, and Mukund, a physician specializing in child psychiatry. Ram is a Fellow of the Institute of Mathematical Statistics, the American Statistical Association and the American Association for the Advancement of Science, and an elected member of the International Statistical Institute. He was elected to the Order of the Golden Fleece for leadership while a student at the University of North Carolina in 1957. In 1989 he was honored by the Association of Indians in America for his contributions to advancing information technologies and their impact on the communications industry in the United States, and was singled out by the State of New Jersey Senate for unique contributions to arts and letters and to greater understanding between the people of India and America.