The “numerical method” in medicine goes back to Pierre Louis’ 1835 study of pneumonia and John Snow’s 1855 book on the epidemiology of cholera. Snow took advantage of natural experiments and used convergent lines of evidence to demonstrate that cholera is a waterborne infectious disease. More recently, investigators in the social and life sciences have used statistical models and significance tests to deduce cause-and-effect relationships from patterns of association; an early example is Yule’s 1899 study on the causes of poverty. In my view, this modeling enterprise has not been successful. Investigators tend to neglect the difficulties in establishing causal relations, and the mathematical complexities obscure rather than clarify the assumptions on which the analysis is based.
Formal statistical inference is, by its nature, conditional. If maintained hypotheses A, B, C,… hold, then H can be tested against the data. However, if A, B, C,… remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work—a principle honored more often in the breach than the observance. Snow’s work on cholera will be contrasted with modern studies that depend on statistical models and tests of significance. The examples may help to clarify the limits of current statistical techniques for making causal inferences from patterns of association.
In a randomized experiment, the investigator creates a clear and relatively unambiguous comparison of treatment groups by exerting tight control over the assignment of treatments to experimental subjects, ensuring that comparable subjects receive alternative treatments. In an observational study, the investigator lacks control of treatment assignments and must seek a clear comparison in other ways. Care in the choice of circumstances in which the study is conducted can greatly influence the quality of the evidence about treatment effects. This is illustrated in detail using three observational studies that use choice effectively, one each from economics, clinical psychology and epidemiology. Other studies are discussed more briefly to illustrate specific points. The design choices include (i) the choice of research hypothesis, (ii) the choice of treated and control groups, (iii) the explicit use of competing theories, rather than merely null and alternative hypotheses, (iv) the use of internal replication in the form of multiple manipulations of a single dose of treatment, (v) the use of undelivered doses in control groups, (vi) design choices to minimize the need for stability analyses, (vii) the duration of treatment and (viii) the use of natural blocks.
This paper examines the decision problems associated with measurement and remediation of environmental hazards, using the example of indoor radon (a carcinogen) as a case study. Innovative methods developed here include (1) the use of results from a previous hierarchical statistical analysis to obtain probability distributions with local variation in both predictions and uncertainties, (2) graphical methods to display the aggregate consequences of decisions by individuals and (3) alternative parameterizations for individual variation in the dollar value of a given reduction in risk. We perform cost-benefit analyses for a variety of decision strategies, as a function of home types and geography, so that measurement and remediation can be recommended where they are most effective. We also briefly discuss the sensitivity of policy recommendations and outcomes to uncertainty in inputs. For the home radon example, we estimate that if the recommended decision rule were applied to all houses in the United States, it would be possible to save the same number of lives as with the current official recommendations for about 40% less cost.
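The decision problem the abstract describes can be framed as comparing expected losses (money spent plus dollar-valued residual risk) across strategies such as "never act," "always remediate," and "measure, then remediate above a threshold." The following is a minimal illustrative sketch, not the paper's method: all dollar figures, the threshold, the post-remediation level, and the assumption of an error-free measurement are invented for the example.

```python
# Hypothetical sketch of a measure-then-remediate decision analysis.
# Every numeric constant below is an illustrative assumption, not a value
# from the paper.

MEASUREMENT_COST = 50.0    # dollars per radon test (assumed)
REMEDIATION_COST = 2000.0  # dollars to mitigate one house (assumed)
ACTION_LEVEL = 4.0         # pCi/L remediation threshold (assumed)
RISK_VALUE = 150.0         # assumed dollar value per pCi/L of exposure
REMEDIATED_LEVEL = 1.0     # assumed radon level after remediation, pCi/L

def expected_loss(strategy, radon_draws):
    """Average total loss (money spent plus dollar-valued residual risk)
    of a strategy over draws from a prior on a home's radon level."""
    total = 0.0
    for level in radon_draws:
        if strategy == "never":
            loss = RISK_VALUE * level
        elif strategy == "always_remediate":
            loss = REMEDIATION_COST + RISK_VALUE * min(level, REMEDIATED_LEVEL)
        elif strategy == "measure_then_remediate":
            # Idealization: the measurement reveals the true level exactly.
            loss = MEASUREMENT_COST
            if level > ACTION_LEVEL:
                loss += REMEDIATION_COST + RISK_VALUE * min(level, REMEDIATED_LEVEL)
            else:
                loss += RISK_VALUE * level
        else:
            raise ValueError(strategy)
        total += loss
    return total / len(radon_draws)

# Two illustrative homes: one low (0.5 pCi/L) and one high (10 pCi/L).
draws = [0.5, 10.0]
for s in ("never", "measure_then_remediate", "always_remediate"):
    print(s, expected_loss(s, draws))
```

In this toy comparison the ranking of strategies depends entirely on the assumed costs and prior; the paper's contribution is to drive such comparisons with probability distributions from a hierarchical analysis that vary by home type and geography.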
Lincoln E. Moses was born on December 21, 1921, in Kansas City, Missouri. He attended San Bernardino Valley Junior College from 1937 to 1939, earning an A.A. degree, and went on to earn an A.B. in Social Sciences from Stanford University in 1941 and a Ph.D. in Statistics from Stanford University in 1950. He was Assistant Professor of Education at Teachers College, Columbia University (1950–1952); Assistant Professor of Statistics in the Department of Statistics and the Department of Preventive Medicine, Stanford University (1952–1955); Associate Professor in those departments from 1955 to 1959; and Professor of Statistics in the Department of Statistics and the Department of Health Research and Policy, Stanford University, from 1959 until his retirement in 1992. He is now Professor Emeritus. He was Executive Head of the Department of Statistics at Stanford from 1964 to 1968. He served as Associate Dean, Humanities and Sciences, Stanford University (1965–1968 and 1985–1986) and Dean of Graduate Studies, Stanford University (1969–1975). After being appointed by President Carter in 1977, he was Administrator of the Energy Information Administration, Department of Energy, from 1978 to 1980. His many recognitions and honors include Fellow, John Simon Guggenheim Memorial Foundation (1960–1961); L. L. Thurstone Distinguished Fellow, University of North Carolina (1968–1969); and Fellow, Center for Advanced Study in the Behavioral Sciences (1975–1976). He is a Fellow of the Institute of Mathematical Statistics, a Fellow of the American Statistical Association, an elected member of the International Statistical Institute, a Fellow of the American Association for the Advancement of Science, a Fellow of the American Academy of Arts and Sciences, a member of Phi Beta Kappa and a member of the Institute of Medicine. In 1980 he received the Distinguished Service Medal of the U.S. Department of Energy.