The recent report of the epidemiologic study in Woburn, Massachusetts, has focused renewed attention on the methods used by epidemiologists and other public health professionals in evaluating the health impact of environmental exposures. Much attention has been given to the statistical methods by which the data gathered in epidemiologic studies, both observational and demographic, should be analyzed. Epidemiologic methods themselves have not been accorded as much attention, although the development and validation of such techniques is vital to the progress of environmental epidemiology. An annual meeting at which recent epidemiologic and statistical methodologic advances were discussed could greatly help the epidemiologic community assimilate such knowledge quickly. Less emphasis has also been given to the means by which data are collected during a study. Several approaches to the problems environmental epidemiologists face in collecting data are discussed, such as the development of national population-based disease registries. The use of national data sets, such as the NHANES and NHDS databases, is also noted. An audit of the national vital statistics system is suggested, insofar as it can serve as an indicator of sentinel health events. Similar assessments of other national statistics systems, such as those maintained by the Centers for Disease Control, are also needed.
This paper discusses the evolving interdependent relationship between environmental sciences (such as epidemiology) and environmental law and regulation. Societal needs for expert evaluations of the potential hazards of toxic chemicals have tremendously influenced the development of toxicology and epidemiology. In this regard, much recent environmental law reflects its "shotgun wedding" with environmental science; these science-forcing laws require that regulatory agencies take action based on findings that may be at, or very often beyond, the frontiers of environmental science. Recent developments in environmental law and the growth of the animal protection movement have independently contributed to renewed interest in and heightened expectations for the role of epidemiology in developing environmental standards and actions. Those who oppose animal experimentation often argue that data on humans are required to estimate human effects; some recent laws, such as Superfund, mandate consideration of human health assessments as one of the bases for deciding whether and how best to clean up abandoned hazardous waste sites. Requiring epidemiologic confirmation of hazards, however, would make evidence of human harm a prerequisite for regulatory action. Because the animal models and statistical tests on which much environmental regulation now rests are designed to anticipate human and environmental effects, their statistical validation and development remain crucial to the development and application of environmental law. For the most part, epidemiology is best suited to confirming past risks, not to predicting and preventing future risks.
Briefly sketched is the history of the use of experts' testimony in the courts. Specific rules of federal and state courts have recently made it easier to introduce statisticians' testimony. There are dangers in the free introduction of such testimony. Ways are suggested to ensure greater reliability in experts' opinions through improvements in procedure, stronger control by the courts, pressure by outside agencies and substantive law reform.
Animals have long served as surrogates for the study of human health, particularly in defining the effects of pollutants generated by our society. Electromagnetic fields provide an example of the use of animals as models. A review of the animal model literature provides the following information in response to three basic toxicologic elements in defining whether electromagnetic fields are a hazard:
1. Various scientific committees have determined that, in general, exposure to electromagnetic fields, individually or combined, causes a response in animals. Exposure facilities must be carefully constructed and characterized to ensure that artifacts or environmental factors are not actually the cause of the reported effects.
2. Various components of the nervous system, some circadian rhythms of the body and the pineal gland appear to be responsive to electromagnetic field exposure. Data for other systems are either negative, contradictory or inconclusive. With the exception of the pineal gland, there is little reliable information on dose response, the minimum duration required for an effect and whether effects are reversible or permanent.
3. There are too few animal data available to conclude reliably that exposure represents a hazardous situation. There are questions about the significance of some of the animal data, such as changes in circadian rhythms or suppression of melatonin production, as well as concerns raised by human epidemiologic data that the animal data base does not address.
The statistical community is being approached to consider two questions: (1) Can the large amount of negative data be used in a quantifiable risk assessment methodology to provide a reasonably reliable definition of risk? (2) Can data from similar studies be statistically combined, resulting in larger experimental groups, reduced variability and potentially clearer trends or resolution of contradictory results?
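To make the second question concrete: if each study reports an effect estimate and a standard error, the studies can be pooled by inverse-variance weighting, the simplest fixed-effect meta-analysis. The sketch below is illustrative only; the numbers and the helper name pooled_effect are assumptions, not data from the studies reviewed here.

```python
import math

def pooled_effect(estimates, std_errors):
    """Fixed-effect inverse-variance pooling of per-study estimates.

    Each study is weighted by 1/se^2, so combining similar studies
    behaves like one larger experiment with reduced sampling variability.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, estimates)) / total
    pooled_se = math.sqrt(1.0 / total)
    return pooled, pooled_se

# Hypothetical log relative risks from three small studies (illustrative only).
estimate, se = pooled_effect([0.10, -0.05, 0.20], [0.15, 0.20, 0.25])
print(f"pooled log-RR: {estimate:.3f}, 95% CI half-width: {1.96 * se:.3f}")
```

Fixed-effect pooling assumes the studies estimate a common underlying effect; where exposure systems or species differ, a random-effects model that admits between-study variation would be the more defensible choice.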
The two principal methods of evaluating environmental health risks are observational epidemiologic studies of exposed populations and laboratory studies of the effects of agents on experimental animals. The principal advantages of the laboratory approach are random exposure assignment and precise exposure assessment. Epidemiologic studies have the virtue of studying the species of ultimate interest (humans) under natural exposure conditions. Given these characteristics, a strategy is offered for resolving contradictory results from the two approaches. Epidemiologic evidence of a hazard with contradictory laboratory data may indicate confounding of the human data by other risk factors or bias due to systematically erroneous data; alternatively, the animal studies might have erred in selecting an inappropriate species or exposure conditions. Where animal studies suggest a hazard but human studies do not, the human studies may be in error due to random exposure misclassification, or the animal studies may have overstated risks due to high exposure levels or selection of an unusually sensitive species. For electromagnetic fields, epidemiologic data suggest an effect on cancer risk while laboratory results are negative, possibly reflecting confounding or bias in the human research or poor selection of species or exposure conditions in the laboratory. Improved coordination across these disciplines would benefit those who must reconcile these lines of research in assessing risks to human populations.
It is difficult to think of a worse example with which to illustrate the state of the art of quantitative risk assessment than the possible risks posed by power frequency electric and magnetic fields. Nothing seems to work. We don't know how to measure dose. We don't know whether "more is worse," let alone the shape of any effects functions. The limits that one can set with bounding analysis are too broad to be of much use. Yet despite all these problems, the science is far better than that available for such widely regulated risks as sulfur air pollution. This paper briefly reviews the subject and summarizes some of the problems and lessons.
A comparison of the carcinogenic potency estimates for many chemicals reveals that different governmental agencies derive and use alternative estimates of a chemical's carcinogenic potency. This paper examines which steps in the process of deriving carcinogenic potency estimates (e.g., high- to low-dose extrapolation, bioassay choice, data set treatment) contribute to the differences within and between governmental agencies by comparing the details of the potency estimation process for four chemicals (ethylene dibromide, polychlorinated biphenyls, tetrachloroethylene and tetrachlorodibenzo-$p$-dioxin). For three of these four chemicals, all agencies used similar high- to low-dose extrapolation models, and most of the incompatibility arose from the selection and treatment of bioassay results. The comparison suggests that an inverse relationship exists between a parameter's potential contribution to incompatibility and its actual contribution; that the existing incompatibility between agencies, represented by existing differences in potency estimates, is dwarfed by the potential incompatibility; and that some, but not all, of the incompatibility can be reduced.
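One reason bioassay selection can dominate: even under the same extrapolation model, different bioassays yield different potency slopes. The sketch below uses entirely hypothetical tumor counts and a deliberately simplified one-hit model (agencies typically use multistage fits); it shows two bioassays of the same chemical implying potencies that differ by roughly 50%.

```python
import math

def one_hit_potency(dose, n_exposed, n_tumors, background=0.0):
    """Potency q under the one-hit model P(d) = 1 - exp(-q * d),
    estimated from a single bioassay dose group after adjusting for
    background incidence (Abbott's correction)."""
    p = n_tumors / n_exposed
    excess = max(p - background, 0.0) / (1.0 - background)
    return -math.log(1.0 - excess) / dose

# Two hypothetical bioassays of the same chemical (doses in mg/kg/day).
q_a = one_hit_potency(dose=50.0, n_exposed=50, n_tumors=20)
q_b = one_hit_potency(dose=100.0, n_exposed=50, n_tumors=25)
print(f"potency A: {q_a:.5f}, potency B: {q_b:.5f} per mg/kg/day")

# Low-dose risk under linear extrapolation at 0.01 mg/kg/day:
print(f"risk A: {q_a * 0.01:.2e}, risk B: {q_b * 0.01:.2e}")
```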
While risk assessment is an element of both regulatory and nonregulatory decision making, the role played by such studies in agency risk management decisions has proven to be limited. Part of the reason is the role played by considerations other than risk in the decision making of public agencies. In the risk management process, formal or informal consideration is given to at least four factors: the feasibility of controlling exposure, the costs of control and their economic impacts, the balance of costs and benefits and the importance of the product or agent suspected of causing harm. Another part of the reason is uncertainty about the findings of risk assessment, arising from methodologic limits or disagreements among analysts, and the impact of these uncertainties on policymakers. Policymakers come into office with different attitudes about how risk averse government policy should be, and uncertainty in the risk assessment process allows them considerable latitude to interpret the evidence and make decisions consistent with those attitudes. Outside groups are actively involved in reviewing and conducting risk assessments, however, and this has both brought additional resources to these tasks and served as a source of external peer review.
Insurance is essential to technologic advance and also serves other important social functions. "Insurance market failures" must therefore be evaluated so that appropriate remedial actions can be taken by private insurers and, in some instances, by government. The recent insurance crisis for companies producing and using hazardous materials is examined, with particular attention given to six factors: new tort liability rules, judicial interpretation of insurance contracts, declining interest rates, reluctant reinsurers, government policies based on "new federalism" concepts and inadequate attention to risk analysis. Improving risk analysis techniques to promote their use by insurers is identified as the fundamental reform needed to restore the private insurance function.
Markers of genetic damage are being used increasingly to understand and prevent environmentally related adverse human health effects. A major example has been the application of such markers to the prediction of chemical carcinogenicity. Over the past 15 years, hundreds of test systems in microorganisms, cell cultures and animals have been devised and applied to this end. In spite of early successes, recent results show a discouragingly low 60% agreement between the genetic tests and conventional whole-animal, long-term carcinogenicity assays. Corresponding efforts to predict the heritable mutagenicity of chemicals using genetic tests that do not involve heritability have given similar results. New technologic developments are for the first time letting us make such genotoxicity measurements directly in human subjects; examples include detection of DNA adducts, measurement of somatic mutations and improved cytogenetic methods. There is also the possibility of soon finding methods sufficiently sensitive to estimate heritable mutagenicity as a predictor of damage to future generations. These biologic markers of genotoxicity are useful for estimating human exposure and effect, for identifying toxic environments, for monitoring cancer chemotherapy and for identifying susceptible populations. They offer a major new challenge to epidemiology and public health.
This expository paper surveys a variety of statistical issues pertinent to the design and analysis of studies involving biologic markers of human genotoxic exposure. Examples with cytogenetic and mutagenic end points are presented. One principal theme is the valuable interplay of ideas among statistical analyses for in vitro, in vivo and human assays for genetic toxicity; e.g., statistical analysis of sister chromatid exchanges in vitro is suggestive of an analytical approach to the study of sister chromatid exchanges in humans. Other topics discussed include (i) combining information from a series of studies of a common suspect genotoxic exposure and (ii) the utility of historical control information.
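On point (ii), one common use of historical control information is as a reference rate against which a current treated group is tested. A minimal sketch, with made-up counts and an assumed 5% pooled historical control rate:

```python
from math import comb

def binom_sf(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical: 6 of 40 treated animals show micronuclei, compared with a
# historical control rate of 5% pooled from earlier studies.
p_value = binom_sf(6, 40, 0.05)
print(f"P(>= 6 responders | 5% historical rate) = {p_value:.4f}")
```

Treating the historical rate as known understates uncertainty; a beta-binomial formulation that propagates the variability among past control groups is the usual refinement.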
The use of biologic markers (such as DNA adducts) as more proximate measures of effective dose has many potential advantages for risk assessment. We can hope for:
- Better modeling of dose-response relationships, by avoiding the attribution of high-dose pharmacokinetic nonlinearities to the fundamental multiple-mutation process of carcinogenesis.
- Better interspecies projection of doses and risks.
- Improved evaluation of past doses in epidemiologic studies.
- Insights into the magnitude and significance of human interindividual variability.
- Possibly some identification of previously unrecognized genetic hazards.
- In the very long run, the quantification of rates of post-initiation stages in tumor development by comparative measurements of the prevalence of adducts, premalignant foci/clones and tumors as a function of age in different tissues.
Realizing these advantages will require statistically oriented professionals in risk assessment to gain familiarity with more complex, simulation-type modeling approaches with multiple points of comparison between theory and experiment, rather than the straightforward curve-fitting that has built the field to this point. It will also require some precautions to avoid mistakes and misuse of these new kinds of data and related theory.
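The first hoped-for advantage can be shown with a toy model. Suppose uptake or activation saturates (here, Michaelis-Menten kinetics with assumed constants), while risk is strictly linear in the effective dose that a marker such as adduct level would measure. Plotted against administered dose, the response then looks sublinear at high doses, a nonlinearity that curve-fitting on administered dose would wrongly attribute to the carcinogenic process itself. All numbers below are illustrative assumptions.

```python
def effective_dose(administered, vmax=100.0, km=20.0):
    """Saturable (Michaelis-Menten) activation: effective dose flattens
    as administered dose grows large relative to km."""
    return vmax * administered / (km + administered)

SLOPE = 1e-3  # assumed risk per unit effective dose (linear, no threshold)

for d in [1, 10, 100, 1000]:
    eff = effective_dose(d)
    print(f"administered {d:5d}: effective {eff:7.2f}, risk {SLOPE * eff:.4f}")
```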
Epidemiologic studies provide quantitative information about the pathologic role of a single risk factor in large populations, but available biostatistical data are not sufficient to apportion liability when exposure to more than one potential risk factor has occurred. Given this scientific void, some courts, upon a threshold demonstration of negligence, have shifted the burden of proof regarding causation to the defendant--forcing him to prove a negative--that he did not cause the plaintiff's injuries. To the extent biologic markers become a scientifically acceptable and legally reliable means of proving that exposure to a particular risk factor caused a specific disease, judicial decisions regarding disease causation can be made with scientific certainty and without subjunctive reference to the defendant's purported negligence.
An overview of the environmental radon problem is presented, with special emphasis on risk estimation and its attendant uncertainties. Although remediation of radon in an individual house is usually fairly inexpensive, aggregate costs can vary greatly, depending on how many houses are deemed hazardous to health. Picking a danger level in the presence of large uncertainties (approximately an order of magnitude separates the high and low estimates) is a difficult regulatory decision; the sketch below shows how sensitive aggregate cost is to that choice.
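A minimal sketch of that sensitivity, assuming (purely for illustration) a lognormal distribution of indoor radon concentrations, a fixed housing stock and a fixed per-house remediation cost; every parameter here is an assumption, not a measured value.

```python
import math
from statistics import NormalDist

GM, GSD = 1.0, 3.0        # assumed geometric mean (pCi/L) and geometric SD
N_HOUSES = 80_000_000     # assumed housing stock
COST = 1_500              # assumed remediation cost per house, dollars

def fraction_above(level):
    """Fraction of houses above an action level under the lognormal model."""
    z = (math.log(level) - math.log(GM)) / math.log(GSD)
    return 1.0 - NormalDist().cdf(z)

for action_level in [2.0, 4.0, 8.0, 20.0]:  # candidate action levels, pCi/L
    n = fraction_above(action_level) * N_HOUSES
    print(f"{action_level:5.1f} pCi/L: {n / 1e6:6.2f}M houses, "
          f"${n * COST / 1e9:5.1f}B aggregate")
```

Under these assumed numbers, moving the action level across the order-of-magnitude range of uncertainty shifts aggregate cost by tens of billions of dollars, which is why the choice of danger level dominates the policy problem.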
Indoor air quality (IAQ) has recently been a subject of increased concern, because (1) indoor pollutant levels and exposures frequently exceed those encountered outdoors, (2) many new products are being introduced into the indoor environment that provide increased levels of exposure and (3) energy conservation measures that reduce ventilation rates can elevate indoor pollutant concentrations. The indoor pollutant which has attracted the greatest public attention to date is radon. This paper provides information on potential sources of radon, typical indoor levels, the relationship of energy-efficient construction to these levels, the potential health effects from exposures to radon progeny and effective control strategies to mitigate indoor radon levels in residences. In addition, this paper addresses how government and other organizations have responded to concerns about indoor radon exposures.
This article provides a framework for consideration of values in the use of science in the regulatory process. The science in question includes both the assessment of technologic risk and the assessment of technologic options to reduce those risks. The focus of the inquiry is on the role of the scientist and engineer as analyst or assessor. The difficulties in separating facts and values are addressed by focusing on the central question: What level of evidence is sufficient to trigger a requirement for regulatory action? For the purposes of this article, the regulatory process includes notification of risks to interested parties, control of technologic hazards and compensation for harm caused by technology. The discussion addresses the problems in achieving both a fair outcome and a fair process in the regulatory use of science.