Statist. Sci. Volume 23, Number 3 (2008), 287-312.
Quantifying the Fraction of Missing Information for Hypothesis Testing in Statistical and Genetic Studies
Many practical studies rely on hypothesis testing procedures applied to data sets with missing information. An important part of the analysis is to determine the impact of the missing data on the performance of the test, and this can be done by properly quantifying the relative (to complete data) amount of available information. The problem is directly motivated by applications to studies, such as linkage analyses and haplotype-based association projects, designed to identify genetic contributions to complex diseases. In the genetic studies the relative information measures are needed for the experimental design, technology comparison, interpretation of the data, and for understanding the behavior of some of the inference tools. The central difficulties in constructing such information measures arise from the multiple, and sometimes conflicting, aims in practice. For large samples, we show that a satisfactory, likelihood-based general solution exists by using appropriate forms of the relative Kullback–Leibler information, and that the proposed measures are computationally inexpensive given the maximized likelihoods with the observed data. Two measures are introduced, under the null and alternative hypotheses, respectively. We illustrate the measures on data from mapping studies of inflammatory bowel disease and diabetes. For small-sample problems, which appear rather frequently in practice and sometimes in disguised forms (e.g., measuring individual contributions to a large study), the robust Bayesian approach holds great promise, though the choice of a general-purpose “default prior” is a very challenging problem. We also report several intriguing connections encountered in our investigation, such as the connection with the fundamental identity for the EM algorithm, the connection with the second CR (Chapman–Robbins) lower information bound, the connection with entropy, and connections between likelihood ratios and Bayes factors.
We hope that these seemingly unrelated connections, as well as our specific proposals, will stimulate a general discussion and research in this theoretically fascinating and practically needed area.
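The connection with the fundamental identity for the EM algorithm mentioned above can be illustrated in a textbook setting (this toy example is not from the paper): for n i.i.d. N(theta, 1) draws of which only m are observed completely at random, the large-sample fraction of missing information is 1 − I_obs/I_com = 1 − m/n, and the classical EM theory says this fraction equals the linear convergence rate of the EM iterates. A minimal sketch, assuming unit variance and MCAR missingness:

```python
# Toy illustration (not from the paper): fraction of missing information
# for estimating a normal mean, and its match with the EM convergence rate.
import random

random.seed(0)
n, m = 100, 60                                  # n planned draws, m observed
data = [random.gauss(2.0, 1.0) for _ in range(m)]
xbar = sum(data) / m                            # MLE from the observed data

# Fisher-information ratio: I_obs = m, I_com = n (unit variance),
# so the fraction of missing information is 1 - m/n.
frac_missing = 1 - m / n                        # = 0.4 here

# EM for the mean: the E-step imputes each missing value with the current
# theta, the M-step averages, giving theta' = (m/n)*xbar + (1 - m/n)*theta.
theta = 0.0
errors = []
for _ in range(5):
    theta = (m / n) * xbar + (1 - m / n) * theta
    errors.append(abs(theta - xbar))

# Successive error ratios reproduce the fraction of missing information.
rates = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
print(frac_missing, rates)                      # rates are all ~0.4
```

In this linear-Gaussian case the rate is exact at every iteration; in general models it holds only near convergence, where the EM map is approximately linear.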
First available in Project Euclid: 28 January 2009
Keywords: EM algorithm; entropy; Fisher information; genetic linkage studies; haplotype-based association studies; noninformative prior; Kullback–Leibler information; relative information; Cox regression; partial likelihood
Nicolae, Dan L.; Meng, Xiao-Li; Kong, Augustine. Quantifying the Fraction of Missing Information for Hypothesis Testing in Statistical and Genetic Studies. Statist. Sci. 23 (2008), no. 3, 287--312. doi:10.1214/07-STS244. https://projecteuclid.org/euclid.ss/1233153057