Open Access
August 2008
Quantifying the Fraction of Missing Information for Hypothesis Testing in Statistical and Genetic Studies
Dan L. Nicolae, Xiao-Li Meng, Augustine Kong
Statist. Sci. 23(3): 287-312 (August 2008). DOI: 10.1214/07-STS244

Abstract

Many practical studies rely on hypothesis testing procedures applied to data sets with missing information. An important part of the analysis is to determine the impact of the missing data on the performance of the test, and this can be done by properly quantifying the relative (to complete data) amount of available information. The problem is directly motivated by applications to studies, such as linkage analyses and haplotype-based association projects, designed to identify genetic contributions to complex diseases. In genetic studies the relative information measures are needed for experimental design, technology comparison, interpretation of the data, and for understanding the behavior of some of the inference tools. The central difficulties in constructing such information measures arise from the multiple, and sometimes conflicting, aims in practice. For large samples, we show that a satisfactory, likelihood-based general solution exists by using appropriate forms of the relative Kullback–Leibler information, and that the proposed measures are computationally inexpensive given the maximized likelihoods with the observed data. Two measures are introduced, under the null and alternative hypotheses, respectively. We exemplify the measures on data from mapping studies of inflammatory bowel disease and diabetes. For small-sample problems, which appear rather frequently in practice and sometimes in disguised forms (e.g., measuring individual contributions to a large study), the robust Bayesian approach holds great promise, though the choice of a general-purpose “default prior” is a very challenging problem. We also report several intriguing connections encountered in our investigation, such as the connection with the fundamental identity for the EM algorithm, the connection with the second CR (Chapman–Robbins) lower information bound, the connection with entropy, and connections between likelihood ratios and Bayes factors. We hope that these seemingly unrelated connections, as well as our specific proposals, will stimulate general discussion and research in this theoretically fascinating and practically needed area.
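
To fix ideas, here is a hedged sketch (in the editor's notation, not necessarily the paper's exact definitions) of the kind of relative Kullback–Leibler measure the abstract alludes to: it compares the observed-data and complete-data expected log-likelihood ratios between the null value $\theta_0$ and an alternative $\theta$,

\[
  \mathrm{RI}_{0}(\theta) \;=\;
  \frac{E_{\theta_0}\!\left[\log\{p(Y_{\mathrm{obs}}\mid\theta_0)/p(Y_{\mathrm{obs}}\mid\theta)\}\right]}
       {E_{\theta_0}\!\left[\log\{p(Y_{\mathrm{com}}\mid\theta_0)/p(Y_{\mathrm{com}}\mid\theta)\}\right]},
  \qquad
  \mathrm{RI}_{A}(\theta) \;=\;
  \frac{E_{\theta}\!\left[\log\{p(Y_{\mathrm{obs}}\mid\theta)/p(Y_{\mathrm{obs}}\mid\theta_0)\}\right]}
       {E_{\theta}\!\left[\log\{p(Y_{\mathrm{com}}\mid\theta)/p(Y_{\mathrm{com}}\mid\theta_0)\}\right]},
\]

where $Y_{\mathrm{obs}}$ and $Y_{\mathrm{com}}$ denote the observed and complete data, the two ratios correspond to measures taken under the null and the alternative, and $1-\mathrm{RI}$ gives the associated fraction of missing information. The symbols $Y_{\mathrm{obs}}$, $Y_{\mathrm{com}}$, $\mathrm{RI}_0$ and $\mathrm{RI}_A$ are placeholders introduced here for illustration only.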

Citation


Dan L. Nicolae, Xiao-Li Meng, Augustine Kong. "Quantifying the Fraction of Missing Information for Hypothesis Testing in Statistical and Genetic Studies." Statist. Sci. 23 (3): 287–312, August 2008. https://doi.org/10.1214/07-STS244

Information

Published: August 2008
First available in Project Euclid: 28 January 2009

zbMATH: 1329.62092
MathSciNet: MR2483902
Digital Object Identifier: 10.1214/07-STS244

Keywords: Cox regression, EM algorithm, entropy, Fisher information, genetic linkage studies, haplotype-based association studies, Kullback–Leibler information, noninformative prior, partial likelihood, relative information

Rights: Copyright © 2008 Institute of Mathematical Statistics
