Open Access
On the Sample Information About Parameter and Prediction
Nader Ebrahimi, Ehsan S. Soofi, Refik Soyer
Statist. Sci. 25(3): 348-367 (August 2010). DOI: 10.1214/10-STS329

Abstract

The Bayesian measure of sample information about the parameter, known as Lindley's measure, is widely used in various problems such as developing prior distributions, models for the likelihood function, and optimal designs. The predictive information is defined similarly and used for model selection and optimal designs, though to a lesser extent. The parameter and predictive information measures are proper utility functions and have also been used in combination. Yet the relationship between the two measures, and the effects of conditional dependence between the observable quantities on the Bayesian information measures, remain unexplored. We address both issues. The relationship between the two information measures is explored through the information provided by the sample about the parameter and prediction jointly. The role of dependence is explored along with the interplay between the information measures, the prior and the sampling design. For the conditionally independent sequence of observable quantities, decompositions of the joint information characterize Lindley's measure as the sample information about the parameter and prediction jointly, and the predictive information as part of it. For the conditionally dependent case, the joint information about the parameter and prediction exceeds Lindley's measure by an amount due to the dependence. More specific results are shown for the normal linear models and a broad subfamily of the exponential family. Conditionally independent samples provide relatively little information for prediction, and the gap between the parameter and predictive information measures grows rapidly with the sample size. Three dependence structures are studied: the intraclass (IC) and serially correlated (SC) normal models, and order statistics. For IC and SC models, the information about the mean parameter decreases and the predictive information increases with the correlation, but the joint information is not monotone and has a unique minimum. Compensating for the loss of parameter information due to dependence requires larger samples. For the order statistics, the joint information exceeds Lindley's measure by an amount that does not depend on the prior or the model for the data; this excess is not monotone in the sample size and has a unique maximum.
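
The contrast between the parameter and predictive information measures can be made concrete in the conjugate normal model, where both have closed forms. The following Python sketch is an illustration, not code from the paper: the expressions are the standard conjugate-normal formulas, and all function names, arguments, and defaults (lindley_info, predictive_info, sigma2, tau2, rho) are illustrative choices rather than the authors' notation. It shows Lindley's measure growing without bound in the sample size while the predictive information stays bounded (the widening gap noted in the abstract), and, under an assumed equicorrelated IC model, the parameter information decreasing with the correlation.

import math

# Conjugate normal model: X_1, ..., X_n | theta i.i.d. N(theta, sigma2),
# prior theta ~ N(mu0, tau2), with sigma2 and tau2 known.

def lindley_info(n, sigma2=1.0, tau2=1.0):
    """Lindley's measure, the mutual information I(theta; X_1..X_n).
    Posterior variance is v_n = (1/tau2 + n/sigma2)^(-1), so
    I = 0.5 * log(tau2 / v_n) = 0.5 * log(1 + n * tau2 / sigma2)."""
    return 0.5 * math.log(1.0 + n * tau2 / sigma2)

def predictive_info(n, sigma2=1.0, tau2=1.0):
    """Predictive information I(Y; X_1..X_n) about a future draw Y = X_{n+1}.
    Marginally Var(Y) = tau2 + sigma2; given the sample, Var(Y | X) = v_n + sigma2."""
    v_n = 1.0 / (1.0 / tau2 + n / sigma2)
    return 0.5 * math.log((tau2 + sigma2) / (v_n + sigma2))

def lindley_info_ic(n, rho, sigma2=1.0, tau2=1.0):
    """Lindley's measure under an intraclass (IC) normal model with
    equicorrelation rho: the sample mean is sufficient for theta and has
    variance sigma2 * (1 + (n - 1) * rho) / n, which grows with rho."""
    return 0.5 * math.log(1.0 + n * tau2 / (sigma2 * (1.0 + (n - 1) * rho)))

# Parameter information grows like 0.5 * log(n), while predictive information
# is bounded above by 0.5 * log(1 + tau2 / sigma2): the gap widens with n.
for n in (1, 10, 100, 1000):
    print(f"n={n:4d}  I(theta;X)={lindley_info(n):.3f}  "
          f"I(Y;X)={predictive_info(n):.3f}")

# Under the IC model, information about the mean decreases as rho increases.
for rho in (0.0, 0.3, 0.6, 0.9):
    print(f"rho={rho:.1f}  I(theta;X) at n=10: {lindley_info_ic(10, rho):.3f}")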

Citation


Nader Ebrahimi, Ehsan S. Soofi, Refik Soyer. "On the Sample Information About Parameter and Prediction." Statist. Sci. 25(3): 348-367, August 2010. https://doi.org/10.1214/10-STS329

Information

Published: August 2010
First available in Project Euclid: 4 January 2011

zbMATH: 1329.62046
MathSciNet: MR2791672
Digital Object Identifier: 10.1214/10-STS329

Keywords: Bayesian predictive distribution, entropy, intraclass correlation, mutual information, optimal design, order statistics, reference prior, serial correlation

Rights: Copyright © 2010 Institute of Mathematical Statistics
