Open Access
March 2007

On the measure of the information in a statistical experiment
Josep Ginebra
Bayesian Anal. 2(1): 167-211 (March 2007). DOI: 10.1214/07-BA207

Abstract

Setting aside experimental costs, the choice of an experiment is usually formulated as the maximization of a measure of information, often presented as a design optimality criterion. However, there does not seem to be universal agreement on what objects qualify as a valid measure of the information in an experiment. In this article we explicitly state a minimal set of requirements that any such measure must satisfy. Under that framework, measuring the information in an experiment is equivalent to measuring the variability of its likelihood ratio statistics or, equivalently, to measuring the variability of its posterior-to-prior ratio statistics and the variability of the distribution of the posterior distributions it yields. The larger that variability, the more peaked the likelihood functions and posterior distributions the experiment tends to yield, and the more informative the experiment is. By going through various measures of variability, this paper uncovers the unifying link underlying well-known information measures as well as information measures that are not yet recognized as such.

The measure of the information in an experiment is then related to the measure of the information in a given observation from it. In this framework, the choice of experiment based on statistical merit alone is posed as a decision problem in which the reward is a likelihood ratio or a posterior distribution, the utility function is convex, the utility of the reward is the information observed, and the expected utility is the information in the experiment. Finally, the information in an experiment is linked to the information and the uncertainty in a probability distribution, and we find that the measure of the information in an experiment is not always interpretable as the uncertainty in the prior minus the expected uncertainty in the posterior.
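The abstract's decision-theoretic formulation can be sketched numerically: the information in an experiment is the expected value of a convex utility of its likelihood ratio statistic. The following minimal sketch (the Binomial experiment, the parameter values 0.3 and 0.7, and all function names are illustrative assumptions, not taken from the article) uses two classical convex utilities, u(x) = x log x, which recovers the Kullback-Leibler divergence, and u(x) = (x − 1)², which recovers the chi-square divergence; both grow with the number of trials, i.e., the larger experiment is more informative:

```python
import math

def binom_pmf(n, k, p):
    """Exact Binomial(n, p) probability mass at k."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def expected_utility(n, u, theta0=0.3, theta1=0.7):
    """Information in the Binomial(n, theta) experiment for distinguishing
    theta0 from theta1, measured as E_{theta0}[u(LR)], where
    LR = p_{theta1}(x) / p_{theta0}(x) and u is convex with u(1) = 0."""
    total = 0.0
    for x in range(n + 1):
        p0 = binom_pmf(n, x, theta0)
        p1 = binom_pmf(n, x, theta1)
        total += p0 * u(p1 / p0)
    return total

kl = lambda r: r * math.log(r)    # u(x) = x log x   -> KL(p_theta1 || p_theta0)
chi2 = lambda r: (r - 1) ** 2     # u(x) = (x - 1)^2 -> chi-square divergence

for n in (1, 10):
    print(n, expected_utility(n, kl), expected_utility(n, chi2))
```

Both measures order the experiments the same way: ten Bernoulli trials are more informative than one, and the KL-based measure is additive in the number of trials, as the abstract's "larger variability of the likelihood ratio" reading suggests.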

Citation

Download Citation

Josep Ginebra. "On the measure of the information in a statistical experiment." Bayesian Anal. 2 (1) 167 - 211, March 2007. https://doi.org/10.1214/07-BA207

Information

Published: March 2007
First available in Project Euclid: 22 June 2012

zbMATH: 1331.62056
MathSciNet: MR2289927
Digital Object Identifier: 10.1214/07-BA207


Keywords: convex ordering, design of experiments, divergence measure, Hellinger transform, likelihood ratio, location parameter, measure of association, measure of diversity, measure of surprise, mutual information, optimal design, posterior to prior ratio, reference prior, stochastic ordering, sufficiency, uncertainty, utility, value of information

Rights: Copyright © 2007 International Society for Bayesian Analysis

Vol. 2 • No. 1 • March 2007