Bayesian Anal., Volume 2, Number 1 (2007), 167-211.
On the measure of the information in a statistical experiment
Setting aside experimental costs, the choice of an experiment is usually formulated as the maximization of a measure of information, often presented as a design optimality criterion. However, there does not seem to be universal agreement on which objects qualify as valid measures of the information in an experiment. In this article we explicitly state a minimal set of requirements that must be satisfied by all such measures. Under that framework, the measure of the information in an experiment is equivalent to the measure of the variability of its likelihood ratio statistics or, what is the same, to the measure of the variability of its posterior to prior ratio statistics, and to the measure of the variability of the distribution of the posterior distributions that it yields. The larger that variability, the more peaked the likelihood functions and posterior distributions that the experiment tends to yield, and the more informative the experiment is. By going through various measures of variability, this paper uncovers the unifying link underlying well-known information measures as well as information measures that are not yet recognized as such.
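As a concrete sketch of the equivalence described above (the notation and the particular convex functional are mine, not fixed by the abstract): for an experiment with sampling model $p(x \mid \theta)$ and prior $\pi(\theta)$, the posterior to prior ratio coincides with a likelihood ratio, and the expectation of a convex function of that ratio measures its variability.

```latex
% Posterior-to-prior ratio for sampling model p(x | \theta), prior \pi(\theta),
% and prior predictive (marginal) m(x) = \int p(x | \theta)\,\pi(\theta)\,d\theta:
r(\theta, x) \;=\; \frac{\pi(\theta \mid x)}{\pi(\theta)}
             \;=\; \frac{p(x \mid \theta)}{m(x)} .

% Under the product measure \pi(\theta)\,m(x) one has E[r] = 1, so for any
% convex \phi with \phi(1) = 0, Jensen's inequality makes
I_\phi(E) \;=\; \mathrm{E}_{\pi(\theta)\,m(x)}\!\bigl[\, \phi\bigl( r(\theta, x) \bigr) \,\bigr] \;\ge\; 0
% a nonnegative measure of the variability of r, and hence a candidate
% measure of the information in the experiment E.

% The particular choice \phi(t) = t \log t recovers the mutual information
% between \theta and x, i.e. Lindley's measure of the information in E.
```

Other convex choices of $\phi$ yield other divergence-based measures, which is one way the "unifying link" among known information measures can be read.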
The measure of the information in an experiment is then related to the measure of the information in a given observation from it. In this framework, the choice of experiment based on statistical merit alone is posed as a decision problem in which the reward is a likelihood ratio or a posterior distribution, the utility function is convex, the utility of the reward is the information observed, and the expected utility is the information in the experiment. Finally, the information in an experiment is linked to the information in, and the uncertainty of, a probability distribution, and we find that the measure of the information in an experiment is not always interpretable as the uncertainty in the prior minus the expected uncertainty in the posterior.
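For one familiar special case the "prior uncertainty minus expected posterior uncertainty" reading does hold (a sketch under standard definitions, not the paper's general treatment): taking Shannon entropy as the uncertainty measure, Lindley's information decomposes exactly that way.

```latex
% Shannon entropy of the prior:
H(\pi) \;=\; -\int \pi(\theta) \log \pi(\theta)\, d\theta .

% Lindley's measure of the information in E, written as prior uncertainty
% minus expected posterior uncertainty (expectation over the prior predictive):
I(E) \;=\; H(\pi) \;-\; \mathrm{E}_{x}\!\bigl[\, H\bigl(\pi(\cdot \mid x)\bigr) \,\bigr] .
```

The abstract's closing point is that this identity is special: for other measures of information and uncertainty, the analogous decomposition can fail.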
First available in Project Euclid: 22 June 2012
Keywords: convex ordering; design of experiments; divergence measure; Hellinger transform; likelihood ratio; measure of association; measure of diversity; measure of surprise; mutual information; optimal design; posterior to prior ratio; reference prior; location parameter; stochastic ordering; sufficiency; uncertainty; utility; value of information
Ginebra, Josep. On the measure of the information in a statistical experiment. Bayesian Anal. 2 (2007), no. 1, 167--211. doi:10.1214/07-BA207. https://projecteuclid.org/euclid.ba/1340390067