The Annals of Statistics
Ann. Statist., Volume 32, Number 4 (2004), 1367-1433.
Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory
Peter D. Grünwald and A. Philip Dawid
Abstract
We describe and develop a close relationship between two problems that have customarily been regarded as distinct: that of maximizing entropy, and that of minimizing worst-case expected loss. Using a formulation grounded in the equilibrium theory of zero-sum games between Decision Maker and Nature, these two problems are shown to be dual to each other, the solution to each providing that to the other. Although Topsøe described this connection for the Shannon entropy over 20 years ago, it does not appear to be widely known even in that important special case.
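To make the duality concrete in the Shannon case just mentioned (a sketch in our own notation, not the paper's), take the logarithmic score L(x, q) = -log q(x) as the loss. For fixed P, the expected loss E_P[-log q(X)] is minimized over distributions q by q = P, with minimum value the Shannon entropy H(P). Writing Γ for the set of distributions available to Nature, the duality states that, under suitable regularity conditions,

\[
\sup_{P \in \Gamma} H(P)
  \;=\; \sup_{P \in \Gamma} \inf_{q} \mathbb{E}_P[-\log q(X)]
  \;=\; \inf_{q} \sup_{P \in \Gamma} \mathbb{E}_P[-\log q(X)],
\]

so the maximum-entropy distribution on Γ and the robust (worst-case optimal) predictive distribution coincide at the saddle-point of the game.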
We here generalize this theory to apply to arbitrary decision problems and loss functions. We indicate how an appropriate generalized definition of entropy can be associated with such a problem, and we show that, subject to certain regularity conditions, the above-mentioned duality continues to apply in this extended context. This simultaneously provides a possible rationale for maximizing entropy and a tool for finding robust Bayes acts. We also describe the essential identity between the problem of maximizing entropy and that of minimizing a related discrepancy or divergence between distributions. This leads to an extension, to arbitrary discrepancies, of a well-known minimax theorem for the case of Kullback–Leibler divergence (the “redundancy-capacity theorem” of information theory).
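In outline, the generalized quantities work as follows (notation ours): given a loss function L(x, a), the generalized entropy of a distribution P is its minimum achievable expected loss, and the associated discrepancy is the regret incurred by acting Bayes-optimally against Q when in fact X ~ P:

\[
H(P) := \inf_{a \in \mathcal{A}} \mathbb{E}_P[L(X, a)],
\qquad
D(P, Q) := \mathbb{E}_P[L(X, a_Q)] - H(P),
\]

where a_Q denotes a Bayes act against Q. Under the logarithmic score this recovers Shannon entropy and Kullback–Leibler divergence; under the Brier score it gives H(P) = 1 - Σ_x P(x)^2 and the squared Euclidean distance between the probability vectors.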
For the important case of families of distributions having certain mean values specified, we develop simple sufficient conditions and methods for identifying the desired solutions. We use this theory to introduce a new concept of “generalized exponential family” linked to the specific decision problem under consideration, and we demonstrate that this shares many of the properties of standard exponential families.
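As a small numerical illustration of mean-value constraints, here is a minimal sketch restricted to the classical Shannon/log-loss case; the support points and target mean below are invented for illustration. The maximum-entropy distribution with a prescribed mean is, by the standard Lagrangian argument, the exponential family member with p_i proportional to exp(lam * x_i), and the multiplier lam can be found by one-dimensional root-finding:

    import numpy as np
    from scipy.optimize import brentq

    # Minimal sketch: maximum entropy on a finite support subject to the
    # mean-value constraint E_p[X] = mu.  The solution is the exponential
    # family p_i proportional to exp(lam * x_i); we solve for lam numerically.
    # Support points and mu are assumed values for illustration only.

    x = np.array([0.0, 1.0, 2.0, 3.0])   # support of X (assumed)
    mu = 1.2                              # prescribed mean (assumed)

    def tilted(lam):
        """Exponential tilting of the uniform distribution on x."""
        w = np.exp(lam * x)
        return w / w.sum()

    # The mean of the tilted distribution is increasing in lam
    # (lam = 0 gives the uniform distribution, mean 1.5 here),
    # so a bracketing root-finder pins down the multiplier.
    lam = brentq(lambda l: tilted(l) @ x - mu, -50.0, 50.0)
    p = tilted(lam)

    print("lambda:", lam)
    print("maxent p:", p)
    print("mean:", p @ x)                     # matches mu
    print("entropy:", -(p * np.log(p)).sum())

The resulting p is a member of the (here, standard) exponential family generated by the constraint statistic; the paper's "generalized exponential family" plays the analogous role when the loss is not the logarithmic score.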
Finally, we show that the existence of an equilibrium in our game can be rephrased in terms of a “Pythagorean property” of the related divergence, thus generalizing previously announced results for Kullback–Leibler and Bregman divergences.
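For orientation, the previously known Kullback–Leibler instance of the Pythagorean property reads: if Γ is convex and P* minimizes KL(P ∥ Q) over P ∈ Γ, then for every P ∈ Γ,

\[
\mathrm{KL}(P \,\|\, Q) \;\ge\; \mathrm{KL}(P \,\|\, P^{*}) + \mathrm{KL}(P^{*} \,\|\, Q),
\]

the divergence behaving like a squared distance, with P* the "projection" of Q onto Γ. The paper's result recasts the existence of an equilibrium in the game as the analogous inequality for the generalized divergence D sketched above.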
Article information
Source
Ann. Statist. Volume 32, Number 4 (2004), 1367-1433.
Dates
First available in Project Euclid: 4 August 2004
Permanent link to this document
http://projecteuclid.org/euclid.aos/1091626173
Digital Object Identifier
doi:10.1214/009053604000000553
Mathematical Reviews number (MathSciNet)
MR2089128
Zentralblatt MATH identifier
1048.62008
Subjects
Primary: 62C20: Minimax procedures
Secondary: 94A17: Measures of information, entropy
Keywords
Additive model; Bayes act; Bregman divergence; Brier score; convexity; duality; equalizer rule; exponential family; Gamma-minimax; generalized exponential family; Kullback–Leibler divergence; logarithmic score; maximin; mean-value constraints; minimax; mutual information; Pythagorean property; redundancy-capacity theorem; relative entropy; saddle-point; scoring rule; specific entropy; uncertainty function; zero–one loss
Citation
Grünwald, Peter D.; Dawid, A. Philip. Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory. Ann. Statist. 32 (2004), no. 4, 1367-1433. doi:10.1214/009053604000000553. http://projecteuclid.org/euclid.aos/1091626173.

