Open Access
Objective Priors: An Introduction for Frequentists
Malay Ghosh
Statist. Sci. 26(2): 187-202 (May 2011). DOI: 10.1214/10-STS338

Abstract

Bayesian methods are increasingly applied these days in the theory and practice of statistics. Any Bayesian inference depends on a likelihood and a prior. Ideally one would like to elicit a prior from related sources of information or past data. In the absence of such information, however, Bayesian methods need to rely on some “objective” or “default” priors, and the resulting posterior inference can still be quite valuable.

Not surprisingly, over the years, the catalog of objective priors has become prohibitively large, and one has to set some specific criteria for the selection of such priors. Our aim is to review some of these criteria, compare their performance, and illustrate them with some simple examples. While for very large sample sizes it possibly does not matter what objective prior one uses, the selection of such a prior does influence inference for small or moderate samples. For regular models where asymptotic normality holds, Jeffreys’ general rule prior, the positive square root of the determinant of the Fisher information matrix, enjoys many optimality properties in the absence of nuisance parameters. In the presence of nuisance parameters, however, many other priors emerge as optimal depending on the criterion selected. One new feature of this article is that a prior different from Jeffreys’ is shown to be optimal under the chi-square divergence criterion even in the absence of nuisance parameters. This prior is also invariant under one-to-one reparameterization.
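The abstract's construction of Jeffreys' prior as the square root of the Fisher information can be illustrated concretely. The following is a minimal sketch (not from the article itself) for a single Bernoulli(θ) observation, where the Fisher information works out to I(θ) = 1/(θ(1−θ)), so Jeffreys' prior is proportional to θ^(−1/2)(1−θ)^(−1/2), the kernel of a Beta(1/2, 1/2) distribution:

```python
import math

def fisher_info_bernoulli(theta):
    # Fisher information for one Bernoulli(theta) observation:
    # I(theta) = E[(d/dtheta log p(X | theta))^2].
    # The score is 1/theta when X = 1 and -1/(1 - theta) when X = 0,
    # so the expectation is taken over those two outcomes directly.
    score_success = 1.0 / theta
    score_failure = -1.0 / (1.0 - theta)
    return theta * score_success**2 + (1.0 - theta) * score_failure**2

def jeffreys_prior_unnormalized(theta):
    # Jeffreys' prior: the positive square root of the Fisher information
    # (in one dimension the determinant is the information itself).
    return math.sqrt(fisher_info_bernoulli(theta))

# Check that the result matches the Beta(1/2, 1/2) kernel
# theta^{-1/2} (1 - theta)^{-1/2} at a few interior points.
for theta in (0.1, 0.3, 0.5):
    beta_kernel = theta**-0.5 * (1.0 - theta)**-0.5
    assert abs(jeffreys_prior_unnormalized(theta) - beta_kernel) < 1e-12
```

The Beta(1/2, 1/2) result also previews the abstract's point about reparameterization invariance: transforming θ and recomputing the information-based prior gives the same distribution back under the change of variables.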

Citation


Malay Ghosh. "Objective Priors: An Introduction for Frequentists." Statist. Sci. 26(2): 187–202, May 2011. https://doi.org/10.1214/10-STS338

Information

Published: May 2011
First available in Project Euclid: 1 August 2011

zbMATH: 1246.62045
MathSciNet: MR2858380
Digital Object Identifier: 10.1214/10-STS338

Keywords: asymptotic expansion, divergence criterion, first-order probability matching, Jeffreys’ prior, left Haar priors, location family, location–scale family, multiparameter, orthogonality, reference priors, right Haar priors, scale family, second-order probability matching, shrinkage argument

Rights: Copyright © 2011 Institute of Mathematical Statistics
