The Annals of Statistics

The maximum likelihood prior

J. A. Hartigan

Abstract

Consider an estimate $\theta^*$ of a parameter $\theta$ based on repeated observations from a family of densities $f_\theta$ evaluated by the Kullback–Leibler loss function $K(\theta, \theta^*) = \int \log(f_\theta/f_{\theta^*})f_\theta$. The maximum likelihood prior density, if it exists, is the density for which the corresponding Bayes estimate is asymptotically negligibly different from the maximum likelihood estimate. The Bayes estimate corresponding to the maximum likelihood prior is identical to maximum likelihood for exponential families of densities. In predicting the next observation, the maximum likelihood prior produces a predictive distribution that is asymptotically at least as close, in expected truncated Kullback–Leibler distance, to the true density as the density indexed by the maximum likelihood estimate. It frequently happens in more than one dimension that maximum likelihood corresponds to no prior density, and in that case the maximum likelihood estimate is asymptotically inadmissible and may be improved upon by using the estimate corresponding to a least favorable prior. As in Brown, the asymptotic risk for an arbitrary estimate “near” maximum likelihood is given by an expression involving derivatives of the estimator and of the information matrix. Admissibility questions for these “near ML” estimates are determined by the existence of solutions to certain differential equations.
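
As a rough numerical illustration of the predictive claim (a sketch under assumptions not taken from the paper), consider the normal location model with known unit variance, where the flat prior plays the role of the uninformative prior and truncation of the Kullback–Leibler distance is not needed. The plug-in density indexed by the maximum likelihood estimate is $N(\bar{x}, 1)$, while the Bayes predictive density is $N(\bar{x}, 1 + 1/n)$; the Python snippet below (all names are illustrative) estimates the expected Kullback–Leibler distance of each from the true density and shows the predictive density is closer, consistent with the abstract.

    import numpy as np

    rng = np.random.default_rng(0)

    def kl_normal(mu0, var0, mu1, var1):
        # KL( N(mu0, var0) || N(mu1, var1) )
        return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

    theta, n, reps = 0.0, 10, 200_000
    # Sampling distribution of the MLE (the sample mean) under the true density N(theta, 1).
    xbar = rng.normal(theta, 1.0 / np.sqrt(n), size=reps)

    # Plug-in density N(xbar, 1) vs. Bayes predictive N(xbar, 1 + 1/n) under a flat prior.
    kl_plugin = kl_normal(theta, 1.0, xbar, 1.0).mean()
    kl_bayes = kl_normal(theta, 1.0, xbar, 1.0 + 1.0 / n).mean()

    print(f"E[KL], plug-in MLE density : {kl_plugin:.5f}   (theory: 1/(2n) = {1 / (2 * n):.5f})")
    print(f"E[KL], Bayes predictive    : {kl_bayes:.5f}   (theory: log(1 + 1/n)/2 = {np.log(1 + 1 / n) / 2:.5f})")

Here the expected losses are $1/(2n)$ for the plug-in density and $\tfrac{1}{2}\log(1 + 1/n) \approx 1/(2n) - 1/(4n^2)$ for the predictive density, so the advantage appears only at second order, in line with the asymptotic nature of the result.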

Article information

Source
Ann. Statist., Volume 26, Number 6 (1998), 2083-2103.

Dates
First available in Project Euclid: 21 June 2002

Permanent link to this document
https://projecteuclid.org/euclid.aos/1024691462

Digital Object Identifier
doi:10.1214/aos/1024691462

Mathematical Reviews number (MathSciNet)
MR1700222

Zentralblatt MATH identifier
0927.62023

Subjects
Primary: 62F15
Secondary: 62C15

Keywords
Uninformative priors; maximum likelihood; Kullback–Leibler distance; the Jeffreys prior; asymptotic admissibility

Citation

Hartigan, J. A. The maximum likelihood prior. Ann. Statist. 26 (1998), no. 6, 2083--2103. doi:10.1214/aos/1024691462. https://projecteuclid.org/euclid.aos/1024691462


References

  • AITCHISON, J. 1975. Goodness of prediction. Biometrika 62 547–554.
  • AMARI, S. 1982. Differential geometry of curved exponential families: curvatures and information loss. Ann. Statist. 10 357–387.
  • BERNARDO, J. M. 1979. Reference posterior densities for Bayesian inference (with discussion). J. Roy. Statist. Soc. Ser. B 41 113–147.
  • BERGER, J. O. and BERNARDO, J. M. 1992. On the development of the reference prior method. In Bayesian Statistics 4: Proceedings of the Fourth Valencia International Meeting (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.) 35–60. Clarendon Press, Oxford.
  • BROWN, L. D. 1971. Admissible estimators, recurrent diffusions, and insoluble boundary value problems. Ann. Math. Statist. 42 855–903.
  • BROWN, L. D. 1979. A heuristic method for determining admissibility of estimators with applications. Ann. Statist. 7 960–994.
  • CLARKE, B. and BARRON, A. 1994. Jeffreys' prior is asymptotically least favourable under entropy risk. J. Statist. Plann. Inference 41 37–60.
  • GHOSH, J. K. 1994. Higher Order Asymptotics. IMS, Hayward, CA.
  • GHOSH, J. K. and SUBRAMANYAM, K. 1974. Second-order efficiency of maximum likelihood estimators. Sankhyā Ser. A 36 325–358.
  • HARTIGAN, J. A. 1964. Invariant prior densities. Ann. Math. Statist. 35 836–845.
  • HARTIGAN, J. A. 1965. The asymptotically unbiased density. Ann. Math. Statist. 36 1137–1152.
  • HARTIGAN, J. A. 1983. Bayes Theory. Springer, New York.
  • JEFFREYS, H. 1946. An invariant form of the prior probability in estimation problems. Proc. Roy. Soc. London Ser. A 186 453–461.
  • JEFFREYS, H. 1961. Theory of Probability. Oxford Univ. Press.
  • JOHNSON, R. A. 1967. An asymptotic expansion for posterior distributions. Ann. Math. Statist. 38 1899–1907.
  • KOMAKI, F. 1996. On asymptotic properties of predictive distributions. Biometrika 83 299–313.
  • LEVIT, B. YA. 1982. Minimax estimation and positive solutions of elliptic equations. Theory Probab. Appl. 27 563–586.
  • LEVIT, B. YA. 1983. Second-order availability and positive solutions of the Schrödinger equation. Lecture Notes in Math. 1021 372–385. Springer, Berlin.
  • LEVIT, B. YA. 1985. Second-order asymptotic optimality and positive solutions of Schrödinger's equation. Theory Probab. Appl. 30 333–363.
  • MCCULLAGH, P. 1987. Tensor Methods in Statistics. Chapman and Hall, London.
  • PERKS, W. 1947. Some observations on inverse probability, including a new indifference rule. J. Inst. Actuaries 73 285–334.
  • PFANZAGL, J. and WEFELMEYER, W. 1978. A third-order optimal property of the maximum likelihood estimator. J. Multivariate Anal. 8 1–29.
  • RAO, C. R. 1961. Asymptotic efficiency and limiting information. Proc. Fourth Berkeley Symp. Math. Statist. Probab. 1 531–546. Univ. California Press, Berkeley.
  • STEIN, C. 1956. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. Proc. Third Berkeley Symp. Math. Statist. Probab. 1 197–206. Univ. California Press, Berkeley.
  • STRASSEN, H. 1977. Asymptotic expansions for Bayes procedures. In Recent Developments in Statistics (J. L. Barra, ed.) 9–35. North-Holland, Amsterdam.
  • TAYLOR, M. E. 1997. Partial Differential Equations III. Nonlinear Equations. Springer, New York.
  • WELCH, B. L. and PEERS, H. W. 1963. On formulae for confidence points based on integrals of weighted likelihoods. J. Roy. Statist. Soc. Ser. B 25 318–329.