Abstract
Consider an estimate $\theta^*$ of a parameter $\theta$ based on repeated observations from a family of densities $f_\theta$ evaluated by the Kullback–Leibler loss function $K(\theta, \theta^*) = \int \log(f_\theta/f_{\theta^*})f_\theta$. The maximum likelihood prior density, if it exists, is the density for which the corresponding Bayes estimate is asymptotically negligibly different from the maximum likelihood estimate. The Bayes estimate corresponding to the maximum likelihood prior is identical to maximum likelihood for exponential families of densities. In predicting the next observation, the maximum likelihood prior produces a predictive distribution that is asymptotically at least as close, in expected truncated Kullback–Leibler distance, to the true density as the density indexed by the maximum likelihood estimate. It frequently happens in more than one dimension that maximum likelihood corresponds to no prior density, and in that case the maximum likelihood estimate is asymptotically inadmissible and may be improved upon by using the estimate corresponding to a least favorable prior. As in Brown, the asymptotic risk for an arbitrary estimate “near” maximum likelihood is given by an expression involving derivatives of the estimator and of the information matrix. Admissibility questions for these “near ML” estimates are determined by the existence of solutions to certain differential equations.
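As a concrete illustration of the loss function defined above (not part of the paper): a minimal Python sketch, assuming a normal location family $f_\theta = N(\theta, 1)$, that evaluates $K(\theta, \theta^*) = \int \log(f_\theta/f_{\theta^*})f_\theta$ by numerical integration and checks it against the known closed form $(\theta - \theta^*)^2/2$ for this family.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def kl_loss(theta, theta_star):
    """K(theta, theta*) = E_theta[log f_theta(X) - log f_theta*(X)] for the N(theta, 1) family."""
    integrand = lambda x: (norm.logpdf(x, theta) - norm.logpdf(x, theta_star)) * norm.pdf(x, theta)
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

theta, theta_star = 0.3, 1.1
print(kl_loss(theta, theta_star))        # numerical value of the integral
print((theta - theta_star) ** 2 / 2.0)   # closed form for the normal location family
```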
Citation
J. A. Hartigan. "The maximum likelihood prior." Ann. Statist. 26(6): 2083–2103, December 1998. https://doi.org/10.1214/aos/1024691462