Abstract
Let X|μ ∼ N_p(μ, v_x I) and Y|μ ∼ N_p(μ, v_y I) be independent p-dimensional multivariate normal vectors with common unknown mean μ. Based on only observing X = x, we consider the problem of obtaining a predictive density p̂(y|x) for Y that is close to p(y|μ) as measured by expected Kullback–Leibler loss. A natural procedure for this problem is the (formal) Bayes predictive density p̂_U(y|x) under the uniform prior π_U(μ) ≡ 1, which is best invariant and minimax. We show that any Bayes predictive density will be minimax if it is obtained by a prior yielding a marginal that is superharmonic or whose square root is superharmonic. This yields wide classes of minimax procedures that dominate p̂_U(y|x), including Bayes predictive densities under superharmonic priors. Fundamental similarities and differences with the parallel theory of estimating a multivariate normal mean under quadratic loss are described.
Citation
Edward I. George, Feng Liang, Xinyi Xu. "Improved minimax predictive densities under Kullback–Leibler loss." Ann. Statist. 34(1): 78–91, February 2006. https://doi.org/10.1214/009053606000000155