Abstract
This paper is concerned with estimating a predictive density under integrated absolute error ($L_{1}$) loss. Based on a spherically symmetric observable $X\sim p_{X}(\Vert x-\mu\Vert^{2})$, $x,\mu \in \mathbb{R}^{d}$, we seek to estimate the (unimodal) density of $Y\sim q_{Y}(\Vert y-\mu \Vert^{2})$, $y\in \mathbb{R}^{d}$. We focus on the benchmark (and maximum likelihood for unimodal $p_{X}$) plug-in density estimator $q_{Y}(\Vert y-X\Vert^{2})$ and, for $d\geq 4$, we establish its inadmissibility, as well as provide plug-in density improvements, as measured by the frequentist risk taken with respect to $X$. Sharper results are obtained for the subclass of scale mixtures of normal distributions, which includes the normal case. The findings rely on the duality between the predictive density estimation problem and a point estimation problem of estimating $\mu$ under a loss which is a concave function of $\Vert \hat{\mu}-\mu\Vert^{2}$, on Stein estimation results and techniques applicable to such losses, and on further properties specific to scale mixtures of normal distributions. Finally, (i) we address univariate implications for cases where there exist parametric restrictions on $\mu$, and (ii) we show quite generally for logconcave $q_{Y}$ that improvements on the benchmark mle can always be found among the scale expanded predictive densities $\frac{1}{c}q_{Y}(\frac{(y-x)^{2}}{c^{2}})$, with $c-1$ positive but not too large.
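As an illustrative sketch only (not part of the paper), the following Python snippet approximates the $L_{1}$ risk in the univariate normal case $p_{X}=q_{Y}=N(\cdot,1)$ by Monte Carlo over $X$ combined with numerical integration over $y$: the value $c=1$ corresponds to the plug-in benchmark $N(X,1)$, while modest values $c>1$ give the scale-expanded candidates of point (ii). The names (mu, xs, l1_loss), the number of replications, and the integration bounds are hypothetical choices made for illustration.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

rng = np.random.default_rng(0)
mu = 0.0                        # true location; the risk is location invariant
xs = rng.normal(mu, 1.0, 500)   # Monte Carlo draws of X ~ N(mu, 1)

def l1_loss(x, c):
    # Integrated absolute error between the scale-expanded predictive
    # density N(x, c^2) and the true density N(mu, 1).
    integrand = lambda y: abs(norm.pdf(y, loc=x, scale=c)
                              - norm.pdf(y, loc=mu, scale=1.0))
    # Bounds are wide enough that both densities are negligible outside.
    return quad(integrand, mu - 12.0, mu + 12.0, limit=200)[0]

for c in (1.0, 1.1, 1.2, 1.4):
    risk = np.mean([l1_loss(x, c) for x in xs])
    print(f"c = {c:.1f}: Monte Carlo L1 risk approx {risk:.3f}")
```

Under these assumptions, comparing the printed risk estimates across $c$ indicates whether a small amount of scale expansion ($c-1$ positive but not too large) lowers the estimated $L_{1}$ risk relative to the plug-in benchmark, in the spirit of point (ii).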
Citation
Tatsuya Kubokawa, Éric Marchand, William E. Strawderman. "On predictive density estimation for location families under integrated absolute error loss." Bernoulli 23 (4B), 3197-3212, November 2017. https://doi.org/10.3150/16-BEJ842