Abstract
In the framework of an abstract statistical model, we discuss how to use the solution of one estimation problem (Problem A) in order to construct an estimator in another, completely different, Problem B. By a solution of Problem A we understand a data-driven selection from a given family of estimators $\mathbf{A}(\mathfrak{H})=\{\widehat{A}_{\mathfrak{h}},\mathfrak{h}\in\mathfrak{H}\}$ together with a so-called oracle inequality established for the selected estimator. If $\hat{\mathfrak{h}}\in\mathfrak{H}$ is the selected parameter and $\mathbf{B}(\mathfrak{H})=\{\widehat{B}_{\mathfrak{h}},\mathfrak{h}\in\mathfrak{H}\}$ is a collection of estimators built for Problem B, we suggest using the estimator $\widehat{B}_{\hat{\mathfrak{h}}}$. We present a very general selection rule leading to the selector $\hat{\mathfrak{h}}$ and find conditions under which the estimator $\widehat{B}_{\hat{\mathfrak{h}}}$ is reasonable. Our approach is illustrated by several examples related to adaptive estimation.
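The scheme described above amounts to running a data-driven selection rule on the family $\mathbf{A}(\mathfrak{H})$ in Problem A and then reporting the estimator with the same index from the family $\mathbf{B}(\mathfrak{H})$ in Problem B. The Python sketch below is only a schematic illustration of this plug-in idea under assumed ingredients: the kernel-density family, the quadratic-functional family and the Lepski-type comparison score are hypothetical placeholders, not the paper's selection rule or its oracle inequality.

```python
import numpy as np


def select_index(estimates_A, criterion):
    """Data-driven selection over a family {A_h : h in H}.

    `criterion` is a user-supplied score (hypothetical here) whose
    minimiser over the index set plays the role of the selector h-hat.
    """
    return min(estimates_A, key=lambda h: criterion(h, estimates_A))


def plug_in_estimate(estimates_A, estimates_B, criterion):
    """Plug-in scheme: select h-hat in Problem A, report B_{h-hat} in Problem B."""
    h_hat = select_index(estimates_A, criterion)
    return estimates_B[h_hat]


# Toy illustration (purely illustrative data, families and score).
rng = np.random.default_rng(0)
data = rng.normal(size=500)
grid = np.linspace(-4.0, 4.0, 201)
bandwidths = [0.05, 0.1, 0.2, 0.4, 0.8]


def kde(h):
    """Gaussian kernel density estimate on `grid` with bandwidth h."""
    z = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))


# Problem A: estimate the density on a grid, one estimator per bandwidth.
estimates_A = {h: kde(h) for h in bandwidths}
# Problem B: estimate the quadratic functional \int f^2, using the same indices.
step = grid[1] - grid[0]
estimates_B = {h: (estimates_A[h] ** 2).sum() * step for h in bandwidths}


def criterion(h, fam):
    """Rough Lepski-type comparison score (a stand-in, not the paper's rule):
    penalised maximal deviation of A_h from the less-smoothed estimates."""
    n = len(data)
    return max(
        np.max(np.abs(fam[h] - fam[g])) - 1.0 / np.sqrt(n * g)
        for g in fam
        if g <= h
    ) + 1.0 / np.sqrt(n * h)


print(plug_in_estimate(estimates_A, estimates_B, criterion))
```

The point of the sketch is structural: whatever criterion defines the selector $\hat{\mathfrak{h}}$ in Problem A, the estimator reported for Problem B is simply the member of the second family carrying that same index.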
Citation
O.V. Lepski. "A new approach to estimator selection." Bernoulli 24(4A), 2776–2810, November 2018. https://doi.org/10.3150/17-BEJ945