The Annals of Statistics

Convergence of Estimates Under Dimensionality Restrictions

L. LeCam

Abstract

Consider independent identically distributed observations whose distribution depends on a parameter $\theta$. Measure the distance between two parameter points $\theta_1, \theta_2$ by the Hellinger distance $h(\theta_1, \theta_2)$. Suppose that for $n$ observations there is a good but not perfect test of $\theta_0$ against $\theta_n$. Then $n^{\frac{1}{2}}h(\theta_0, \theta_n)$ stays away from zero and infinity. The usual parametric examples, regular or irregular, also have the property that there are estimates $\hat{\theta}_n$ such that $n^{\frac{1}{2}}h(\hat{\theta}_n, \theta_0)$ stays bounded in probability, so that rates of separation for tests and estimates are essentially the same. The present paper shows that this need not be true in general but is correct under certain metric dimensionality assumptions on the parameter set. It is then shown that these assumptions imply convergence at the required rate of the Bayes estimates or maximum probability estimates.
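For orientation, one common normalization of the Hellinger distance is sketched below (conventions differ by a constant factor across the literature, so this is a standard choice rather than necessarily the paper's exact normalization):

$$h^2(\theta_1, \theta_2) = \tfrac{1}{2} \int \left( \sqrt{f_{\theta_1}} - \sqrt{f_{\theta_2}} \right)^2 d\mu \;=\; 1 - \int \sqrt{f_{\theta_1} f_{\theta_2}}\, d\mu,$$

where $f_\theta$ denotes the density of a single observation with respect to a dominating measure $\mu$. In the regular Gaussian example $f_\theta = N(\theta, 1)$, the affinity is $\int \sqrt{f_{\theta_1} f_{\theta_2}}\, d\mu = \exp\{-(\theta_1 - \theta_2)^2 / 8\}$, so $h(\theta_1, \theta_2)$ is of the same order as $|\theta_1 - \theta_2|$ for nearby parameter points, and the condition that $n^{\frac{1}{2}} h(\theta_0, \theta_n)$ stays away from zero and infinity reduces to the familiar $|\theta_n - \theta_0| \asymp n^{-\frac{1}{2}}$ separation rate.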

Article information

Source
Ann. Statist., Volume 1, Number 1 (1973), 38-53.

Dates
First available in Project Euclid: 25 October 2007

Permanent link to this document
https://projecteuclid.org/euclid.aos/1193342380

Digital Object Identifier
doi:10.1214/aos/1193342380

Mathematical Reviews number (MathSciNet)
MR334381

Zentralblatt MATH identifier
0255.62006

Keywords
Bayes estimates; maximum probability estimates; rate of convergence

Citation

LeCam, L. Convergence of Estimates Under Dimensionality Restrictions. Ann. Statist. 1 (1973), no. 1, 38--53. doi:10.1214/aos/1193342380. https://projecteuclid.org/euclid.aos/1193342380
