Open Access
Convergence of Estimates Under Dimensionality Restrictions
L. LeCam
Ann. Statist. 1(1): 38-53 (January, 1973). DOI: 10.1214/aos/1193342380

Abstract

Consider independent identically distributed observations whose distribution depends on a parameter $\theta$. Measure the distance between two parameter points $\theta_1, \theta_2$ by the Hellinger distance $h(\theta_1, \theta_2)$. Suppose that for $n$ observations there is a good but not perfect test of $\theta_0$ against $\theta_n$. Then $n^{\frac{1}{2}}h(\theta_0, \theta_n)$ stays away from zero and infinity. The usual parametric examples, regular or irregular, also have the property that there are estimates $\hat{\theta}_n$ such that $n^{\frac{1}{2}}h(\hat{\theta}_n, \theta_0)$ stays bounded in probability, so that rates of separation for tests and estimates are essentially the same. The present paper shows that this need not be true in general but is correct under certain metric dimensionality assumptions on the parameter set. It is then shown that these assumptions imply convergence at the required rate of the Bayes estimates or maximum probability estimates.
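The abstract does not spell out the normalization of $h$; a standard convention (sketched here for orientation, not taken from the paper, whose constant may differ) is

$$h^2(\theta_1, \theta_2) = \int \left( \sqrt{dP_{\theta_1}} - \sqrt{dP_{\theta_2}} \right)^2 = 2\left( 1 - \int \sqrt{dP_{\theta_1}\, dP_{\theta_2}} \right).$$

Under this convention, the normal location family $P_\theta = N(\theta, 1)$ illustrates how the $n^{\frac{1}{2}}h$ scaling matches the familiar parametric rate: the Hellinger affinity is $\int \sqrt{dP_{\theta_1}\, dP_{\theta_2}} = \exp\{-(\theta_1 - \theta_2)^2/8\}$, so

$$h^2(\theta_1, \theta_2) = 2\left( 1 - e^{-(\theta_1 - \theta_2)^2/8} \right) \approx \tfrac{1}{4}(\theta_1 - \theta_2)^2 \quad \text{for small } |\theta_1 - \theta_2|,$$

and $n^{\frac{1}{2}}h(\hat{\theta}_n, \theta_0)$ bounded in probability is then equivalent to the usual statement $|\hat{\theta}_n - \theta_0| = O_P(n^{-\frac{1}{2}})$.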

Citation


L. LeCam. "Convergence of Estimates Under Dimensionality Restrictions." Ann. Statist. 1(1): 38-53, January 1973. https://doi.org/10.1214/aos/1193342380

Information

Published: January, 1973
First available in Project Euclid: 25 October 2007

zbMATH: 0255.62006
MathSciNet: MR334381
Digital Object Identifier: 10.1214/aos/1193342380

Keywords: Bayes estimates, maximum probability estimates, rate of convergence

Rights: Copyright © 1973 Institute of Mathematical Statistics
