The Annals of Statistics

Convergence Rate of Sieve Estimates

Xiaotong Shen and Wing Hung Wong



In this paper, we develop a general theory for the convergence rate of sieve estimates: maximum likelihood estimates (MLEs) and related estimates obtained by optimizing certain empirical criteria over general parameter spaces. In many cases, especially when the parameter space is infinite dimensional, maximization over the whole parameter space is undesirable. In such cases, one has to perform maximization over an approximating space (sieve) of the original parameter space and allow the size of the approximating space to grow as the sample size increases. This method is called the method of sieves. In the case of maximum likelihood estimation, an MLE based on a sieve is called a sieve MLE. We found that the convergence rate of a sieve estimate is governed by (a) the local expected values, variances and $L_2$ entropy of the criterion differences and (b) the approximation error of the sieve. A robust nonparametric regression problem, a mixture problem and a nonparametric regression problem are discussed as illustrations of the theory. We also found that when the underlying space is too large, the estimate based on optimizing over the whole parameter space may not achieve the best possible rate of convergence, whereas the sieve estimate typically does not suffer from this difficulty.
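As a concrete illustration of the method of sieves described above, the following minimal sketch fits a nonparametric regression function by least squares over a polynomial sieve whose dimension grows with the sample size. The polynomial basis, the simulated data and the growth rate `n**(1/3)` are illustrative assumptions, not the constructions analyzed in the paper.

```python
import numpy as np

def sieve_regression(x, y, dim):
    """Least-squares fit over a polynomial sieve of dimension `dim`.

    The sieve is the span of {1, x, ..., x^(dim-1)}; the estimate
    minimizes the empirical squared-error criterion over this
    finite-dimensional approximating space rather than over the
    full (infinite-dimensional) space of regression functions.
    """
    # Design matrix of the sieve basis evaluated at the data points.
    X = np.vander(x, N=dim, increasing=True)
    # Empirical criterion minimizer: ordinary least squares.
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda t: np.vander(np.atleast_1d(t), N=dim, increasing=True) @ coef

# Simulated data: y = sin(2*pi*x) + noise.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=n)

# Let the sieve dimension grow slowly with the sample size n
# (the rate n^(1/3) here is purely illustrative).
dim = max(2, int(np.ceil(n ** (1 / 3))))
fhat = sieve_regression(x, y, dim)
mse = np.mean((fhat(x) - y) ** 2)
```

Allowing `dim` to grow with `n` balances the two quantities the theory identifies: a larger sieve reduces the approximation error, while a smaller sieve keeps the entropy of the criterion differences, and hence the stochastic error, under control.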

Article information

Ann. Statist., Volume 22, Number 2 (1994), 580-615.

First available in Project Euclid: 11 April 2007


Primary: 62A10
Secondary: 62F12: Asymptotic properties of estimators 62G20: Asymptotic properties

Keywords: Convergence rate; maximum likelihood and related estimates; method of sieves; metric entropy


Shen, Xiaotong; Wong, Wing Hung. Convergence Rate of Sieve Estimates. Ann. Statist. 22 (1994), no. 2, 580--615. doi:10.1214/aos/1176325486.
