The Annals of Statistics

Estimation of distributions, moments and quantiles in deconvolution problems

Peter Hall and Soumendra N. Lahiri

When using the bootstrap in the presence of measurement error, we must first estimate the target distribution function; we cannot directly resample, since we do not have a sample from the target. These and other considerations motivate the development of estimators of distributions, and of related quantities such as moments and quantiles, in errors-in-variables settings. We show that such estimators have curious and unexpected properties. For example, if the distributions of the variable of interest, W, say, and of the observation error are both centered at zero, then the rate of convergence of an estimator of the distribution function of W can be slower at the origin than away from the origin. This is an intrinsic characteristic of the problem, not a quirk of particular estimators; the property holds true for optimal estimators.
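To make the setting concrete, the sketch below constructs a standard deconvolution kernel estimate of the distribution function of W from contaminated observations Y = W + U, assuming the error characteristic function is known. This is a generic illustration of the deconvolution problem, not the paper's estimators; the Laplace error, sinc kernel, bandwidth and grid sizes are all illustrative assumptions.

```python
import numpy as np
from math import erf

# Illustrative sketch (not the paper's estimator): estimate the
# distribution function of W from Y = W + U by Fourier inversion,
# assuming the error characteristic function is known.  The Laplace
# error, sinc kernel and bandwidth below are illustrative choices.
rng = np.random.default_rng(0)
n = 1000
b = 0.4                          # scale of the Laplace measurement error
W = rng.normal(0.0, 1.0, n)      # variable of interest (unobserved)
Y = W + rng.laplace(0.0, b, n)   # contaminated observations

# Sinc kernel: its Fourier transform is the indicator of [-1/h, 1/h],
# so the inversion integral is truncated at |t| = 1/h.
h = 0.25
t = np.linspace(-1.0 / h, 1.0 / h, 801)
dt = t[1] - t[0]

# Empirical characteristic function of Y, divided by the known error
# characteristic function phi_U(t) = 1 / (1 + b^2 t^2).
ecf = np.exp(1j * np.outer(t, Y)).mean(axis=1)
phi_W_hat = ecf * (1.0 + (b * t) ** 2)

# Fourier inversion gives a density estimate on an x-grid; integrating
# it (trapezoid rule) and renormalising gives the distribution estimate.
x = np.linspace(-4.0, 4.0, 401)
f_hat = np.exp(-1j * np.outer(x, t)).dot(phi_W_hat).real * dt / (2 * np.pi)
f_hat = np.clip(f_hat, 0.0, None)   # clip the negative lobes of the sinc
F_hat = np.concatenate(([0.0],
        np.cumsum(0.5 * (f_hat[1:] + f_hat[:-1]) * np.diff(x))))
F_hat = np.clip(F_hat / F_hat[-1], 0.0, 1.0)

# Sanity check against the true standard normal distribution function.
F_true = np.array([0.5 * (1.0 + erf(v / np.sqrt(2.0))) for v in x])
print(round(float(np.max(np.abs(F_hat - F_true))), 3))
```

Because the Laplace error is only ordinary smooth, its characteristic function decays polynomially, so the division above amplifies the sampling noise in the empirical characteristic function only moderately; for supersmooth (e.g., Gaussian) errors the amplification, and hence the attainable convergence rate, is far worse.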

Article information

Ann. Statist., Volume 36, Number 5 (2008), 2110–2134.

First available in Project Euclid: 13 October 2008

Primary: 62G20: Asymptotic properties
Secondary: 62C20: Minimax procedures

Keywords: bandwidth; errors in variables; ill-posed problem; kernel methods; measurement error; minimax optimal convergence rate; smoothing; regularization


Hall, Peter; Lahiri, Soumendra N. Estimation of distributions, moments and quantiles in deconvolution problems. Ann. Statist. 36 (2008), no. 5, 2110--2134. doi:10.1214/07-AOS534.

