The Annals of Statistics

Rho-estimators revisited: General theory and applications

Yannick Baraud and Lucien Birgé



Following Baraud, Birgé and Sart [Invent. Math. 207 (2017) 425–517], we pursue our attempt to design a robust universal estimator of the joint distribution of $n$ independent (but not necessarily i.i.d.) observations for a Hellinger-type loss. Given such observations with an unknown joint distribution $\mathbf{P}$ and a dominated model $\mathscr{Q}$ for $\mathbf{P}$, we build an estimator $\widehat{\mathbf{P}}$ based on $\mathscr{Q}$ (a $\rho$-estimator) and measure its risk by a Hellinger-type distance. When $\mathbf{P}$ does belong to the model, this risk is bounded by a quantity that depends on the local complexity of the model in a vicinity of $\mathbf{P}$. In most situations, this bound corresponds to the minimax risk over the model (up to a possible logarithmic factor). When $\mathbf{P}$ does not belong to the model, the risk involves an additional bias term proportional to the distance between $\mathbf{P}$ and $\mathscr{Q}$, whatever the true distribution $\mathbf{P}$ may be. From this point of view, this new version of the $\rho$-estimator improves on the previous one described in Baraud, Birgé and Sart [Invent. Math. 207 (2017) 425–517], which required that $\mathbf{P}$ be absolutely continuous with respect to some known reference measure. Further improvements over the former construction include a very general treatment of the regression framework with random design as well as a computationally tractable procedure for aggregating estimators. We also give conditions for the maximum likelihood estimator to be a $\rho$-estimator. Finally, we consider the situation where the statistician has several different models at her or his disposal, and we build a penalized version of the $\rho$-estimator for model selection and adaptation purposes. In the regression setting, this penalized estimator allows one to estimate not only the regression function but also the distribution of the errors.
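The Hellinger-type loss of the abstract is built on the classical Hellinger distance $h$, whose square between probability measures $P$ and $Q$ with densities $p$ and $q$ relative to a dominating measure $\mu$ is $h^{2}(P,Q)=\tfrac12\int\bigl(\sqrt{p}-\sqrt{q}\bigr)^{2}\,d\mu$. As a minimal numerical sketch of that standard definition (not of the paper's actual estimator, and with `hellinger` a name chosen here for illustration), the distance between two distributions on a common finite support can be computed as:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions
    p and q given as arrays of probabilities on a common finite support.
    Implements h(P, Q) = sqrt( (1/2) * sum_x (sqrt(p(x)) - sqrt(q(x)))^2 )."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

# h is 0 between identical distributions and 1 between distributions
# with disjoint supports, so it always lies in [0, 1].
```

Note that $h$ takes values in $[0,1]$, reaching $0$ only for identical distributions and $1$ only for mutually singular ones, which is what makes it a convenient loss for robust estimation over general dominated models.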

Article information

Ann. Statist., Volume 46, Number 6B (2018), 3767-3804.

Received: June 2016
Revised: November 2017
First available in Project Euclid: 11 September 2018


Primary: 62G35 (Robustness); 62G05 (Estimation); 62G07 (Density estimation); 62G08 (Nonparametric regression); 62C20 (Minimax procedures); 62F99.

Keywords: $\rho$-estimation, robust estimation, density estimation, regression with random design, statistical models, maximum likelihood estimators, metric dimension, VC-classes.


Baraud, Yannick; Birgé, Lucien. Rho-estimators revisited: General theory and applications. Ann. Statist. 46 (2018), no. 6B, 3767--3804. doi:10.1214/17-AOS1675.



  • Audibert, J.-Y. and Catoni, O. (2011). Robust linear least squares regression. Ann. Statist. 39 2766–2794.
  • Baraud, Y. (2016). Bounding the expectation of the supremum of an empirical process over a (weak) VC-major class. Electron. J. Stat. 10 1709–1728.
  • Baraud, Y. and Birgé, L. (2016). Rho-estimators for shape restricted density estimation. Stochastic Process. Appl. 126 3888–3912.
  • Baraud, Y. and Birgé, L. (2018). Supplement to “Rho-estimators revisited: General theory and applications.” DOI:10.1214/17-AOS1675SUPP.
  • Baraud, Y., Birgé, L. and Sart, M. (2017). A new method for estimation and model selection: $\rho$-estimation. Invent. Math. 207 425–517.
  • Birgé, L. (1983). Approximation dans les espaces métriques et théorie de l’estimation. Z. Wahrsch. Verw. Gebiete 65 181–237.
  • Birgé, L. (2006). Model selection via testing: An alternative to (penalized) maximum likelihood estimators. Ann. Inst. Henri Poincaré Probab. Stat. 42 273–325.
  • Birgé, L. and Massart, P. (1998). Minimum contrast estimators on sieves: Exponential bounds and rates of convergence. Bernoulli 4 329–375.
  • Giné, E. and Koltchinskii, V. (2006). Concentration inequalities and asymptotic results for ratio type empirical processes. Ann. Probab. 34 1143–1216.
  • Györfi, L., Kohler, M., Krzyżak, A. and Walk, H. (2002). A Distribution-Free Theory of Nonparametric Regression. Springer, New York.
  • Koltchinskii, V. (2006). Local Rademacher complexities and oracle inequalities in risk minimization. Ann. Statist. 34 2593–2656.
  • Le Cam, L. (1973). Convergence of estimates under dimensionality restrictions. Ann. Statist. 1 38–53.
  • Le Cam, L. (1975). On local and global properties in the theory of asymptotic normality of experiments. In Stochastic Processes and Related Topics (Proc. Summer Res. Inst. Statist. Inference for Stochastic Processes, Indiana Univ., Bloomington, Ind., 1974, Vol. 1; Dedicated to Jerzy Neyman) 13–54. Academic Press, New York.
  • Le Cam, L. (1990). Maximum likelihood: An introduction. Int. Stat. Rev. 58 153–171.
  • Pollard, D. (1984). Convergence of Stochastic Processes. Springer, New York.
  • Sart, M. (2017). Estimating the conditional density by histogram type estimators and model selection. ESAIM Probab. Stat. 21 34–55.
  • van de Geer, S. A. (2000). Applications of Empirical Process Theory. Cambridge Series in Statistical and Probabilistic Mathematics 6. Cambridge Univ. Press, Cambridge.
  • van der Vaart, A. W. and Wellner, J. A. (1996). Weak Convergence and Empirical Processes. With Applications to Statistics. Springer, New York.

Supplemental materials

  • Supplement to “Rho-estimators revisited: general theory and applications”. This supplement provides the proofs of most results given in the paper and an additional section (D.10) devoted to robust tests.