Bernoulli, Volume 25, Number 3 (2019), 2075–2106.

Regularization, sparse recovery, and median-of-means tournaments

Gábor Lugosi and Shahar Mendelson



We introduce a regularized risk minimization procedure for regression function estimation. The procedure is based on median-of-means tournaments, introduced by the authors in Lugosi and Mendelson (2018), and achieves near-optimal accuracy and confidence under general conditions, including heavy-tailed predictor and response variables. It outperforms standard regularized empirical risk minimization procedures, such as LASSO or SLOPE, in heavy-tailed problems.
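To illustrate the median-of-means principle that underlies the tournaments, here is a minimal sketch of the basic median-of-means mean estimator (not the authors' full tournament procedure for regression): split the sample into blocks, average within each block, and report the median of the block means. A single heavy-tailed outlier can corrupt at most one block, so the median of the block means stays close to the true mean even when the empirical mean does not.

```python
import random
import statistics

def median_of_means(samples, num_blocks):
    """Estimate the mean of `samples` by splitting them into
    `num_blocks` equal-size blocks, averaging within each block,
    and returning the median of the block means."""
    samples = list(samples)
    random.shuffle(samples)  # guard against adversarial ordering
    block_size = len(samples) // num_blocks
    block_means = [
        statistics.mean(samples[i * block_size:(i + 1) * block_size])
        for i in range(num_blocks)
    ]
    return statistics.median(block_means)

# A heavy-tailed toy sample: one extreme outlier among 100 points.
data = [1.0] * 99 + [1000.0]
print(statistics.mean(data))        # empirical mean is pulled to 10.99
print(median_of_means(data, 10))    # median-of-means stays at 1.0
```

The outlier lands in exactly one of the ten blocks, so at most one block mean is corrupted and the median of the block means is unaffected; this robustness to a constant fraction of corrupted blocks is what the tournament procedure exploits at the level of empirical risks.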

Article information


Received: November 2017
Revised: April 2018
First available in Project Euclid: 12 June 2019

Digital Object Identifier: doi:10.3150/18-BEJ1046

Keywords: LASSO; median-of-means tournament; regularized risk minimization; robust regression; SLOPE


Lugosi, Gábor; Mendelson, Shahar. Regularization, sparse recovery, and median-of-means tournaments. Bernoulli 25 (2019), no. 3, 2075--2106. doi:10.3150/18-BEJ1046.



References

  • [1] Audibert, J.-Y. and Catoni, O. (2011). Robust linear least squares regression. Ann. Statist. 39 2766–2794.
  • [2] Bellec, P., Lecué, G. and Tsybakov, A. (2016). Slope meets lasso: Improved oracle bounds and optimality. Preprint. Available at arXiv:1605.08651.
  • [3] Brownlees, C., Joly, E. and Lugosi, G. (2015). Empirical risk minimization for heavy-tailed losses. Ann. Statist. 43 2507–2536.
  • [4] Goldenshluger, A. and Nemirovski, A. (1997). On spatially adaptive estimation of nonparametric regression. Math. Methods Statist. 6 135–170.
  • [5] Hsu, D. and Sabato, S. (2013). Approximate loss minimization with heavy tails. Preprint. Available at arXiv:1307.1827.
  • [6] Lecué, G. and Lerasle, M. (2017). Learning from MOM’s principles. Preprint. Available at arXiv:1701.01961.
  • [7] Lecué, G. and Mendelson, S. (2017). Sparse recovery under weak moment assumptions. J. Eur. Math. Soc. (JEMS) 19 881–904.
  • [8] Lecué, G. and Mendelson, S. (2018). Learning subgaussian classes: Upper and minimax bounds. In Topics in Learning Theory (S. Boucheron and N. Vayatis, eds.). Société Mathématique de France. To appear.
  • [9] Lecué, G. and Mendelson, S. (2018). Regularization and the small-ball method I: Sparse recovery. Ann. Statist. 46 611–641.
  • [10] Lerasle, M. and Oliveira, R.I. (2012). Robust empirical mean estimators. Unpublished manuscript.
  • [11] Lugosi, G. and Mendelson, S. (2018). Risk minimization by median-of-means tournaments. J. Eur. Math. Soc. (JEMS). To appear.
  • [12] Lugosi, G. and Mendelson, S. (2018). Sub-Gaussian estimators of the mean of a random vector. Ann. Statist. To appear.
  • [13] Mendelson, S. (2015). Learning without concentration. J. ACM 62 Art. 21, 25.
  • [14] Mendelson, S. (2017). “Local” vs. “global” parameters – breaking the Gaussian complexity barrier. Ann. Statist. 45 1835–1862.
  • [15] Mendelson, S. (2017). On aggregation for heavy-tailed classes. Probab. Theory Related Fields 168 641–674.
  • [16] Mendelson, S. (2017). On multiplier processes under weak moment assumptions. In Geometric Aspects of Functional Analysis. Lecture Notes in Math. 2169 301–318. Springer, Cham.
  • [17] Mendelson, S. (2017). An optimal unrestricted learning procedure. Preprint. Available at arXiv:1707.05342v2.
  • [18] Minsker, S. (2015). Geometric median and robust estimation in Banach spaces. Bernoulli 21 2308–2335.
  • [19] Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58 267–288.