Electronic Journal of Statistics

Improved bounds for Square-Root Lasso and Square-Root Slope

Alexis Derumigny


Abstract

Extending the results of Bellec, Lecué and Tsybakov [1] to the setting of sparse high-dimensional linear regression with unknown variance, we show that two estimators, the Square-Root Lasso and the Square-Root Slope, can achieve the optimal minimax prediction rate $(s/n)\log\left(p/s\right)$, up to a multiplicative constant, under mild conditions on the design matrix. Here, $n$ is the sample size, $p$ is the dimension and $s$ is the sparsity parameter. We also prove optimality of the estimation error in the $l_{q}$-norm, with $q\in[1,2]$, for the Square-Root Lasso, and in the $l_{2}$ and sorted $l_{1}$ norms for the Square-Root Slope. Both estimators are adaptive to the unknown variance of the noise. The Square-Root Slope is also adaptive to the sparsity $s$ of the true parameter. Next, we prove that any estimator depending on $s$ that attains the minimax rate admits an adaptive-to-$s$ version still attaining the same rate; we apply this result to the Square-Root Lasso. Moreover, for both estimators we obtain valid rates for a wide range of confidence levels, as well as improved concentration properties as in [1], where the case of known variance is treated. Our results are non-asymptotic.
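For concreteness, both estimators minimize the residual norm $\|y-X\beta\|_{2}/\sqrt{n}$ (rather than its square) plus a penalty. The following display is a standard formulation, following [4] for the Square-Root Lasso and [7] for the Slope penalty; the tuning parameters $\lambda>0$ and $\lambda_{1}\geq\dots\geq\lambda_{p}\geq 0$ are generic choices used here for illustration, not necessarily those of the paper:
\[
\hat{\beta}^{\mathrm{SRL}} \in \operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p}} \left\{ \frac{\|y-X\beta\|_{2}}{\sqrt{n}} + \lambda\|\beta\|_{1} \right\},
\qquad
\hat{\beta}^{\mathrm{SRS}} \in \operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p}} \left\{ \frac{\|y-X\beta\|_{2}}{\sqrt{n}} + \sum_{j=1}^{p}\lambda_{j}\,|\beta|_{(j)} \right\},
\]
where $|\beta|_{(1)}\geq\dots\geq|\beta|_{(p)}$ are the entries of $\beta$ sorted by decreasing absolute value. Taking the square root of the residual sum of squares makes the tuning parameters pivotal: they can be chosen without knowledge of the noise level, which is the source of the adaptivity to the unknown variance.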

Article information

Source
Electron. J. Statist., Volume 12, Number 1 (2018), 741-766.

Dates
Received: March 2017
First available in Project Euclid: 27 February 2018

Permanent link to this document
https://projecteuclid.org/euclid.ejs/1519722051

Digital Object Identifier
doi:10.1214/18-EJS1410

Subjects
Primary: 62G08: Nonparametric regression
Secondary: 62C20: Minimax procedures; 62G05: Estimation

Keywords
Sparse linear regression; minimax rates; high-dimensional statistics; adaptivity; square-root estimators

Rights
Creative Commons Attribution 4.0 International License.

Citation

Derumigny, Alexis. Improved bounds for Square-Root Lasso and Square-Root Slope. Electron. J. Statist. 12 (2018), no. 1, 741–766. doi:10.1214/18-EJS1410. https://projecteuclid.org/euclid.ejs/1519722051



References

  • [1] Bellec, P. C., Lecué, G. and Tsybakov, A. B. (2017). Slope meets Lasso: improved oracle bounds and optimality. arXiv preprint arXiv:1605.08651v3.
  • [2] Bellec, P. C., Lecué, G. and Tsybakov, A. B. (2017). Towards the study of least squares estimators with convex penalty. Séminaires et Congrès, to appear.
  • [3] Bellec, P. C. and Tsybakov, A. B. (2017). Bounds on the prediction error of penalized least squares estimators with convex penalty. In Modern Problems of Stochastic Analysis and Statistics, Selected Contributions in Honor of Valentin Konakov (V. Panov, ed.). Springer.
  • [4] Belloni, A., Chernozhukov, V. and Wang, L. (2011). Square-root lasso: pivotal recovery of sparse signals via conic programming. Biometrika 98 791–806.
  • [5] Belloni, A., Chernozhukov, V. and Wang, L. (2014). Pivotal estimation via square-root lasso in nonparametric regression. Annals of Statistics 42 757–788.
  • [6] Bickel, P. J., Ritov, Y. and Tsybakov, A. B. (2009). Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics 37 1705–1732.
  • [7] Bogdan, M., van den Berg, E., Sabatti, C., Su, W. and Candès, E. J. (2015). SLOPE - adaptive variable selection via convex optimization. Annals of Applied Statistics 9 1103–1140.
  • [8] Giraud, C. (2014). Introduction to High-Dimensional Statistics. Monographs on Statistics and Applied Probability 138. CRC Press.
  • [9] Lecué, G. and Mendelson, S. (2017). Regularization and the small-ball method I: sparse recovery. Annals of Statistics, to appear.
  • [10] Owen, A. B. (2007). A robust hybrid of lasso and ridge regression. Contemporary Mathematics 443 59–72.
  • [11] Stucky, B. and van de Geer, S. (2017). Sharp oracle inequalities for square root regularization. Journal of Machine Learning Research 18 1–29.
  • [12] Su, W. and Candès, E. (2016). SLOPE is adaptive to unknown sparsity and asymptotically minimax. Annals of Statistics 44 1038–1068.
  • [13] Sun, T. and Zhang, C.-H. (2012). Scaled sparse linear regression. Biometrika 99 879–898.
  • [14] Zeng, X. and Figueiredo, M. A. T. (2014). The Ordered Weighted $\ell_1$ Norm: Atomic Formulation, Projections, and Algorithms. arXiv preprint arXiv:1409.4271.