Open Access
Improved bounds for Square-Root Lasso and Square-Root Slope
Alexis Derumigny
Electron. J. Statist. 12(1): 741-766 (2018). DOI: 10.1214/18-EJS1410

Abstract

Extending the results of Bellec, Lecué and Tsybakov [1] to the setting of sparse high-dimensional linear regression with unknown variance, we show that two estimators, the Square-Root Lasso and the Square-Root Slope, can achieve the optimal minimax prediction rate $(s/n)\log\left(p/s\right)$, up to a multiplicative constant, under mild conditions on the design matrix. Here, $n$ is the sample size, $p$ is the dimension and $s$ is the sparsity parameter. We also prove optimality for the estimation error in the $l_{q}$-norm, with $q\in[1,2]$, for the Square-Root Lasso, and in the $l_{2}$ and sorted $l_{1}$ norms for the Square-Root Slope. Both estimators are adaptive to the unknown variance of the noise. The Square-Root Slope is also adaptive to the sparsity $s$ of the true parameter. Next, we prove that any estimator depending on $s$ that attains the minimax rate admits an adaptive-to-$s$ version still attaining the same rate; we apply this result to the Square-Root Lasso. Moreover, for both estimators, we obtain valid rates for a wide range of confidence levels, as well as improved concentration properties, as in [1], where the case of known variance is treated. Our results are non-asymptotic.
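For concreteness, here is a minimal sketch of the two estimators under their standard definitions: the Square-Root Lasso minimizes $\|y-X\beta\|_2/\sqrt{n}+\lambda\|\beta\|_1$, and the Square-Root Slope replaces the $l_{1}$ penalty by a sorted $l_{1}$ penalty with nonincreasing weights. This code is illustrative only and not from the paper: the use of cvxpy, the helper names sqrt_lasso, sqrt_slope and sorted_l1, and the tuning constants are all assumptions made here. Note that neither tuning parameter involves the noise level, which is the source of the adaptivity to the unknown variance.

    # Illustrative sketch (not from the paper); assumes the standard
    # definitions of the Square-Root Lasso and Square-Root Slope.
    import numpy as np
    import cvxpy as cp

    def sqrt_lasso(X, y, lam):
        """Square-Root Lasso: argmin_b ||y - Xb||_2 / sqrt(n) + lam * ||b||_1."""
        n, p = X.shape
        b = cp.Variable(p)
        objective = cp.norm(y - X @ b, 2) / np.sqrt(n) + lam * cp.norm(b, 1)
        cp.Problem(cp.Minimize(objective)).solve()
        return b.value

    def sorted_l1(b, lam_seq):
        """Sorted l1 norm sum_j lam_j |b|_(j), expressed with sum_largest atoms.

        Uses the identity sum_j lam_j |b|_(j)
          = sum_k (lam_k - lam_{k+1}) * (sum of the k largest |b_i|),
        valid for nonincreasing weights lam_seq (the differences are >= 0,
        so the expression stays convex).
        """
        p = len(lam_seq)
        diffs = np.append(lam_seq[:-1] - lam_seq[1:], lam_seq[-1])
        return sum(diffs[k] * cp.sum_largest(cp.abs(b), k + 1) for k in range(p))

    def sqrt_slope(X, y, lam_seq):
        """Square-Root Slope: argmin_b ||y - Xb||_2 / sqrt(n) + sorted-l1 penalty."""
        n, p = X.shape
        b = cp.Variable(p)
        objective = cp.norm(y - X @ b, 2) / np.sqrt(n) + sorted_l1(b, lam_seq)
        cp.Problem(cp.Minimize(objective)).solve()
        return b.value

    # Toy usage; small sizes keep the sum_largest formulation tractable.
    rng = np.random.default_rng(0)
    n, p, s = 50, 60, 5
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:s] = 1.0
    y = X @ beta + 0.5 * rng.standard_normal(n)  # sigma = 0.5, unknown to the estimators
    # Pivotal (sigma-free) tunings; the constant 2.0 is an illustrative choice.
    lam = 2.0 * np.sqrt(np.log(p) / n)
    lam_seq = 2.0 * np.sqrt(np.log(2 * p / np.arange(1, p + 1)) / n)  # ~ sqrt(log(2p/j)/n)
    beta_hat = sqrt_lasso(X, y, lam)
    beta_tilde = sqrt_slope(X, y, lam_seq)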

Citation

Alexis Derumigny. "Improved bounds for Square-Root Lasso and Square-Root Slope." Electron. J. Statist. 12 (1) 741 - 766, 2018. https://doi.org/10.1214/18-EJS1410

Information

Received: 1 March 2017; Published: 2018
First available in Project Euclid: 27 February 2018

zbMATH: 06864475
MathSciNet: MR3769194
Digital Object Identifier: 10.1214/18-EJS1410

Subjects:
Primary: 62G08
Secondary: 62C20, 62G05

Keywords: Adaptivity, High-dimensional statistics, Minimax rates, Sparse linear regression, Square-root estimators
