Abstract
In this paper, we derive nonasymptotic error bounds for the Lasso estimator when the penalty parameter for the estimator is chosen using K-fold cross-validation. Our bounds imply that the cross-validated Lasso estimator has nearly optimal rates of convergence in the prediction, $L^2$, and $L^1$ norms. For example, we show that in the model with Gaussian noise and under fairly general assumptions on the candidate set of values of the penalty parameter, the estimation error of the cross-validated Lasso estimator converges to zero in the prediction norm at the rate $\sqrt{s\log p/n}\times\sqrt{\log(pn)}$, where $n$ is the sample size of the available data, $p$ is the number of covariates, and $s$ is the number of nonzero coefficients in the model. Thus, the cross-validated Lasso estimator achieves the fastest possible rate of convergence in the prediction norm up to the small logarithmic factor $\sqrt{\log(pn)}$, and similar conclusions apply for the convergence rates in the $L^2$ and $L^1$ norms. Importantly, our results cover the case when $p$ is (potentially much) larger than $n$ and also allow for non-Gaussian noise. Our paper therefore serves as a justification for the widespread practice of using cross-validation to choose the penalty parameter for the Lasso estimator.
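As an illustration of the procedure analyzed in the paper, the following Python sketch selects the Lasso penalty parameter by K-fold cross-validation over a candidate grid and reports the prediction-norm error. The data-generating process, the candidate grid, the number of folds, and scikit-learn's particular penalty normalization are illustrative assumptions, not the paper's exact setup.

```python
# A minimal sketch of K-fold cross-validated Lasso. The grid, fold count, and
# data-generating process are illustrative; scikit-learn's Lasso minimizes
# (1/(2n))||y - Xb||_2^2 + alpha * ||b||_1, so alpha plays the role of the
# penalty parameter lambda up to normalization.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, s = 100, 500, 5                       # sample size, covariates, sparsity
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0                              # s nonzero coefficients
y = X @ beta + rng.standard_normal(n)       # Gaussian noise

candidates = np.logspace(-3, 0, 30)         # candidate set of penalty values
kf = KFold(n_splits=5, shuffle=True, random_state=0)

# Accumulate out-of-fold squared prediction error for each candidate penalty.
cv_error = np.zeros(len(candidates))
for train, test in kf.split(X):
    for j, lam in enumerate(candidates):
        fit = Lasso(alpha=lam, max_iter=10_000).fit(X[train], y[train])
        cv_error[j] += np.sum((y[test] - fit.predict(X[test])) ** 2)

lam_cv = candidates[np.argmin(cv_error)]    # K-fold CV choice of the penalty
fit_cv = Lasso(alpha=lam_cv, max_iter=10_000).fit(X, y)

# Prediction-norm error ||X(beta_hat - beta)||_{2,n}; the paper bounds this by
# sqrt(s log p / n) times a small logarithmic factor, with high probability.
pred_err = np.sqrt(np.mean((X @ (fit_cv.coef_ - beta)) ** 2))
print(f"lambda_cv = {lam_cv:.4f}, prediction-norm error = {pred_err:.4f}")
```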
Funding Statement
The work of Chetverikov and Liao was partially funded by NSF Grant SES–1628889.
Acknowledgments
We thank Mehmet Caner, Matias Cattaneo, Yanqin Fan, Sara van de Geer, Jerry Hausman, James Heckman, Roger Koenker, Andzhey Koziuk, Miles Lopes, Jinchi Lv, Rosa Matzkin, Anna Mikusheva, Whitney Newey, Jesper Sorensen, Vladimir Spokoiny, Larry Wasserman and seminar participants in many places for helpful comments.
Citation
Denis Chetverikov, Zhipeng Liao, and Victor Chernozhukov. "On cross-validated Lasso in high dimensions." Ann. Statist. 49(3): 1300–1317, June 2021. https://doi.org/10.1214/20-AOS2000