Open Access
Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
Eitan Greenshtein, Ya'Acov Ritov
Bernoulli 10(6): 971-988 (December 2004). DOI: 10.3150/bj/1106314846

Abstract

Let Z^i = (Y^i, X_1^i, ..., X_m^i), i = 1, ..., n, be independent and identically distributed random vectors with Z^i ~ F, F ∈ 𝓕. It is desired to predict Y by ∑ β_j X_j, where (β_1, ..., β_m) ∈ B_n ⊆ R^m, under a prediction loss. Suppose that m = n^α, α > 1; that is, there are many more explanatory variables than observations. We consider sets B_n restricted by the maximal number of non-zero coefficients of their members, or by their l1 radius. We study the following asymptotic question: how 'large' may the set B_n be, so that it is still possible to select empirically a predictor whose risk under F is close to that of the best predictor in the set? Sharp bounds for orders of magnitude are given under various assumptions on 𝓕. The algorithmic complexity of the ensuing procedures is also studied. The main message of this paper, and the implication of the orders derived, is that under various sparsity assumptions on the optimal predictor there is 'asymptotically no harm' in introducing many more explanatory variables than observations. Furthermore, such practice can be beneficial in comparison with a procedure that screens in advance a small subset of explanatory variables. Another main result is that 'lasso' procedures, that is, optimization under l1 constraints, can be efficient in finding optimal sparse predictors in high dimensions.
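To make the setting concrete, the following is a minimal sketch (not the authors' procedure; the data-generating choices, penalty level, and solver are illustrative assumptions) of l1-penalized least squares in a regime with m = n^α, α > 1: a sparse optimal predictor is recovered from many more explanatory variables than observations, here via proximal gradient descent (ISTA).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                     # observations
m = int(n ** 1.5)           # m = n^alpha with alpha = 1.5: many more variables than observations
X = rng.standard_normal((n, m))
beta_true = np.zeros(m)
beta_true[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]  # sparse optimal predictor (illustrative)
y = X @ beta_true + rng.standard_normal(n)

# Penalty level of order sqrt(log m / n), a common illustrative choice.
lam = 2.0 * np.sqrt(np.log(m) / n)

def lasso_ista(X, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, m = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant of the gradient
    b = np.zeros(m)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n         # gradient of the squared-loss term
        z = b - step * grad
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return b

beta_hat = lasso_ista(X, y, lam)
mse = np.mean((y - X @ beta_hat) ** 2)
print(np.count_nonzero(beta_hat), mse)
```

Despite m ≈ 10n, the l1 constraint keeps the fitted coefficient vector sparse and the in-sample prediction error well below that of the trivial zero predictor, in line with the "no harm in overparametrization" message above.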

Citation

Eitan Greenshtein, Ya'Acov Ritov. "Persistence in high-dimensional linear predictor selection and the virtue of overparametrization." Bernoulli 10(6), 971-988, December 2004. https://doi.org/10.3150/bj/1106314846

Information

Published: December 2004
First available in Project Euclid: 21 January 2005

zbMATH: 1055.62078
MathSciNet: MR2108039
Digital Object Identifier: 10.3150/bj/1106314846

Keywords: consistency, lasso, regression, variable selection

Rights: Copyright © 2004 Bernoulli Society for Mathematical Statistics and Probability
