Open Access
October 2006

Best subset selection, persistence in high-dimensional statistical learning and optimization under l1 constraint
Eitan Greenshtein
Ann. Statist. 34(5): 2367-2386 (October 2006). DOI: 10.1214/009053606000000768

Abstract

Let (Y, X1, …, Xm) be a random vector. It is desired to predict Y based on (X1, …, Xm). Examples of prediction methods are regression, classification using logistic regression or separating hyperplanes, and so on.
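As a minimal formalization (hedged: the abstract does not fix a loss function; squared-error loss, as in Greenshtein and Ritov, is assumed here), a linear predictor with coefficients β = (β_1, …, β_m) is judged by its predictive risk

    L(\beta) = E\Bigl(Y - \sum_{j=1}^{m} \beta_j X_j\Bigr)^2,

and a good procedure should achieve risk close to the smallest attainable over the class of predictors considered.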

We consider the problem of best subset selection and study it in the regime m = n^α, α > 1, where n is the number of observations. We investigate procedures based on empirical risk minimization. It is shown that, in common cases, we should aim to find the best subset among those of size of order o(n/log(n)). It is also shown that, in some “asymptotic sense,” under a certain sparsity condition, there is no loss in letting m be much larger than n, for example, m = n^α, α > 1, in comparison with starting from the “best” subset of size smaller than n, regardless of the value of α.
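In this notation (an illustrative sketch; the estimator is not displayed in the abstract), empirical risk minimization over subsets of size at most k selects

    \hat{\beta}_k = \arg\min_{\#\{j:\, \beta_j \neq 0\} \le k} \; \frac{1}{n} \sum_{i=1}^{n} \Bigl(Y_i - \sum_{j=1}^{m} \beta_j X_{ij}\Bigr)^2,

and the statement above says that the target subset size should satisfy k = o(n/\log(n)).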

We then study conditions under which empirical risk minimization subject to an l1 constraint yields nearly the best subset. These results extend recent results of Greenshtein and Ritov.
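For orientation (same assumptions as above), the l1-constrained procedure replaces the subset-size constraint by a bound on the coefficients,

    \hat{\beta}_b = \arg\min_{\sum_{j} |\beta_j| \le b} \; \frac{1}{n} \sum_{i=1}^{n} \Bigl(Y_i - \sum_{j=1}^{m} \beta_j X_{ij}\Bigr)^2,

and, in the terminology of Greenshtein and Ritov, a sequence of procedures is persistent if its predictive risk approaches, in probability, the best risk attainable over the corresponding class of predictors as n grows.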

Finally, we present a high-dimensional simulation study of a “boosting type” classification procedure.
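The abstract does not specify the simulation design; the following is a hypothetical, minimal sketch of one “boosting type” procedure, ε-boosting (forward stagewise fitting, whose early stopping behaves like an l1 constraint), on synthetic sparse data with m = n^α, α > 1. All parameter values are illustrative assumptions, not the paper's.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical design (not the paper's): n observations, m = n^alpha
    # predictors with alpha > 1, and a sparse true coefficient vector.
    n, alpha, s = 100, 1.5, 5
    m = int(n ** alpha)
    X = rng.standard_normal((n, m))
    beta = np.zeros(m)
    beta[:s] = 1.0
    y = np.sign(X @ beta + rng.standard_normal(n))   # labels in {-1, +1}

    # epsilon-boosting: each step nudges the single coordinate most
    # correlated with the residual; stopping early bounds the l1 norm.
    eps, steps = 0.01, 500
    f = np.zeros(m)
    for _ in range(steps):
        r = y - X @ f                                # working residuals
        corr = X.T @ r                               # coordinate-wise correlations
        j = np.argmax(np.abs(corr))
        f[j] += eps * np.sign(corr[j])

    # Fresh test sample to gauge predictive (generalization) error.
    X_test = rng.standard_normal((2000, m))
    y_test = np.sign(X_test @ beta + rng.standard_normal(2000))
    test_err = np.mean(np.sign(X_test @ f) != y_test)
    print(f"m = {m}, l1 norm of fit = {np.abs(f).sum():.2f}, "
          f"test error = {test_err:.3f}")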

Citation

Eitan Greenshtein. "Best subset selection, persistence in high-dimensional statistical learning and optimization under l1 constraint." Ann. Statist. 34(5): 2367–2386, October 2006. https://doi.org/10.1214/009053606000000768

Information

Published: October 2006
First available in Project Euclid: 23 January 2007

zbMATH: 1106.62022
MathSciNet: MR2291503
Digital Object Identifier: 10.1214/009053606000000768

Subjects:
Primary: 62C99

Keywords: Persistence, Variable selection

Rights: Copyright © 2006 Institute of Mathematical Statistics

Vol. 34 • No. 5 • October 2006