The Annals of Statistics

Near-ideal model selection by ℓ1 minimization

Emmanuel J. Candès and Yaniv Plan

We consider the fundamental problem of estimating the mean of a vector y = Xβ + z, where X is an n×p design matrix in which one can have far more variables than observations, and z is a stochastic error term (the so-called "p > n" setup). When β is sparse, or, more generally, when there is a sparse subset of covariates providing a close approximation to the unknown mean vector, we ask whether or not it is possible to accurately estimate Xβ using a computationally tractable algorithm.
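
In symbols (a restatement of the setup just described; the i.i.d. Gaussian specification of the noise is the concrete case treated in the paper),

\[
y = X\beta + z, \qquad X \in \mathbb{R}^{n\times p},\ p > n, \qquad z \sim \mathcal{N}(0, \sigma^2 I_n),
\]

and the object of interest is the mean vector Xβ rather than β itself.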

We show that, in a surprisingly wide range of situations, the lasso happens to nearly select the best subset of variables. Quantitatively speaking, we prove that solving a simple quadratic program achieves a squared error within a logarithmic factor of the ideal mean squared error that one would achieve with an oracle supplying perfect information about which variables should and should not be included in the model. Interestingly, our results describe the average performance of the lasso; that is, the performance one can expect in a vast majority of cases where Xβ is a sparse or nearly sparse superposition of variables, but not in all cases.
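
Schematically (constants, the exact choice of the regularization parameter, and the precise conditions are as stated in the paper), the estimator is the lasso

\[
\hat{\beta} = \operatorname*{arg\,min}_{b \in \mathbb{R}^p} \ \tfrac{1}{2}\|y - Xb\|_{\ell_2}^2 + \lambda\,\sigma\,\|b\|_{\ell_1}, \qquad \lambda \asymp \sqrt{2\log p},
\]

and the oracle inequality takes the form

\[
\|X\hat{\beta} - X\beta\|_{\ell_2}^2 \ \lesssim\ (\log p)\, \min_{I \subseteq \{1,\dots,p\}} \Bigl( \|X\beta - P_I X\beta\|_{\ell_2}^2 + |I|\,\sigma^2 \Bigr),
\]

where P_I denotes orthogonal projection onto the span of the columns of X indexed by I; the minimum on the right-hand side is the ideal risk attainable with an oracle revealing the best subset of variables.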

Our results are nonasymptotic and widely applicable, since they simply require that pairs of predictor variables are not too collinear.
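
As a purely illustrative sketch (not the paper's experiments; the dimensions, the Gaussian design, and the constant in the regularization parameter below are assumptions of this snippet), the following Python code simulates a sparse regression with p > n, fits the lasso by iterative soft-thresholding with a penalty at the σ√(2 log p) scale, and compares its squared prediction error to that of the oracle least-squares fit on the true support.

import numpy as np

rng = np.random.default_rng(0)
n, p, s, sigma = 200, 1000, 10, 1.0        # sample size, variables, sparsity, noise level (assumed)

# Design with roughly unit-norm, weakly collinear columns.
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
support = rng.choice(p, size=s, replace=False)
beta[support] = 5.0 * rng.standard_normal(s)
y = X @ beta + sigma * rng.standard_normal(n)

# Lasso: minimize 0.5*||y - Xb||^2 + lam*||b||_1 by iterative soft-thresholding (ISTA).
lam = 2.0 * sigma * np.sqrt(2.0 * np.log(p))    # sqrt(2 log p) scale; the factor 2 is an assumption

def lasso_ista(X, y, lam, n_iter=5000):
    L = np.linalg.norm(X, 2) ** 2               # step size 1/L, L = largest eigenvalue of X'X
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - X.T @ (X @ b - y) / L           # gradient step on the quadratic term
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding step
    return b

b_lasso = lasso_ista(X, y, lam)

# Oracle benchmark: least squares restricted to the true support (unavailable in practice).
b_oracle = np.zeros(p)
b_oracle[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]

def pred_err(b):
    # Squared prediction error ||X (b - beta)||_2^2.
    return np.sum((X @ (b - beta)) ** 2)

print(f"lasso squared error:  {pred_err(b_lasso):.2f}")
print(f"oracle squared error: {pred_err(b_oracle):.2f}")

With weakly collinear columns such as these, one expects the lasso's error to land within a modest logarithmic multiple of the oracle's, in line with the bound sketched above.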

Article information

Ann. Statist. Volume 37, Number 5A (2009), 2145-2177.

First available in Project Euclid: 15 July 2009

Subjects
Primary: 62C05: General considerations; 62G05: Estimation
Secondary: 94A08: Image processing (compression, reconstruction, etc.) [See also 68U10]; 94A12: Signal theory (characterization, reconstruction, filtering, etc.)

Keywords: model selection; oracle inequalities; the lasso; compressed sensing; incoherence; eigenvalues of random matrices


Candès, Emmanuel J.; Plan, Yaniv. Near-ideal model selection by ℓ1 minimization. Ann. Statist. 37 (2009), no. 5A, 2145–2177. doi:10.1214/08-AOS653.
