Electronic Journal of Statistics

The lasso problem and uniqueness

Ryan J. Tibshirani

Abstract

The lasso is a popular tool for sparse linear regression, especially for problems in which the number of variables $p$ exceeds the number of observations $n$. But when $p>n$, the lasso criterion is not strictly convex, and hence it may not have a unique minimizer. An important question is: when is the lasso solution well-defined (unique)? We review results from the literature, which show that if the predictor variables are drawn from a continuous probability distribution, then there is a unique lasso solution with probability one, regardless of the sizes of $n$ and $p$. We also show that this result extends easily to $\ell_{1}$ penalized minimization problems over a wide range of loss functions.
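
For reference, the criterion in question is the usual $\ell_1$-penalized least squares problem, in standard notation (with $y \in \mathbb{R}^n$ the response vector, $X \in \mathbb{R}^{n \times p}$ the predictor matrix, and $\lambda \geq 0$ the tuning parameter):

$$\hat{\beta} \in \operatorname*{argmin}_{\beta \in \mathbb{R}^p} \; \frac{1}{2} \| y - X\beta \|_2^2 + \lambda \| \beta \|_1 .$$

When $p > n$ the quadratic term is not strictly convex in $\beta$, so the argmin can be a set rather than a single point; the continuous-distribution result above says that this set is almost surely a singleton, via the paper's sufficient condition that the columns of $X$ lie in general position.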

A second important question is: how can we manage the case of non-uniqueness in lasso solutions? In light of the aforementioned result, this case really only arises when some of the predictor variables are discrete, or when some post-processing has been performed on continuous predictor measurements. Though we certainly cannot claim to provide a complete answer to such a broad question, we do present progress towards understanding some aspects of non-uniqueness. First, we extend the LARS algorithm for computing the lasso solution path to cover the non-unique case, so that this path algorithm works for any predictor matrix. Next, we derive a simple method for computing the component-wise uncertainty in lasso solutions of any given problem instance, based on linear programming. Finally, we review results from the literature on some of the unifying properties of lasso solutions, and also point out particular forms of solutions that have distinctive properties.
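
The linear programming method mentioned above admits a short sketch. Following the solution-set characterization reviewed in the paper, every lasso solution shares the fitted value $X\hat{\beta}$ and hence the subgradient $s = X^T (y - X\hat{\beta}) / \lambda$; the solutions form the polyhedron on which $X\beta = X\hat{\beta}$, $\beta_j = 0$ for $j$ outside the equicorrelation set $E = \{ j : |s_j| = 1 \}$, and $s_j \beta_j \geq 0$ for $j \in E$. Minimizing and maximizing a single coordinate over this polyhedron are two linear programs. A minimal sketch (the function name and tolerance are illustrative, and SciPy's linprog stands in for any LP solver):

    import numpy as np
    from scipy.optimize import linprog

    def lasso_coef_range(X, y, beta_hat, lam, j, tol=1e-8):
        """Range of the j-th coefficient over the lasso solution set,
        given any one solution beta_hat at tuning parameter lam > 0."""
        fit = X @ beta_hat                       # common fitted value
        s = X.T @ (y - fit) / lam                # common subgradient
        E = np.abs(s) > 1 - tol                  # equicorrelation set
        # The sign and zero restrictions become per-coordinate bounds.
        bounds = [(0, None) if (E[k] and s[k] > 0) else
                  (None, 0) if E[k] else (0, 0)
                  for k in range(X.shape[1])]
        c = np.zeros(X.shape[1])
        c[j] = 1.0
        # Two LPs over {beta : X beta = fit} with the bounds above.
        lo = linprog(c, A_eq=X, b_eq=fit, bounds=bounds, method="highs")
        hi = linprog(-c, A_eq=X, b_eq=fit, bounds=bounds, method="highs")
        return lo.fun, -hi.fun                   # [min, max] of beta_j

If the two values coincide for every $j$, the solution is unique; a positive gap at coordinate $j$ quantifies the component-wise uncertainty there.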

Article information

Source
Electron. J. Statist., Volume 7 (2013), 1456–1490.

Dates
First available in Project Euclid: 21 May 2013

Permanent link to this document
https://projecteuclid.org/euclid.ejs/1369148600

Digital Object Identifier
doi:10.1214/13-EJS815

Mathematical Reviews number (MathSciNet)
MR3066375

Zentralblatt MATH identifier
1337.62173

Subjects
Primary: 62J07: Ridge regression; shrinkage estimators
Secondary: 90C46: Optimality conditions, duality [See also 49N15]

Keywords
Lasso; high-dimensional; uniqueness; LARS

Citation

Tibshirani, Ryan J. The lasso problem and uniqueness. Electron. J. Statist. 7 (2013), 1456–1490. doi:10.1214/13-EJS815. https://projecteuclid.org/euclid.ejs/1369148600

References

  • Bickel, P., Ritov, Y. and Tsybakov, A. (2009), ‘Simultaneous analysis of lasso and Dantzig selector’, Annals of Statistics 37(4), 1705–1732.
  • Candès, E. J. and Plan, Y. (2009), ‘Near ideal model selection by $\ell_1$ minimization’, Annals of Statistics 37(5), 2145–2177.
  • Chen, S., Donoho, D. L. and Saunders, M. (1998), ‘Atomic decomposition for basis pursuit’, SIAM Journal on Scientific Computing 20(1), 33–61.
  • Donoho, D. L. (2006), ‘For most large underdetermined systems of linear equations, the minimal $\ell_1$ solution is also the sparsest solution’, Communications on Pure and Applied Mathematics 59(6), 797–829.
  • Dossal, C. (2012), ‘A necessary and sufficient condition for exact sparse recovery by $\ell_1$ minimization’, Comptes Rendus Mathématique 350(1–2), 117–120.
  • Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004), ‘Least angle regression’, Annals of Statistics 32(2), 407–499.
  • Fuchs, J. J. (2005), ‘Recovery of exact sparse representations in the presence of bounded noise’, IEEE Transactions on Information Theory 51(10), 3601–3608.
  • Fukuda, K., Liebling, T. M. and Margot, F. (1997), ‘Analysis of backtrack algorithms for listing all vertices and all faces of a convex polyhedron’, Computational Geometry: Theory and Applications 8(1), 1–12.
  • Koltchinskii, V. (2009a), ‘The Dantzig selector and sparsity oracle inequalities’, Bernoulli 15(3), 799–828.
  • Koltchinskii, V. (2009b), ‘Sparsity in penalized empirical risk minimization’, Annales de l’Institut Henri Poincaré, Probabilités et Statistiques 45(1), 7–57.
  • Mairal, J. and Yu, B. (2012), ‘Complexity analysis of the lasso regularization path’, Proceedings of the International Conference on Machine Learning 29.
  • Negahban, S., Ravikumar, P., Wainwright, M. J. and Yu, B. (2012), ‘A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers’. To appear in Statistical Science.
  • Osborne, M., Presnell, B. and Turlach, B. (2000a), ‘A new approach to variable selection in least squares problems’, IMA Journal of Numerical Analysis 20(3), 389–404.
  • Osborne, M., Presnell, B. and Turlach, B. (2000b), ‘On the lasso and its dual’, Journal of Computational and Graphical Statistics 9(2), 319–337.
  • Rockafellar, R. T. (1970), Convex Analysis, Princeton University Press, Princeton.
  • Rosset, S., Zhu, J. and Hastie, T. (2004), ‘Boosting as a regularized path to a maximum margin classifier’, Journal of Machine Learning Research 5, 941–973.
  • Tibshirani, R. (1996), ‘Regression shrinkage and selection via the lasso’, Journal of the Royal Statistical Society: Series B 58(1), 267–288.
  • Tibshirani, R. J. (2011), The Solution Path of the Generalized Lasso, PhD thesis, Department of Statistics, Stanford University.
  • Tibshirani, R. J. and Taylor, J. (2011), Proofs and technical details for ‘The solution path of the generalized lasso’, http://www.stat.cmu.edu/~ryantibs/papers/genlasso-supp.pdf
  • Tibshirani, R. J. and Taylor, J. (2012), ‘Degrees of freedom in lasso problems’, Annals of Statistics 40(2), 1198–1232.
  • van de Geer, S. and Bühlmann, P. (2009), ‘On the conditions used to prove oracle results for the lasso’, Electronic Journal of Statistics 3, 1360–1392.
  • Wainwright, M. J. (2009), ‘Sharp thresholds for high-dimensional and noisy sparsity recovery using $\ell_1$-constrained quadratic programming (lasso)’, IEEE Transactions on Information Theory 55(5), 2183–2202.
  • Zhao, P. and Yu, B. (2006), ‘On model selection consistency of lasso’, Journal of Machine Learning Research 7, 2541–2564.
  • Zou, H. and Hastie, T. (2005), ‘Regularization and variable selection via the elastic net’, Journal of the Royal Statistical Society: Series B 67(2), 301–320.