The Annals of Statistics

Discussion: A tale of three cousins: Lasso, L2Boosting and Dantzig

N. Meinshausen, G. Rocha, and B. Yu

Article information

Source
Ann. Statist., Volume 35, Number 6 (2007), 2373–2384.

Dates
First available in Project Euclid: 22 January 2008

Permanent link to this document
https://projecteuclid.org/euclid.aos/1201012963

Digital Object Identifier
doi:10.1214/009053607000000460

Mathematical Reviews number (MathSciNet)
MR2382649

Citation

Meinshausen, N.; Rocha, G.; Yu, B. Discussion: A tale of three cousins: Lasso, L2Boosting and Dantzig. Ann. Statist. 35 (2007), no. 6, 2373–2384. doi:10.1214/009053607000000460. https://projecteuclid.org/euclid.aos/1201012963

References

  • Bertsekas, D. (1995). Nonlinear Programming. Athena Scientific, Belmont, MA.
  • Bühlmann, P. (2006). Boosting for high-dimensional linear models. Ann. Statist. 34 559–583.
  • Bunea, F., Tsybakov, A. and Wegkamp, M. (2006). Sparsity oracle inequalities for the lasso. Technical report.
  • Candès, E. (2007). $\ell_1$-magic. Available at www.l1-magic.org.
  • Chen, S., Donoho, D. and Saunders, M. (1998). Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20 33–61.
  • Donoho, D. L. (2006). For most large underdetermined systems of equations the minimal $\ell_1$-norm near-solution approximates the sparsest near-solution. Comm. Pure Appl. Math. 59 907–934.
  • Donoho, D. L., Elad, M. and Temlyakov, V. N. (2006). Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory 52 6–18.
  • Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004). Least angle regression (with discussion). Ann. Statist. 32 407–499.
  • Greenshtein, E. and Ritov, Y. (2004). Persistence in high-dimensional predictor selection and the virtue of overparametrization. Bernoulli 10 971–988.
  • Knight, K. and Fu, W. (2000). Asymptotics for lasso-type estimators. Ann. Statist. 28 1356–1378.
  • Leng, C., Lin, Y. and Wahba, G. (2006). A note on the lasso and related procedures in model selection. Statist. Sinica 16 1273–1284.
  • Li, Y. and Zhu, J. (2006). The $\ell_1$-norm quantile regression. Technical report, Dept. Statistics, Univ. Michigan.
  • Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. Ann. Statist. 34 1436–1462.
  • Meinshausen, N. and Yu, B. (2007). Lasso-type recovery of sparse representations for high-dimensional data. Ann. Statist. To appear.
  • Osborne, M., Presnell, B. and Turlach, B. (2000). On the LASSO and its dual. J. Comput. Graph. Statist. 9 319–337.
  • Rosset, S. and Zhu, J. (2007). Piecewise linear regularized solution paths. Ann. Statist. 35 1012–1030.
  • Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58 267–288.
  • Tropp, J. (2006). Just relax: Convex programming methods for identifying sparse signals in noise. IEEE Trans. Inform. Theory 52 1030–1051.
  • van de Geer, S. (2006). High-dimensional generalized linear models and the lasso. Technical Report 133, ETH Zürich.
  • Wainwright, M. (2006). Sharp thresholds for high-dimensional and noisy recovery of sparsity. Available at arxiv.org/abs/math/0605740.
  • Wainwright, M. (2007). Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting. Technical Report 725, Dept. Statistics, Univ. California, Berkeley.
  • Zhang, C.-H. and Huang, J. (2006). Model-selection consistency of the lasso in high-dimensional linear regression. Technical Report 003, Dept. Statistics, Rutgers Univ.
  • Zhao, P. and Yu, B. (2006). On model selection consistency of Lasso. J. Mach. Learn. Res. 7 2541–2563.
  • Zou, H. (2006). The adaptive lasso and its oracle properties. J. Amer. Statist. Assoc. 101 1418–1429.