Source: Ann. Statist.
Volume 40, Number 2
We derive the degrees of freedom of the lasso fit, placing no assumptions on the predictor matrix $X$. Like the well-known result of Zou, Hastie and Tibshirani [Ann. Statist. 35 (2007) 2173–2192], which gives the degrees of freedom of the lasso fit when $X$ has full column rank, we express our result in terms of the active set of a lasso solution. We extend this result to cover the degrees of freedom of the generalized lasso fit for an arbitrary predictor matrix $X$ (and an arbitrary penalty matrix $D$). Though our focus is degrees of freedom, we establish some intermediate results on the lasso and generalized lasso that may be interesting in their own right.
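The result summarized above says that the size of the active set of a lasso solution gives an unbiased estimate of the degrees of freedom of the fit. A minimal numerical sketch of this idea follows; the coordinate-descent solver and all variable names are illustrative choices, not the paper's method, and the simulated design is purely hypothetical.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso by cyclic coordinate descent:
    minimize 0.5*||y - X b||^2 + lam*||b||_1.
    Assumes no column of X is identically zero."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j
            r = y - X @ b + X[:, j] * b[j]
            z = X[:, j] @ r
            # soft-thresholding update
            b[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return b

# Hypothetical simulation: sparse truth plus Gaussian noise.
rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.5 * rng.standard_normal(n)

b_hat = lasso_cd(X, y, lam=5.0)
# Per the result above, the active-set size estimates
# the degrees of freedom of the lasso fit.
df_hat = int(np.count_nonzero(b_hat))
```

Averaging `df_hat` over repeated draws of the noise would approximate the expected degrees of freedom; the paper's contribution is that this identity holds without any rank assumption on $X$.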
Chen, S. S., Donoho, D. L. and Saunders, M. A. (1998). Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20 33–61.
Dossal, C., Kachour, M., Fadili, J., Peyré, G. and Chesneau, C. (2011). The degrees of freedom of the lasso for general design matrix. Available at arXiv:1111.1162.
Efron, B. (1986). How biased is the apparent error rate of a prediction rule? J. Amer. Statist. Assoc. 81 461–470.
Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004). Least angle regression (with discussion, and a rejoinder by the authors). Ann. Statist. 32 407–499.
Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 96 1348–1360.
Grünbaum, B. (2003). Convex Polytopes, 2nd ed. Graduate Texts in Mathematics 221. Springer, New York.
Hastie, T. J. and Tibshirani, R. J. (1990). Generalized Additive Models. Monographs on Statistics and Applied Probability 43. Chapman & Hall, London.
Loubes, J. M. and Massart, P. (2004). Discussion of “Least angle regression.” Ann. Statist. 32 460–465.
Mallows, C. (1973). Some comments on $C_p$. Technometrics 15 661–675.
Meyer, M. and Woodroofe, M. (2000). On the degrees of freedom in shape-restricted regression. Ann. Statist. 28 1083–1104.
Osborne, M. R., Presnell, B. and Turlach, B. A. (2000). On the LASSO and its dual. J. Comput. Graph. Statist. 9 319–337.
Rosset, S., Zhu, J. and Hastie, T. (2004). Boosting as a regularized path to a maximum margin classifier. J. Mach. Learn. Res. 5 941–973.
Schneider, R. (1993). Convex Bodies: The Brunn–Minkowski Theory. Encyclopedia of Mathematics and Its Applications 44. Cambridge Univ. Press, Cambridge.
Stein, C. M. (1981). Estimation of the mean of a multivariate normal distribution. Ann. Statist. 9 1135–1151.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58 267–288.
Tibshirani, R. J. (2011). The solution path of the generalized lasso. Ph.D. thesis, Dept. Statistics, Stanford Univ.
Tibshirani, R. J. and Taylor, J. (2011). The solution path of the generalized lasso. Ann. Statist. 39 1335–1371.
Vaiter, S., Peyré, G., Dossal, C. and Fadili, J. (2011). Robust sparse analysis regularization. Available at arXiv:1109.6222.
Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B Stat. Methodol. 67 301–320.
Zou, H., Hastie, T. and Tibshirani, R. (2007). On the “degrees of freedom” of the lasso. Ann. Statist. 35 2173–2192.