The Annals of Statistics

The composite absolute penalties family for grouped and hierarchical variable selection

Peng Zhao, Guilherme Rocha, and Bin Yu

Abstract

Extracting useful information from high-dimensional data is an important focus of today’s statistical research and practice. Penalized loss function minimization has been shown to be effective for this task both theoretically and empirically. With the virtues of both regularization and sparsity, the L1-penalized squared error minimization method, the Lasso, has been popular in regression models and beyond.
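
In standard notation (a reference formula, not part of the original abstract), the Lasso estimate mentioned above solves

    \hat{\beta}(\lambda) = \arg\min_{\beta} \| y - X\beta \|_2^2 + \lambda \sum_{j=1}^{p} |\beta_j|,

where λ ≥ 0 is the regularization parameter controlling the trade-off between fit and sparsity.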

In this paper, we combine different norms, including the L1 norm, to form an intelligent penalty that adds side information to the fitting of a regression or classification model and yields reasonable estimates. Specifically, we introduce the Composite Absolute Penalties (CAP) family, which allows given grouping and hierarchical relationships between the predictors to be expressed. CAP penalties are built by defining groups and combining the properties of norm penalties at the across-group and within-group levels. Grouped selection occurs for nonoverlapping groups. Hierarchical variable selection is reached by defining groups with particular overlapping patterns. We propose using the BLASSO and cross-validation to compute CAP estimates in general. For a subfamily of CAP estimates involving only the L1 and L∞ norms, we introduce the iCAP algorithm to trace the entire regularization path for the grouped selection problem. Within this subfamily, unbiased estimates of the degrees of freedom (df) are derived so that the regularization parameter can be selected without cross-validation. CAP is shown to improve on the predictive performance of the Lasso in a series of simulated experiments, including cases with p ≫ n and possibly mis-specified groupings. When the complexity of a model is properly calculated, iCAP is seen to be parsimonious in the experiments.
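
To make the construction concrete, here is a minimal NumPy sketch of a CAP penalty (our illustration, not code from the paper; the function name cap_penalty and the toy coefficients and group indices are hypothetical). With an outer L1 norm and a within-group norm such as L∞, non-overlapping groups give grouped selection as in the iCAP subfamily, and singleton groups recover the ordinary Lasso penalty.

    import numpy as np

    def cap_penalty(beta, groups, within=np.inf, outer=1.0):
        # CAP composes two norms: a `within`-norm taken over each group of
        # coefficients, then an `outer`-norm (raised to the power `outer`)
        # taken across the resulting vector of group norms.
        group_norms = np.array([np.linalg.norm(beta[g], ord=within) for g in groups])
        if np.isinf(outer):
            return float(group_norms.max())
        return float(np.sum(group_norms ** outer))

    beta = np.array([0.5, -1.2, 0.0, 0.3, 0.0])

    # Non-overlapping groups with L1 across / L-infinity within (the iCAP subfamily):
    groups = [np.array([0, 1]), np.array([2, 3, 4])]
    print(cap_penalty(beta, groups))                    # 1.2 + 0.3 = 1.5

    # Singleton groups recover the Lasso penalty ||beta||_1:
    singletons = [np.array([j]) for j in range(len(beta))]
    print(cap_penalty(beta, singletons))                # 2.0

    # Overlapping groups encode hierarchy: penalizing {0, 1} together with {1}
    # puts an extra charge on coefficient 1, so it tends to enter the fitted
    # model only after coefficient 0 does.
    print(cap_penalty(beta, [np.array([0, 1]), np.array([1])]))  # 1.2 + 1.2 = 2.4

Evaluating the penalty is of course only one ingredient; tracing the full regularization path and estimating df, as the iCAP algorithm does for the outer-L1/within-L∞ choice, is beyond this sketch.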

Article information

Source
Ann. Statist. Volume 37, Number 6A (2009), 3468–3497.

Dates
First available in Project Euclid: 17 August 2009

Permanent link to this document
http://projecteuclid.org/euclid.aos/1250515393

Digital Object Identifier
doi:10.1214/07-AOS584

Mathematical Reviews number (MathSciNet)
MR2549566

Zentralblatt MATH identifier
05644286

Subjects
Primary: 62J07: Ridge regression; shrinkage estimators

Keywords
Linear regression; penalized regression; variable selection; coefficient paths; grouped selection; hierarchical models

Citation

Zhao, Peng; Rocha, Guilherme; Yu, Bin. The composite absolute penalties family for grouped and hierarchical variable selection. The Annals of Statistics 37 (2009), no. 6A, 3468–3497. doi:10.1214/07-AOS584. http://projecteuclid.org/euclid.aos/1250515393.



References

  • [1] Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In Proc. 2nd International Symposium on Information Theory 267–281.
  • [2] Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge Univ. Press, Cambridge.
  • [3] Breiman, L. (1995). Better subset regression using the nonnegative garrote. Technometrics 37 373–384.
  • [4] Chen, S., Donoho, D. and Saunders, M. (2001). Atomic decomposition by basis pursuit. SIAM Rev. 43 129–159.
  • [5] Donoho, D. and Johnstone, I. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika 81 425–455.
  • [6] Efron, B. (1982). The Jackknife, the Bootstrap and Other Resampling Plans. SIAM, Philadelphia.
  • [7] Efron, B. (2004). The estimation of prediction error covariance penalties and cross-validation. J. Amer. Statist. Assoc. 99 619–632.
  • [8] Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004). Least angle regression. Ann. Statist. 32 407–499.
  • [9] Frank, I. E. and Friedman, J. (1993). A statistical view of some chemometrics regression tools. Technometrics 35 109–148.
  • [10] Freund, Y. and Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. System Sci. 55 119–139.
  • [11] Golub, T. R., Slonim, D. K., Tamayo, P., Huard, C., Gaasenbeek, M., Mesirov, J. P., Coller, H., Loh, M. L., Downing, J. R., Caligiuri, M. A., Bloomfield, C. D. and Lander, E. S. (1999). Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science 286 531–537.
  • [12] Hoerl, A. E. and Kennard, R. W. (1970). Ridge regression: Biased estimation of nonorthogonal problems. Technometrics 12 55–67.
  • [13] Kaufman, L. and Rousseeuw, P. J. (1990). Finding Groups in Data: An Introduction to Cluster Analysis. Wiley, New York.
  • [14] Kim, Y., Kim, J. and Kim, Y. (2006). Blockwise sparse regression. Statist. Sinica 16 375–390.
  • [15] Mallows, C. L. (1973). Some comments on Cp. Technometrics 15 661–675.
  • [16] Obozinski, G. and Jordan, M. (2009). Multi-task feature selection. J. Stat. Comput. To appear.
  • [17] Osborne, M., Presnell, B. and Turlach, B. (2000). A new approach to variable selection in least square problems. IMA J. Numer. Anal. 20 389–404.
  • [18] Rosset, S. and Zhu, J. (2007). Piecewise linear regularized solution paths. Ann. Statist. 35 1012–1030.
  • [19] Schwarz, G. (1978). Estimating the dimension of a model. Ann. Statist. 6 461–464.
  • [20] Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. J. Roy. Statist. Soc. Ser. B Methodol. 36 111–147.
  • [21] Sugiura, N. (1978). Further analysis of the data by Akaike’s information criterion and finite corrections. Comm. Statist. A7 13–26.
  • [22] Tibshirani, R. (1996). Regression shrinkage and selection via the Lasso. J. Roy. Statist. Soc. Ser. B 58 267–288.
  • [23] Yuan, M. and Lin, Y. (2006). Model selection and estimation in regression with grouped variables. J. Roy. Statist. Soc. Ser. B 68 49–67.
  • [24] Zhao, P. and Yu, B. (2007). Stagewise Lasso. J. Mach. Learn. Res. 8 2701–2726.
  • [25] Zhao, P., Rocha, G. and Yu, B. (2006). Grouped and hierarchical model selection through composite absolute penalties. Technical Report 703, Dept. Statistics, UC Berkeley.
  • [26] Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. J. Roy. Statist. Soc. Ser. B 67 301–320.
  • [27] Zou, H., Hastie, T. and Tibshirani, R. (2007). On the “degrees of freedom” of the Lasso. Ann. Statist. 35 2173–2192.