Brazilian Journal of Probability and Statistics

Group selection in high-dimensional partially linear additive models

Fengrong Wei

Full-text: Open access

Abstract

We consider the problem of simultaneous variable selection and estimation in partially linear additive models with a large number of grouped variables in the linear part and a large number of nonparametric components. In our problem, the number of grouped variables may be larger than the sample size, but the number of important groups is "small" relative to the sample size. We apply the adaptive group Lasso to select the important groups, using spline bases to approximate the nonparametric components and the group Lasso to obtain an initial consistent estimator. Under appropriate conditions, it is shown that the group Lasso selects a number of groups comparable to the number of underlying important groups and is estimation consistent, and that the adaptive group Lasso selects the correct important groups with probability converging to one as the sample size increases and is therefore selection consistent. The results of simulation studies show that the adaptive group Lasso procedure works well with samples of moderate size. A real example is used to illustrate the application of the proposed penalized method.
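The two-stage procedure described above (first a group Lasso fit to obtain initial consistent estimates, then adaptive weights built from the resulting group norms for a second, adaptive group Lasso fit) can be sketched for the linear part of the model as follows. This is a minimal illustrative proximal-gradient implementation on simulated data; the solver, tuning values, and variable names are assumptions of this sketch, not the paper's actual algorithm, and the spline-approximated nonparametric components are omitted.

```python
import numpy as np

def group_lasso(X, y, groups, lam, weights=None, n_iter=500):
    """Proximal-gradient solver for the (weighted) group Lasso:
    minimize (1/2n)||y - Xb||^2 + lam * sum_g w_g ||b_g||_2."""
    n, p = X.shape
    if weights is None:
        weights = np.ones(len(groups))
    # Step size 1/L, where L is the Lipschitz constant of the gradient.
    step = n / (np.linalg.norm(X, ord=2) ** 2)
    b = np.zeros(p)
    for _ in range(n_iter):
        b = b - step * (X.T @ (X @ b - y) / n)       # gradient step
        for g, idx in enumerate(groups):             # group soft-thresholding
            norm = np.linalg.norm(b[idx])
            shrink = max(0.0, 1 - step * lam * weights[g] / norm) if norm > 0 else 0.0
            b[idx] = shrink * b[idx]
    return b

rng = np.random.default_rng(0)
n, J, d = 100, 10, 3                                 # 10 groups of 3 covariates
groups = [np.arange(j * d, (j + 1) * d) for j in range(J)]
X = rng.standard_normal((n, J * d))
beta = np.zeros(J * d)
beta[groups[0]] = [1.5, -1.0, 2.0]                   # only group 0 is important
y = X @ beta + 0.5 * rng.standard_normal(n)

# Stage 1: plain group Lasso gives an initial consistent estimator.
b_init = group_lasso(X, y, groups, lam=0.5)
norms = np.array([np.linalg.norm(b_init[g]) for g in groups])
# Stage 2: adaptive weights, inversely proportional to initial group norms.
w = np.where(norms > 1e-10, 1.0 / np.maximum(norms, 1e-10), 1e10)
b_ada = group_lasso(X, y, groups, lam=0.5, weights=w)

selected = [j for j, g in enumerate(groups) if np.linalg.norm(b_ada[g]) > 1e-8]
print("selected groups:", selected)
```

Groups that the initial fit shrinks to (near) zero receive very large weights and are excluded in the second stage, while groups with large initial norms are penalized only lightly, which is the mechanism behind the selection-consistency result stated in the abstract.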

Article information

Source
Braz. J. Probab. Stat., Volume 26, Number 3 (2012), 219-243.

Dates
First available in Project Euclid: 5 April 2012

Permanent link to this document
https://projecteuclid.org/euclid.bjps/1333632162

Digital Object Identifier
doi:10.1214/10-BJPS129

Mathematical Reviews number (MathSciNet)
MR2911703

Zentralblatt MATH identifier
1239.62048

Keywords
Adaptive group Lasso; group selection; high-dimensional data; selection consistency; semiparametric regression

Citation

Wei, Fengrong. Group selection in high-dimensional partially linear additive models. Braz. J. Probab. Stat. 26 (2012), no. 3, 219--243. doi:10.1214/10-BJPS129. https://projecteuclid.org/euclid.bjps/1333632162


References

  • Bach, F. R. (2008). Consistency of the group lasso and multiple kernel learning. Journal of Machine Learning Research 9, 1179–1225.
  • Chen, H. (1988). Convergence rates for parametric components in partly linear model. The Annals of Statistics 16, 136–146.
  • Chen, J. H. and Chen, Z. H. (2008). Extended BIC for small-n-large-p sparse GLM. Technical Report 241, Dept. Statistics, Univ. British Columbia.
  • Chiang, A. P., Beck, J. S., Yen, H.-J., Tayeh, M. K., Scheetz, T. E., Swiderski, R., Nishimura, D., Braun, T. A., Kim, K.-Y., Huang, J., Elbedour, K., Carmi, R., Slusarski, D. C., Casavant, T. L., Stone, E. M. and Sheffield, V. C. (2006). Homozygosity mapping with SNP arrays identifies a novel gene for Bardet–Biedl syndrome (BBS10). Proceedings of the National Academy of Sciences of the USA 103, 6287–6292.
  • Engle, R. F., Granger, C. W., Rice, J. and Weiss, A. (1986). Semiparametric estimates of the relation between weather and electricity sales. Journal of the American Statistical Association 81, 310–320.
  • Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96, 1348–1360.
  • Fan, J. and Peng, H. (2004). Nonconcave penalized likelihood with a diverging number of parameters. The Annals of Statistics 32, 928–961.
  • Greenshtein, E. and Ritov, Y. (2004). Persistence in high-dimensional linear predictor selection and the virtue of overparametrization. Bernoulli 10, 971–988.
  • Hastie, T. and Tibshirani, R. (1986). Generalized additive models. Statistical Science 1, 297–318.
  • Hastie, T. and Tibshirani, R. (1990). Generalized Additive Models. London: Chapman & Hall/CRC.
  • Heckman, N. E. (1986). Spline smoothing in a partly linear model. Journal of the Royal Statistical Society, Ser. B 48, 224–248.
  • Irizarry, R. A., Hobbs, B., Collin, F., Beazer-Barclay, Y. D., Antonellis, K. J., Scherf, U. and Speed, T. P. (2003). Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics 4, 249–264.
  • Knight, K. and Fu, W. J. (2000). Asymptotics for lasso-type estimators. The Annals of Statistics 28, 1356–1378.
  • Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the Lasso. The Annals of Statistics 34, 1436–1462.
  • Robinson, P. M. (1988). Root-n-consistent semiparametric regression. Econometrica 56, 931–954.
  • Scheetz, T. E., Kim, K.-Y. A., Swiderski, R. E., Philp, A. R., Braun, T. A., Knudtson, K. L., Dorrance, A. M., DiBona, G. F., Huang, J., Casavant, T. L., Sheffield, V. C. and Stone, E. M. (2006). Regulation of gene expression in the mammalian eye and its relevance to eye disease. Proceedings of the National Academy of Sciences of the USA 103, 14429–14434.
  • Schumaker, L. (1981). Spline Functions: Basic Theory. New York: Wiley.
  • Speckman, P. (1988). Kernel smoothing in partial linear models. Journal of the Royal Statistical Society, Ser. B 50, 413–436.
  • Stone, C. J. (1985). Additive regression and other nonparametric models. The Annals of Statistics 13, 689–705.
  • Stone, C. J. (1986). The dimensionality reduction principle for generalized additive models. The Annals of Statistics 14, 590–606.
  • Tibshirani, R. (1996). Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Ser. B 58, 267–288.
  • van de Geer, S. (2008). High-dimensional generalized linear models and the Lasso. The Annals of Statistics 36, 614–645.
  • Wahba, G. (1984). Partial spline models for the semiparametric estimation of functions of several variables. In Analyses for Time Series, Japan–US Joint Seminar 319–329. Tokyo: Institute of Statistical Mathematics.
  • Wainwright, M. (2006). Sharp thresholds for high-dimensional and noisy recovery of sparsity. Technical report, Dept. Statistics, UC Berkeley.
  • Wei, F. and Huang, J. (2010). Consistent group selection in high-dimensional linear regression. Bernoulli 16, 1369–1384.
  • Xie, H.-L. and Huang, J. (2009). SCAD-penalized regression in high-dimensional partially linear models. The Annals of Statistics 37, 673–696.
  • Zhang, C.-H. (2007). Penalized linear unbiased selection. Technical Report 2007-003, Dept. Statistics, Rutgers Univ.
  • Zhang, C.-H. and Huang, J. (2008). The sparsity and bias of the Lasso selection in high-dimensional linear regression. The Annals of Statistics 36, 1567–1594.
  • Zhao, P. and Yu, B. (2006). On model selection consistency of LASSO. Journal of Machine Learning Research 7, 2541–2563.
  • Zhou, S., Shen, X. and Wolfe, D. A. (1998). Local asymptotics for regression splines and confidence regions. The Annals of Statistics 26, 1760–1782.
  • Zou, H. (2006). The adaptive Lasso and its oracle properties. Journal of the American Statistical Association 101, 1418–1429.
  • Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Ser. B 67, 301–320.