The Annals of Statistics

Covariate assisted screening and estimation

Zheng Tracy Ke, Jiashun Jin, and Jianqing Fan


Abstract

Consider a linear model $Y=X\beta+z$, where $X=X_{n,p}$ and $z\sim N(0,I_{n})$. The vector $\beta$ is unknown but is sparse in the sense that most of its coordinates are $0$. The main interest is to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao [Nonlinear Time Series: Nonparametric and Parametric Methods (2003) Springer]) and the change-point problem (Bhattacharya [In Change-Point Problems (South Hadley, MA, 1992) (1994) 28–56 IMS]), we are primarily interested in the case where the Gram matrix $G=X'X$ is nonsparse but sparsifiable by a finite order linear filter. We focus on the regime where signals are both rare and weak so that successful variable selection is very challenging but is still possible.
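
To make the sparsification idea concrete, here is a minimal numerical sketch in Python. It assumes the standard step-function parameterization of the change-point design (all variable names are ours, for illustration only): the Gram matrix of the raw design is fully dense, yet a first-order differencing filter renders it diagonal.

```python
import numpy as np

n = 8
# Change-point design (a common parameterization, used here only for
# illustration): column j of X is the step function 1{i >= j}.
X = np.tril(np.ones((n, n)))   # X[i, j] = 1 if i >= j
G = X.T @ X                    # Gram matrix G[j, k] = n - max(j, k): dense

# First-order differencing filter D: (D y)_i = y_i - y_{i-1}.
D = np.eye(n) - np.eye(n, k=-1)

# After filtering, the effective design D X is the identity, so the
# new Gram matrix (D X)'(D X) is sparse (here, exactly diagonal).
G_filtered = (D @ X).T @ (D @ X)

print(np.count_nonzero(G))           # n**2 = 64 nonzeros: fully dense
print(np.count_nonzero(G_filtered))  # n = 8 nonzeros: diagonal
```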

We approach this problem by a new procedure called covariate assisted screening and estimation (CASE). CASE first uses linear filtering to reduce the original setting to a new regression model in which the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us in conducting multivariate screening without visiting all submodels. Interacting with the signal sparsity, the graph enables us to decompose the original problem into many separate small-size subproblems (if only we knew where they were!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by a newly introduced patching technique. Together, these give rise to CASE, a two-stage screen-and-clean procedure [Fan and Song Ann. Statist. 38 (2010) 3567–3604; Wasserman and Roeder Ann. Statist. 37 (2009) 2178–2201]: we first identify candidate submodels by patching and screening, and then re-examine each candidate to remove false positives.
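
As a minimal sketch (not the authors' algorithm) of how the sparse graph drives the screening, the snippet below thresholds a sparsified Gram matrix into a graph of strong dependence (GOSD) and extracts its connected components; each component defines one of the small-size subproblems that can be screened separately. The function name gosd_components and the threshold delta are our own illustrative choices.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def gosd_components(G_sparse, delta=1e-4):
    """Threshold a (dense array holding a) sparsified Gram matrix into a
    graph of strong dependence (GOSD) and return its connected components,
    each of which is a small subproblem that can be screened on its own.
    """
    A = (np.abs(G_sparse) > delta).astype(int)   # adjacency: strong dependence
    np.fill_diagonal(A, 0)                       # drop self-loops
    n_comp, labels = connected_components(csr_matrix(A), directed=False)
    return [np.flatnonzero(labels == c) for c in range(n_comp)]
```

In CASE itself, screening visits small connected subgraphs of the GOSD (with patching to control information leakage) rather than whole components, and a cleaning step then removes false positives; the sketch above only illustrates the decomposition.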

For any variable selection procedure $\hat{\beta}$, we measure performance by the minimax Hamming distance between the sign vectors of $\hat{\beta}$ and $\beta$. We show that, in a broad class of situations where the Gram matrix is nonsparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.
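
For reference, the Hamming loss used here is elementary to compute; a one-line sketch under the obvious sign convention (names ours):

```python
import numpy as np

def sign_hamming(beta_hat, beta):
    """Number of coordinates at which the sign vectors of beta_hat and
    beta disagree (false positives + false negatives + sign errors)."""
    return int(np.sum(np.sign(beta_hat) != np.sign(beta)))
```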

Article information

Source
Ann. Statist. Volume 42, Number 6 (2014), 2202–2242.

Dates
First available in Project Euclid: 20 October 2014

Permanent link to this document
https://projecteuclid.org/euclid.aos/1413810726

Digital Object Identifier
doi:10.1214/14-AOS1243

Mathematical Reviews number (MathSciNet)
MR3269978

Zentralblatt MATH identifier
1310.62085

Subjects
Primary: 62J05: Linear regression 62J07: Ridge regression; shrinkage estimators
Secondary: 62C20: Minimax procedures 62F12: Asymptotic properties of estimators

Keywords
Asymptotic minimaxity; graph of least favorables (GOLF); graph of strong dependence (GOSD); Hamming distance; multivariate screening; phase diagram; rare and weak signal model; sparsity; variable selection

Citation

Ke, Zheng Tracy; Jin, Jiashun; Fan, Jianqing. Covariate assisted screening and estimation. Ann. Statist. 42 (2014), no. 6, 2202–2242. doi:10.1214/14-AOS1243. https://projecteuclid.org/euclid.aos/1413810726.



References

  • Andreou, E. and Ghysels, E. (2002). Detecting multiple breaks in financial market volatility dynamics. J. Appl. Econometrics 17 579–600.
  • Bhattacharya, P. K. (1994). Some aspects of change-point analysis. In Change-Point Problems (South Hadley, MA, 1992). Institute of Mathematical Statistics Lecture Notes—Monograph Series 23 28–56. IMS, Hayward, CA.
  • Candès, E. J. and Plan, Y. (2009). Near-ideal model selection by $\ell_1$ minimization. Ann. Statist. 37 2145–2177.
  • Chen, W. W., Hurvich, C. M. and Lu, Y. (2006). On the correlation matrix of the discrete Fourier transform and the fast solution of large Toeplitz systems for long-memory time series. J. Amer. Statist. Assoc. 101 812–822.
  • Donoho, D. L. and Huo, X. (2001). Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inform. Theory 47 2845–2862.
  • Donoho, D. and Jin, J. (2008). Higher criticism thresholding: Optimal feature selection when useful features are rare and weak. Proc. Natl. Acad. Sci. USA 105 14790–14795.
  • Donoho, D. L. and Stark, P. B. (1989). Uncertainty principles and signal recovery. SIAM J. Appl. Math. 49 906–931.
  • Fan, J., Guo, S. and Hao, N. (2012). Variance estimation using refitted cross-validation in ultrahigh dimensional regression. J. R. Stat. Soc. Ser. B Stat. Methodol. 74 37–65.
  • Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 96 1348–1360.
  • Fan, J. and Song, R. (2010). Sure independence screening in generalized linear models with NP-dimensionality. Ann. Statist. 38 3567–3604.
  • Fan, J., Xue, L. and Zou, H. (2014). Strong oracle optimality of folded concave penalized estimation. Ann. Statist. 42 819–849.
  • Fan, J. and Yao, Q. (2003). Nonlinear Time Series: Nonparametric and Parametric Methods. Springer, New York.
  • Friedman, J., Hastie, T. and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9 432–441.
  • Genovese, C. R., Jin, J., Wasserman, L. and Yao, Z. (2012). A comparison of the lasso and marginal regression. J. Mach. Learn. Res. 13 2107–2143.
  • Harchaoui, Z. and Lévy-Leduc, C. (2010). Multiple change-point estimation with a total variation penalty. J. Amer. Statist. Assoc. 105 1480–1493.
  • Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine 2 e124.
  • Ising, E. (1925). A contribution to the theory of ferromagnetism. Z. Phys. 31 253–258.
  • Ji, P. and Jin, J. (2012). UPS delivers optimal phase diagram in high-dimensional variable selection. Ann. Statist. 40 73–103.
  • Jin, J., Zhang, C.-H. and Zhang, Q. (2014). Optimality of graphlet screening in high dimensional variable selection. J. Mach. Learn. Res. 15 2723–2772.
  • Ke, Z. T., Jin, J. and Fan, J. (2014). Supplement to “Covariate assisted screening and estimation.” DOI:10.1214/14-AOS1243SUPP.
  • Lehmann, E. L. and Casella, G. (1998). Theory of Point Estimation, 2nd ed. Springer, New York.
  • Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. Ann. Statist. 34 1436–1462.
  • Moulines, E. and Soulier, P. (1999). Broadband log-periodogram regression of time series with long-range dependence. Ann. Statist. 27 1415–1439.
  • Niu, Y. S. and Zhang, H. (2012). The screening and ranking algorithm to detect DNA copy number variations. Ann. Appl. Stat. 6 1306–1326.
  • Olshen, A. B., Venkatraman, E. S., Lucito, R. and Wigler, M. (2004). Circular binary segmentation for the analysis of array-based DNA copy number data. Biostatistics 5 557–572.
  • Ray, B. K. and Tsay, R. S. (2000). Long-range dependence in daily stock volatilities. J. Bus. Econom. Statist. 18 254–262.
  • Siegmund, D. O. (2011). Personal communication.
  • Sun, T. and Zhang, C.-H. (2012). Scaled sparse linear regression. Biometrika 99 879–898.
  • Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 58 267–288.
  • Tibshirani, R. and Wang, P. (2008). Spatial smoothing and hot spot detection for CGH data using the fused lasso. Biostatistics 9 18–29.
  • Wasserman, L. and Roeder, K. (2009). High-dimensional variable selection. Ann. Statist. 37 2178–2201.
  • Yao, Y.-C. and Au, S. T. (1989). Least-squares estimation of a step function. Sankhyā Ser. A 51 370–381.
  • Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. Ann. Statist. 38 894–942.
  • Zhang, N. R., Siegmund, D. O., Ji, H. and Li, J. Z. (2010). Detecting simultaneous changepoints in multiple sequences. Biometrika 97 631–645.
  • Zhao, P. and Yu, B. (2006). On model selection consistency of Lasso. J. Mach. Learn. Res. 7 2541–2563.
  • Zou, H. (2006). The adaptive lasso and its oracle properties. J. Amer. Statist. Assoc. 101 1418–1429.

Supplemental materials

  • Supplementary material: Supplement to “Covariate assisted screening and estimation”. Owing to space constraints, the technical proofs are relegated to a supplementary document. It contains Sections A–C.