## The Annals of Statistics

### Nonparametric screening under conditional strictly convex loss for ultrahigh dimensional sparse data

Xu Han

#### Abstract

The sure screening technique has been considered a powerful tool for ultrahigh dimensional variable selection problems, where the dimensionality $p$ and the sample size $n$ can satisfy the NP dimensionality $\log p=O(n^{a})$ for some $a>0$ [J. R. Stat. Soc. Ser. B. Stat. Methodol. 70 (2008) 849–911]. The current paper aims to tackle the “universality” and “effectiveness” of sure screening procedures simultaneously. For “universality,” we develop a general and unified framework for nonparametric screening methods from a loss function perspective. Consider a loss function that measures the divergence between the response variable and the underlying nonparametric function of the covariates. We propose a new class of loss functions, called conditional strictly convex loss, which contains, but is not limited to, the negative log likelihood loss from one-parameter exponential families, the exponential loss for binary classification and the quantile regression loss. The sure screening property and model selection size control are established within this class of loss functions. For “effectiveness,” we focus on a goodness-of-fit nonparametric screening (Goffins) method under conditional strictly convex loss. Interestingly, we can achieve a better convergence probability of containing the true model than in the related literature. The superior performance of the proposed method is further demonstrated by extensive simulation studies and a real scientific data example.
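The loss-based screening idea described above can be illustrated with a minimal toy sketch. This is not the paper's Goffins procedure: it uses squared-error loss and a polynomial basis as crude stand-ins for the conditional strictly convex losses and B-spline bases studied in the paper, and the function name `goffins_scores` is a hypothetical label for this illustration. Each covariate is scored by how much a marginal univariate fit reduces the empirical loss relative to the intercept-only fit, and the top-ranked covariates are retained.

```python
import numpy as np

def goffins_scores(X, y, degree=3):
    """Toy goodness-of-fit marginal screening under squared-error loss.

    For each covariate, fit a univariate polynomial (a crude stand-in for
    a B-spline basis) by least squares and score the covariate by the
    reduction in empirical loss relative to the intercept-only fit.
    """
    n, p = X.shape
    null_loss = np.mean((y - y.mean()) ** 2)  # intercept-only loss
    scores = np.empty(p)
    for j in range(p):
        basis = np.vander(X[:, j], degree + 1)  # columns: x^degree, ..., x, 1
        coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
        fit_loss = np.mean((y - basis @ coef) ** 2)
        scores[j] = null_loss - fit_loss  # goodness-of-fit gain
    return scores

# Simulated sparse model: only covariates 0 and 1 matter,
# one nonlinearly (sin) and one through a quadratic effect.
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)

scores = goffins_scores(X, y)
top = np.argsort(scores)[::-1][:5]
print("top-ranked covariates:", top)
```

In practice the retained model size would be controlled by a data-driven threshold on the scores; here the ranking alone shows that the two active covariates dominate the 48 noise covariates.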

#### Article information

Source
Ann. Statist., Volume 47, Number 4 (2019), 1995–2022.

Dates
Revised: June 2018
First available in Project Euclid: 21 May 2019

https://projecteuclid.org/euclid.aos/1558425637

Digital Object Identifier
doi:10.1214/18-AOS1738

Mathematical Reviews number (MathSciNet)
MR3953442

Subjects
Primary: 62G99: None of the above, but in this section

#### Citation

Han, Xu. Nonparametric screening under conditional strictly convex loss for ultrahigh dimensional sparse data. Ann. Statist. 47 (2019), no. 4, 1995--2022. doi:10.1214/18-AOS1738. https://projecteuclid.org/euclid.aos/1558425637

#### References

• Anderson, M. J. and Robinson, J. (2001). Permutation tests for linear models. Aust. N. Z. J. Stat. 43 75–88.
• Barut, E., Fan, J. and Verhasselt, A. (2016). Conditional sure independence screening. J. Amer. Statist. Assoc. 111 1266–1277.
• Brègman, L. M. (1967). A relaxation method of finding a common point of convex sets and its application to the solution of problems in convex programming. Ž. Vyčisl. Mat. Mat. Fiz. 7 620–631.
• Buldygin, V. V. and Kozachenko, Yu. V. (2000). Metric Characterization of Random Variables and Random Processes. Translations of Mathematical Monographs 188. Amer. Math. Soc., Providence, RI.
• Candès, E. and Tao, T. (2007). The Dantzig selector: Statistical estimation when $p$ is much larger than $n$. Ann. Statist. 35 2313–2404.
• Chang, J., Tang, C. Y. and Wu, Y. (2013). Marginal empirical likelihood and sure independence feature screening. Ann. Statist. 41 2123–2148.
• de Boor, C. (1978). A Practical Guide to Splines. Applied Mathematical Sciences 27. Springer, New York.
• Fan, J. and Fan, Y. (2008). High-dimensional classification using features annealed independence rules. Ann. Statist. 36 2605–2637.
• Fan, J., Feng, Y. and Song, R. (2011). Nonparametric independence screening in sparse ultra-high-dimensional additive models. J. Amer. Statist. Assoc. 106 544–557.
• Fan, J., Feng, Y. and Tong, X. (2012). A road to classification in high dimensional space: The regularized optimal affine discriminant. J. R. Stat. Soc. Ser. B. Stat. Methodol. 74 745–771.
• Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 96 1348–1360.
• Fan, J. and Lv, J. (2008). Sure independence screening for ultrahigh dimensional feature space. J. R. Stat. Soc. Ser. B. Stat. Methodol. 70 849–911.
• Fan, J., Ma, Y. and Dai, W. (2014). Nonparametric independence screening in sparse ultra-high-dimensional varying coefficient models. J. Amer. Statist. Assoc. 109 1270–1284.
• Fan, J., Samworth, R. and Wu, Y. (2009). Ultrahigh dimensional feature selection: Beyond the linear model. J. Mach. Learn. Res. 10 2013–2038.
• Fan, J. and Song, R. (2010). Sure independence screening in generalized linear models with NP-dimensionality. Ann. Statist. 38 3567–3604.
• Freund, Y. and Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. System Sci. 55 119–139.
• Gao, Q., Wu, Y., Zhu, C. and Wang, Z. (2008). Asymptotic normality of maximum quasi-likelihood estimators in generalized linear models with fixed design. J. Syst. Sci. Complex. 21 463–473.
• Gordon, G. et al. (2002). Translation of microarray data into clinically relevant cancer diagnostic tests using gene expression ratios in lung cancer and mesothelioma. Cancer Res. 62 4963–4967.
• Han, X. (2019). Supplement to “Nonparametric screening under conditional strictly convex loss for ultrahigh dimensional sparse data.” DOI:10.1214/18-AOS1738SUPP.
• He, X., Wang, L. and Hong, H. G. (2013). Quantile-adaptive model-free variable screening for high-dimensional heterogeneous data. Ann. Statist. 41 342–369.
• Heyde, C. C. (1997). Quasi-Likelihood and Its Application: A General Approach to Optimal Parameter Estimation. Springer, New York.
• Koenker, R. (2005). Quantile Regression. Econometric Society Monographs 38. Cambridge Univ. Press, Cambridge.
• Laurent, B. and Massart, P. (2000). Adaptive estimation of a quadratic functional by model selection. Ann. Statist. 28 1302–1338.
• Li, R., Zhong, W. and Zhu, L. (2012). Feature screening via distance correlation learning. J. Amer. Statist. Assoc. 107 1129–1139.
• Li, G., Peng, H., Zhang, J. and Zhu, L. (2012). Robust rank correlation based screening. Ann. Statist. 40 1846–1877.
• Mai, Q. and Zou, H. (2015). The fused Kolmogorov filter: A nonparametric model-free screening method. Ann. Statist. 43 1471–1497.
• Meier, L., van de Geer, S. and Bühlmann, P. (2009). High-dimensional additive modeling. Ann. Statist. 37 3779–3821.
• Song, R., Lu, W., Ma, S. and Jeng, X. J. (2014). Censored rank independence screening for high-dimensional survival data. Biometrika 101 799–814.
• Stone, C. J. (1986). The dimensionality reduction principle for generalized additive models. Ann. Statist. 14 590–606.
• Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58 267–288.
• Weng, H., Feng, Y. and Qiao, X. (2017). Regularization after retention in ultrahigh dimensional linear regression models. Statist. Sinica. In press.
• Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. Ann. Statist. 38 894–942.
• Zhang, C., Jiang, Y. and Shang, Z. (2009). New aspects of Bregman divergence in regression and classification with parametric and nonparametric estimation. Canad. J. Statist. 37 119–139.
• Zhao, S. D. and Li, Y. (2012). Principled sure independence screening for Cox models with ultra-high-dimensional covariates. J. Multivariate Anal. 105 397–411.
• Zhu, L.-P., Li, L., Li, R. and Zhu, L.-X. (2011). Model-free feature screening for ultrahigh-dimensional data. J. Amer. Statist. Assoc. 106 1464–1475.
• Zou, H. (2006). The adaptive lasso and its oracle properties. J. Amer. Statist. Assoc. 101 1418–1429.
• Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B. Stat. Methodol. 67 301–320.
• Zou, H. and Li, R. (2008). One-step sparse estimates in nonconcave penalized likelihood models. Ann. Statist. 36 1509–1533.

#### Supplemental materials

• Supplement to “Nonparametric screening under conditional strictly convex loss for ultrahigh dimensional sparse data”. Owing to space constraints, all technical proofs and some numerical results are relegated to the Supplementary Material [Han (2019)].