Annals of Statistics (Ann. Statist.)
Volume 36, Number 6 (2008), 2605–2637.
High-dimensional classification using features annealed independence rules
Classification using high-dimensional features arises frequently in many contemporary statistical studies, such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is poorly understood. In a seminal paper, Bickel and Levina [Bernoulli 10 (2004) 989–1010] show that the Fisher discriminant performs poorly due to diverging spectra, and they propose using the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as poor as random guessing, due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as poorly as random guessing. Thus, it is important to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
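The procedure the abstract describes can be sketched in a few lines: rank features by the absolute two-sample t-statistic, keep the top m, and classify with the independence rule (diagonal linear discriminant) on the retained features. The sketch below is a minimal illustration under Gaussian-with-common-variance assumptions, not the authors' implementation; the function name `fair_classifier` and the equal-prior decision rule are choices made here for clarity.

```python
import numpy as np

def fair_classifier(X, y, m):
    """Sketch of a features-annealed independence rule.

    Select the m features with the largest absolute two-sample
    t-statistics, then classify new points with the independence
    rule (diagonal LDA, equal priors) on those features.
    """
    X0, X1 = X[y == 0], X[y == 1]
    n0, n1 = len(X0), len(X1)
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled per-feature sample variance
    s2 = (((X0 - mu0) ** 2).sum(axis=0)
          + ((X1 - mu1) ** 2).sum(axis=0)) / (n0 + n1 - 2)
    # two-sample t-statistic for each feature
    t = (mu1 - mu0) / np.sqrt(s2 * (1.0 / n0 + 1.0 / n1))
    keep = np.argsort(-np.abs(t))[:m]  # indices of the m strongest features

    def predict(Xnew):
        # independence rule: compare variance-scaled squared distances
        # to each class centroid, using only the selected features
        d0 = ((Xnew[:, keep] - mu0[keep]) ** 2 / s2[keep]).sum(axis=1)
        d1 = ((Xnew[:, keep] - mu1[keep]) ** 2 / s2[keep]).sum(axis=1)
        return (d1 < d0).astype(int)

    return predict, keep
```

On synthetic data with a handful of informative features among many noise features, selecting a small m recovers most of the signal coordinates, whereas using all features lets the noise accumulate, which is the phenomenon the paper analyzes.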
First available in Project Euclid: 5 January 2009
Fan, Jianqing; Fan, Yingying. High-dimensional classification using features annealed independence rules. Ann. Statist. 36 (2008), no. 6, 2605--2637. doi:10.1214/07-AOS504. https://projecteuclid.org/euclid.aos/1231165181