The Annals of Applied Statistics

Ensembling classification models based on phalanxes of variables with applications in drug discovery

Jabed H. Tomal, William J. Welch, and Ruben H. Zamar

Full-text: Open access


Statistical detection of a rare class of objects in a two-class classification problem can pose several challenges. Because the class of interest is rare in the training data, there is relatively little information in the known class response labels for model building. At the same time the available explanatory variables are often moderately high dimensional. In the four assays of our drug-discovery application, compounds are active or not against a specific biological target, such as lung cancer tumor cells, and active compounds are rare. Several sets of chemical descriptor variables from computational chemistry are available to classify the active versus inactive class; each can have up to thousands of variables characterizing molecular structure of the compounds. The statistical challenge is to make use of the richness of the explanatory variables in the presence of scant response information. Our algorithm divides the explanatory variables into subsets adaptively and passes each subset to a base classifier. The various base classifiers are then ensembled to produce one model to rank new objects by their estimated probabilities of belonging to the rare class of interest. The essence of the algorithm is to choose the subsets such that variables in the same group work well together; we call such groups phalanxes.
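The ensembling idea described above can be sketched in code. The sketch below is illustrative only and is not the authors' algorithm: the paper forms the phalanxes adaptively, whereas here the variable groups are simply passed in, and the toy nearest-centroid base classifier stands in for whatever base classifier is used. Each group of variables gets its own model, and the models' class-1 probabilities are averaged to rank new objects.

```python
# Illustrative sketch of a phalanx-style ensemble (NOT the paper's
# adaptive algorithm): one base classifier per variable group,
# probabilities averaged across groups to rank objects.
import math

def centroid(rows):
    # Mean vector of a list of equal-length feature rows.
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def dist(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class NearestCentroid:
    """Toy base classifier: class-1 probability from centroid distances."""
    def fit(self, X, y):
        self.c0 = centroid([x for x, t in zip(X, y) if t == 0])
        self.c1 = centroid([x for x, t in zip(X, y) if t == 1])
        return self

    def prob1(self, x):
        # Closer to the class-1 centroid -> probability nearer 1.
        d0, d1 = dist(x, self.c0), dist(x, self.c1)
        return d0 / (d0 + d1) if d0 + d1 > 0 else 0.5

def phalanx_ensemble(X, y, groups):
    """Fit one base classifier per variable group ("phalanx").

    `groups` is a list of lists of column indices; here it is assumed
    given, whereas the paper chooses the groups adaptively.
    Returns a scoring function that averages class-1 probabilities
    over the groups, for ranking new objects.
    """
    models = []
    for g in groups:
        Xg = [[row[j] for j in g] for row in X]
        models.append((g, NearestCentroid().fit(Xg, y)))

    def rank_scores(Xnew):
        scores = []
        for row in Xnew:
            ps = [m.prob1([row[j] for j in g]) for g, m in models]
            scores.append(sum(ps) / len(ps))  # ensemble average
        return scores

    return rank_scores
```

For example, with two hypothetical phalanxes `[[0, 1], [2, 3]]` over four descriptors, `phalanx_ensemble(X, y, [[0, 1], [2, 3]])` returns a scorer whose averaged probabilities rank objects resembling the rare class ahead of the rest.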

Article information

Ann. Appl. Stat., Volume 9, Number 1 (2015), 69-93.

First available in Project Euclid: 28 April 2015


Keywords: Clustering; model selection; quantitative structure–activity relationship; random forest; ranking; rare class


Tomal, Jabed H.; Welch, William J.; Zamar, Ruben H. Ensembling classification models based on phalanxes of variables with applications in drug discovery. Ann. Appl. Stat. 9 (2015), no. 1, 69–93. doi:10.1214/14-AOAS778.


