Open Access
Arcing classifier (with discussion and a rejoinder by the author)
Leo Breiman
Ann. Statist. 26(3): 801-849 (June 1998). DOI: 10.1214/aos/1024691079

Abstract

Recent work has shown that combining multiple versions of unstable classifiers such as trees or neural nets results in reduced test set error. One of the more effective methods is bagging: modified training sets are formed by resampling from the original training set, classifiers are constructed using these training sets and then combined by voting. Freund and Schapire propose an algorithm whose basis is to adaptively resample and combine (hence the acronym “arcing”), so that the weights in the resampling are increased for those cases most often misclassified and the combining is done by weighted voting. Arcing is more successful than bagging at reducing test set error. We explore two arcing algorithms, compare them to each other and to bagging, and try to understand how arcing works. We introduce definitions of bias and variance for a classifier as components of the test set error. Unstable classifiers can have low bias on a large range of data sets; their problem is high variance. Combining multiple versions, either through bagging or arcing, reduces variance significantly.
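
As a concrete illustration of the two schemes the abstract contrasts, the following minimal sketch (in Python; not the paper's code) implements bagging as uniform bootstrap resampling with a plain majority vote, and arcing in the Freund-Schapire style, where resampling weights are increased on misclassified cases and each classifier votes with weight log((1 - eps)/eps). The function names, the use of scikit-learn decision stumps as the base classifier, and the {-1, +1} label convention are assumptions made for illustration only.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def bagging(X, y, n_rounds=50, seed=None):
        """Bagging: uniform bootstrap resamples, unweighted majority vote.

        X, y are NumPy arrays; labels are assumed to be in {-1, +1}.
        """
        rng = np.random.default_rng(seed)
        n = len(X)
        models = []
        for _ in range(n_rounds):
            idx = rng.integers(0, n, size=n)  # uniform resampling with replacement
            models.append(DecisionTreeClassifier(max_depth=1).fit(X[idx], y[idx]))

        def predict(X_new):
            votes = np.stack([m.predict(X_new) for m in models])
            return np.where(votes.sum(axis=0) >= 0, 1, -1)  # plain majority vote

        return predict

    def arcing(X, y, n_rounds=50, seed=None):
        """Arcing (Freund-Schapire style): adaptive resampling, weighted vote."""
        rng = np.random.default_rng(seed)
        n = len(X)
        w = np.full(n, 1.0 / n)  # resampling weights over training cases
        models, alphas = [], []
        for _ in range(n_rounds):
            idx = rng.choice(n, size=n, p=w)  # resample by current weights
            m = DecisionTreeClassifier(max_depth=1).fit(X[idx], y[idx])
            miss = m.predict(X) != y
            eps = w[miss].sum()  # weighted error on the full training set
            if eps <= 0.0 or eps >= 0.5:  # degenerate round: reset weights, retry
                w = np.full(n, 1.0 / n)
                continue
            beta = (1.0 - eps) / eps
            w = np.where(miss, w * beta, w)  # increase weight of misclassified cases
            w /= w.sum()
            models.append(m)
            alphas.append(np.log(beta))  # this classifier's voting weight

        def predict(X_new):
            scores = sum(a * m.predict(X_new) for a, m in zip(alphas, models))
            return np.where(scores >= 0, 1, -1)  # weighted vote

        return predict

Decision stumps here merely stand in for the unstable classifiers (trees, neural nets) studied in the paper; deeper trees would be closer to the high-variance base learners whose error bagging and arcing reduce.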

Citation


Leo Breiman. "Arcing classifier (with discussion and a rejoinder by the author)." Ann. Statist. 26(3): 801-849, June 1998. https://doi.org/10.1214/aos/1024691079

Information

Published: June 1998
First available in Project Euclid: 21 June 2002

MathSciNet: MR1635406
Digital Object Identifier: 10.1214/aos/1024691079

Subjects:
Primary: 62H30

Keywords: bagging, boosting, decision trees, ensemble methods, error-correcting, Markov chain, Monte Carlo, neural networks, output coding

Rights: Copyright © 1998 Institute of Mathematical Statistics
