Open Access
February 2004
Optimal aggregation of classifiers in statistical learning
Alexander B. Tsybakov
Ann. Statist. 32(1): 135-166 (February 2004). DOI: 10.1214/aos/1079120131

Abstract

Classification can be considered as nonparametric estimation of sets, where the risk is defined by means of a specific distance between sets associated with misclassification error. It is shown that the rates of convergence of classifiers depend on two parameters: the complexity of the class of candidate sets and the margin parameter. The dependence is explicitly given, indicating that optimal fast rates approaching $O(n^{-1})$ can be attained, where $n$ is the sample size, and that the proposed classifiers have the property of robustness to the margin. The main result of the paper concerns optimal aggregation of classifiers: we suggest a classifier that automatically adapts both to the complexity and to the margin, and attains the optimal fast rates, up to a logarithmic factor.
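
As a reader's aid, the two parameters named in the abstract can be sketched in the notation standard for this line of work; the symbols below ($\eta$, $\kappa$, $\rho$, $c$, $t_0$) belong to this sketch and are not quoted from the paper itself.

% Sketch of the margin and complexity conditions in standard notation
% (assumed, not quoted from the paper). Here eta(x) = P(Y = 1 | X = x)
% is the regression function and G* = {x : eta(x) >= 1/2} the Bayes set.
%
% Margin assumption with parameter kappa > 1: eta(X) is rarely close
% to the critical level 1/2,
\[
  P_X\bigl( 0 < \lvert \eta(X) - \tfrac{1}{2} \rvert \le t \bigr)
  \;\le\; c\, t^{1/(\kappa - 1)},
  \qquad 0 < t \le t_0 .
\]
% Complexity assumption: the class of candidate sets has
% epsilon-entropy bounded by A * epsilon^{-rho} for some rho > 0.
% Under these two conditions the optimal rate for the excess risk is
\[
  n^{-\kappa / (2\kappa + \rho - 1)},
\]
% which tends to the fast rate O(n^{-1}) as kappa -> 1 and rho -> 0,
% consistent with the abstract's claim.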

Citation

Alexander B. Tsybakov. "Optimal aggregation of classifiers in statistical learning." Ann. Statist. 32(1): 135-166, February 2004. https://doi.org/10.1214/aos/1079120131

Information

Published: February 2004
First available in Project Euclid: 12 March 2004

zbMATH: 1105.62353
MathSciNet: MR2051002
Digital Object Identifier: 10.1214/aos/1079120131

Subjects:
Primary: 62G07
Secondary: 62G08, 62H30, 68T10

Keywords: aggregation of classifiers, classification, complexity of classes of sets, empirical processes, margins, optimal rates, statistical learning

Rights: Copyright © 2004 Institute of Mathematical Statistics
