Open Access
Rates of convergence in active learning
Steve Hanneke
Ann. Statist. 39(1): 333-361 (February 2011). DOI: 10.1214/10-AOS843

Abstract

We study the rates of convergence in generalization error achievable by active learning under various types of label noise. We also study the general problem of model selection for active learning with a nested hierarchy of hypothesis classes, and we propose an algorithm whose error rate provably converges to the best achievable error among classifiers in the hierarchy, at a rate adaptive to both the complexity of the optimal classifier and the noise conditions. In particular, we state sufficient conditions for these rates to be dramatically faster than those achievable by passive learning.
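To give a concrete sense of why active learning can need far fewer labels than passive learning, here is a minimal sketch of disagreement-based selective sampling in the style often attributed to CAL-type algorithms. This is an illustration only, not the algorithm proposed in the paper: it assumes a 1-D threshold class and noise-free labels, and the names (`cal_active_learn`, `true_t`) are my own. A label is requested only when the surviving version space disagrees on the point; otherwise the label is inferred for free.

```python
# Illustrative sketch of disagreement-based active learning (not the
# paper's adaptive model-selection algorithm). Assumptions: hypothesis
# class = 1-D thresholds on a grid, realizable (noise-free) labels.
import random

def cal_active_learn(stream, thresholds, true_t):
    """Return (final version space, number of labels queried)."""
    version_space = list(thresholds)  # thresholds still consistent so far
    queries = 0
    for x in stream:
        preds = {1 if x >= t else 0 for t in version_space}
        if len(preds) > 1:            # x lies in the disagreement region
            y = 1 if x >= true_t else 0   # query the (noise-free) oracle
            queries += 1
            version_space = [t for t in version_space
                             if (1 if x >= t else 0) == y]
        # otherwise all surviving hypotheses agree: no label needed
    return version_space, queries

random.seed(0)
thresholds = [i / 100 for i in range(101)]
stream = [random.random() for _ in range(500)]
vs, n_queries = cal_active_learn(stream, thresholds, true_t=0.37)
print(len(vs), n_queries)
```

In this realizable setting the number of queried labels grows far more slowly than the stream length, since most points eventually fall outside the disagreement region; the paper's contribution concerns what happens to such savings under various label-noise conditions and with an unknown hypothesis class complexity.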

Citation


Steve Hanneke. "Rates of convergence in active learning." Ann. Statist. 39 (1) 333 - 361, February 2011. https://doi.org/10.1214/10-AOS843

Information

Published: February 2011
First available in Project Euclid: 3 December 2010

zbMATH: 1274.62510
MathSciNet: MR2797849
Digital Object Identifier: 10.1214/10-AOS843

Subjects:
Primary: 62H30, 62L05, 68Q32, 68T05
Secondary: 62G99, 68Q10, 68Q25, 68T10, 68W40

Keywords: active learning, classification, model selection, oracle inequalities, selective sampling, sequential design, statistical learning theory

Rights: Copyright © 2011 Institute of Mathematical Statistics

Vol. 39 • No. 1 • February 2011