Open Access
Margin-adaptive model selection in statistical learning
Sylvain Arlot, Peter L. Bartlett
Bernoulli 17(2): 687–713 (May 2011). DOI: 10.3150/10-BEJ288

Abstract

A classical condition for fast learning rates is the margin condition, first introduced by Mammen and Tsybakov. In this paper, we tackle the problem of adaptivity to this condition in the context of model selection, in a general learning framework. In fact, we consider a weaker version of this condition, one that takes into account the fact that learning within a small model can be much easier than learning within a large one. Requiring this “strong margin adaptivity” makes the model selection problem more challenging. We first prove, in a general framework, that some penalization procedures (including those based on local Rademacher complexities) achieve this adaptivity when the models are nested; contrary to previous results, this holds with penalties that depend only on the data. Our second main result is that strong margin adaptivity is not always possible when the models are not nested: for every model selection procedure (even a randomized one), there is a problem for which it fails to be strongly margin adaptive.
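To fix ideas, here is a standard formulation of the objects named in the abstract; this is a sketch in common notation (loss $\gamma$, Bayes predictor $f^{*}$, empirical measure $P_{n}$, penalty $\mathrm{pen}$), not the paper's exact statement. The margin condition is often written in the Bernstein-type form

\[
\mathbb{E}\Big[\big(\gamma(f;Z)-\gamma(f^{*};Z)\big)^{2}\Big]
\;\le\; B\,\Big(\mathbb{E}\big[\gamma(f;Z)-\gamma(f^{*};Z)\big]\Big)^{\beta},
\qquad 0 \le \beta \le 1,
\]

where $Z$ denotes a data point; $\beta$ close to $1$ corresponds to a strong margin and hence fast rates. A penalization procedure of the kind studied here first performs empirical risk minimization within each model $F_{m}$ and then selects a model by minimizing a penalized empirical risk:

\[
\widehat{f}_{m} \in \operatorname*{arg\,min}_{f \in F_{m}} P_{n}\gamma(f),
\qquad
\widehat{m} \in \operatorname*{arg\,min}_{m \in \mathcal{M}}
\big\{ P_{n}\gamma(\widehat{f}_{m}) + \mathrm{pen}(m) \big\},
\]

where $\mathrm{pen}(m)$ may be, for instance, a local Rademacher complexity of $F_{m}$. Strong margin adaptivity then asks that $\widehat{f}_{\widehat{m}}$ satisfy an oracle inequality matching the best model under its own, possibly model-dependent, margin behavior, without prior knowledge of the margin parameters.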

Citation

Sylvain Arlot, Peter L. Bartlett. “Margin-adaptive model selection in statistical learning.” Bernoulli 17(2): 687–713, May 2011. https://doi.org/10.3150/10-BEJ288

Information

Published: May 2011
First available in Project Euclid: 5 April 2011

zbMATH: 1345.62087
MathSciNet: MR2787611
Digital Object Identifier: 10.3150/10-BEJ288

Keywords: adaptivity, empirical minimization, empirical risk minimization, local Rademacher complexity, margin condition, model selection, oracle inequalities, statistical learning

Rights: Copyright © 2011 Bernoulli Society for Mathematical Statistics and Probability
