Open Access
Sharper lower bounds on the performance of the empirical risk minimization algorithm
Guillaume Lecué, Shahar Mendelson
Bernoulli 16(3): 605-613 (August 2010). DOI: 10.3150/09-BEJ225

Abstract

We present an argument based on the multidimensional and the uniform central limit theorems, proving that, under certain geometric assumptions on the target function $T$ and the learning class $F$, the excess risk of the empirical risk minimization algorithm is lower bounded by $$\frac{\mathbb{E}\sup_{q\in Q}G_{q}}{\sqrt{n}}\delta,$$ where $(G_q)_{q\in Q}$ is a canonical Gaussian process associated with $Q$ (a well-chosen subset of $F$) and $\delta$ is a parameter governing the oscillations of the empirical excess risk function over a small ball in $F$.
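The quantity $\mathbb{E}\sup_{q\in Q}G_q$ in the bound can be estimated numerically. As a minimal sketch, assume $Q$ is a finite subset of $\mathbb{R}^d$ and the canonical Gaussian process is $G_q=\langle g,q\rangle$ with $g\sim N(0,I_d)$; the class $Q$, the sample size $n$, and the oscillation parameter $\delta$ below are illustrative placeholders, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite class Q of d-dimensional points (for illustration only;
# in the paper Q is a well-chosen subset of the learning class F).
d, m = 5, 50
Q = rng.normal(size=(m, d))

def expected_sup_gaussian(Q, n_mc=20000, rng=rng):
    """Monte Carlo estimate of E sup_{q in Q} G_q for the canonical
    Gaussian process G_q = <g, q>, g ~ N(0, I_d)."""
    g = rng.normal(size=(n_mc, Q.shape[1]))   # n_mc draws of g
    return (g @ Q.T).max(axis=1).mean()       # average of sup over Q

n = 1000      # sample size
delta = 0.1   # oscillation parameter (illustrative value)
lower_bound = expected_sup_gaussian(Q) / np.sqrt(n) * delta
print(lower_bound)
```

The lower bound thus scales as $n^{-1/2}$ times a Gaussian complexity of $Q$, scaled by $\delta$.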

Citation


Guillaume Lecué, Shahar Mendelson. "Sharper lower bounds on the performance of the empirical risk minimization algorithm." Bernoulli 16 (3), 605–613, August 2010. https://doi.org/10.3150/09-BEJ225

Information

Published: August 2010
First available in Project Euclid: 6 August 2010

zbMATH: 1220.62007
MathSciNet: MR2730641
Digital Object Identifier: 10.3150/09-BEJ225

Keywords: empirical risk minimization, learning theory, lower bound, multidimensional central limit theorem, uniform central limit theorem

Rights: Copyright © 2010 Bernoulli Society for Mathematical Statistics and Probability

Vol.16 • No. 3 • August 2010