Open Access
Sparsity in multiple kernel learning
Vladimir Koltchinskii, Ming Yuan
Ann. Statist. 38(6): 3660-3695 (December 2010). DOI: 10.1214/10-AOS825

Abstract

The problem of multiple kernel learning based on penalized empirical risk minimization is discussed. The complexity penalty is determined jointly by the empirical L2 norms and the reproducing kernel Hilbert space (RKHS) norms induced by the kernels with a data-driven choice of regularization parameters. The main focus is on the case when the total number of kernels is large, but only a relatively small number of them is needed to represent the target function, so that the problem is sparse. The goal is to establish oracle inequalities for the excess risk of the resulting prediction rule showing that the method is adaptive both to the unknown design distribution and to the sparsity of the problem.
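As a rough illustration of the objective described above, the sketch below writes out a penalized empirical risk of the kind discussed: each component function lives in the RKHS induced by one kernel, and its penalty combines the empirical L2 norm and the RKHS norm, weighted by a regularization parameter. This is a hedged, minimal reconstruction, not the authors' code; the squared-error loss, the function name `penalized_risk`, and the specific weighting `eps * ||f_j||_n + eps**2 * ||f_j||_H` are illustrative assumptions (the paper's actual parameters are chosen in a data-driven way).

```python
import numpy as np

def penalized_risk(y, kernels, coefs, eps):
    """Empirical squared-error risk plus a combined-norm sparsity penalty.

    Illustrative sketch only. Each component is represented on the sample
    as f_j = K_j @ c_j, where K_j is the (n, n) Gram matrix of kernel j.

    y       : (n,) response vector
    kernels : list of (n, n) kernel Gram matrices K_j
    coefs   : list of (n,) coefficient vectors c_j
    eps     : regularization parameter (assumed given here)
    """
    fitted = sum(K @ c for K, c in zip(kernels, coefs))
    risk = np.mean((y - fitted) ** 2)  # empirical risk under squared loss
    penalty = 0.0
    for K, c in zip(kernels, coefs):
        f_vals = K @ c
        emp_l2 = np.sqrt(np.mean(f_vals ** 2))   # empirical L2 norm of f_j
        rkhs = np.sqrt(max(c @ K @ c, 0.0))      # RKHS norm of f_j
        # combined penalty: the L2 term promotes sparsity across kernels,
        # the RKHS term controls smoothness within each component
        penalty += eps * emp_l2 + eps ** 2 * rkhs
    return risk + penalty
```

With many candidate kernels but only a few relevant ones, minimizing such an objective drives most components' empirical L2 norms to zero, which is the sparsity the oracle inequalities quantify.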

Citation


Vladimir Koltchinskii, Ming Yuan. "Sparsity in multiple kernel learning." Ann. Statist. 38(6): 3660-3695, December 2010. https://doi.org/10.1214/10-AOS825

Information

Published: December 2010
First available in Project Euclid: 30 November 2010

zbMATH: 1204.62086
MathSciNet: MR2766864
Digital Object Identifier: 10.1214/10-AOS825

Subjects:
Primary: 62F12, 62G08
Secondary: 62J07

Keywords: high dimensionality, multiple kernel learning, oracle inequality, reproducing kernel Hilbert spaces, restricted isometry, sparsity

Rights: Copyright © 2010 Institute of Mathematical Statistics
