Open Access
Fast learning rate of multiple kernel learning: Trade-off between sparsity and smoothness
Taiji Suzuki, Masashi Sugiyama
Ann. Statist. 41(3): 1381-1405 (June 2013). DOI: 10.1214/13-AOS1095

Abstract

We investigate the learning rate of multiple kernel learning (MKL) with $\ell_{1}$ and elastic-net regularizations. The elastic-net regularization is a composition of an $\ell_{1}$-regularizer for inducing sparsity and an $\ell_{2}$-regularizer for controlling smoothness. We focus on a sparse setting where the total number of kernels is large but the number of nonzero components of the ground truth is relatively small, and show convergence rates that are sharper than those previously known for both $\ell_{1}$ and elastic-net regularizations. Our analysis reveals some relations between the choice of a regularization function and the performance. If the ground truth is smooth, we show a faster convergence rate for the elastic-net regularization under fewer conditions than for the $\ell_{1}$-regularization; otherwise, a faster convergence rate is shown for the $\ell_{1}$-regularization.
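For concreteness, a schematic form of the estimator studied in this setting, assuming the squared loss and $M$ candidate kernels with reproducing kernel Hilbert spaces $\mathcal{H}_1,\dots,\mathcal{H}_M$ (the simplified notation and the regularization parameters $\lambda_1,\lambda_2 \ge 0$ are illustrative, not the paper's exact formulation), is

$$\hat{f} = \mathop{\mathrm{arg\,min}}_{f_m \in \mathcal{H}_m}\ \frac{1}{n}\sum_{i=1}^{n}\Big(y_i - \sum_{m=1}^{M} f_m(x_i)\Big)^{2} + \sum_{m=1}^{M}\Big(\lambda_1 \|f_m\|_{\mathcal{H}_m} + \lambda_2 \|f_m\|_{\mathcal{H}_m}^{2}\Big),$$

where setting $\lambda_2 = 0$ recovers $\ell_{1}$-MKL, while $\lambda_2 > 0$ adds the $\ell_{2}$ term that controls smoothness, which is the trade-off the title refers to.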

Citation

Taiji Suzuki, Masashi Sugiyama. "Fast learning rate of multiple kernel learning: Trade-off between sparsity and smoothness." Ann. Statist. 41(3): 1381-1405, June 2013. https://doi.org/10.1214/13-AOS1095

Information

Published: June 2013
First available in Project Euclid: 1 August 2013

zbMATH: 1273.62090
MathSciNet: MR3113815
Digital Object Identifier: 10.1214/13-AOS1095

Subjects:
Primary: 62F12, 62G08
Secondary: 62J07

Keywords: additive model, convergence rate, elastic-net, multiple kernel learning, reproducing kernel Hilbert spaces, restricted isometry, smoothness, sparse learning

Rights: Copyright © 2013 Institute of Mathematical Statistics
