Error Bounds for ℓp-Norm Multiple Kernel Learning with Least Square Loss
Shao-Gao Lv, Jin-De Zhu
Abstr. Appl. Anal. 2012: 1-18 (2012). DOI: 10.1155/2012/915920

Abstract

The problem of learning the kernel function as a linear combination of multiple kernels has recently attracted considerable attention in machine learning. In particular, by imposing an ℓp-norm penalty on the kernel combination coefficients, multiple kernel learning (MKL) has proved useful and effective both in theoretical analysis and in practical applications (Kloft et al., 2009, 2011). In this paper, we present a theoretical analysis of the approximation error and learning ability of ℓp-norm MKL. Our analysis yields explicit learning rates for ℓp-norm MKL and demonstrates some notable advantages over traditional kernel-based learning algorithms in which the kernel is fixed.
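As a concrete illustration of the setting the abstract describes, the following is a minimal NumPy sketch of the alternating optimization scheme commonly used for ℓp-norm MKL with least square loss (in the spirit of the closed-form weight update of Kloft et al.); it is not the authors' exact algorithm, and the function name, regularization parameter, and iteration count are illustrative assumptions.

```python
import numpy as np

def lp_mkl_ls(kernels, y, p=2.0, lam=1e-2, iters=50):
    """Hypothetical sketch of lp-norm MKL with least square loss.

    Alternates between (a) kernel ridge regression on the weighted
    combined kernel and (b) a closed-form update of the kernel weights
    theta, kept on the lp-sphere ||theta||_p = 1.
    """
    n, M = len(y), len(kernels)
    theta = np.full(M, M ** (-1.0 / p))  # uniform start with ||theta||_p = 1
    alpha = np.zeros(n)
    for _ in range(iters):
        # (a) least square solution for the combined kernel
        K = sum(t * Km for t, Km in zip(theta, kernels))
        alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
        # (b) block norms ||w_m||^2 = theta_m^2 * alpha' K_m alpha,
        # then the standard lp update theta_m ∝ ||w_m||^(2/(p+1))
        w2 = np.array([t**2 * (alpha @ Km @ alpha)
                       for t, Km in zip(theta, kernels)])
        theta = np.sqrt(np.maximum(w2, 1e-12)) ** (2.0 / (p + 1))
        theta /= np.linalg.norm(theta, ord=p)  # renormalize to the lp-sphere
    return theta, alpha
```

For p = 1 this recovers the sparse MKL regime, while larger p spreads weight more uniformly across the candidate kernels, which is the trade-off the paper's error bounds quantify.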

Citation


Shao-Gao Lv, Jin-De Zhu. "Error Bounds for ℓp-Norm Multiple Kernel Learning with Least Square Loss." Abstr. Appl. Anal. 2012: 1-18 (2012). https://doi.org/10.1155/2012/915920

Information

Published: 2012
First available in Project Euclid: 14 December 2012

zbMATH: 1280.68177
MathSciNet: MR2959739
Digital Object Identifier: 10.1155/2012/915920

Rights: Copyright © 2012 Hindawi


