## Abstract and Applied Analysis

### A Simpler Approach to Coefficient Regularized Support Vector Machines Regression

#### Abstract

We consider a class of support vector machine regression (SVMR) algorithms associated with ${l}^{q} (1\le q<\infty )$ coefficient-based regularization and a data-dependent hypothesis space. Compared with the existing literature, we provide a simpler convergence analysis for these algorithms. The novelty of our analysis lies in the estimation of the hypothesis error, which is carried out by setting up a stepping stone between the coefficient-regularized SVMR and the classical SVMR. An explicit learning rate is then derived under very mild conditions.
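The objective described in the abstract can be sketched concretely: minimize the empirical $\epsilon$-insensitive loss plus an ${l}^{q}$ penalty on the expansion coefficients, over the data-dependent hypothesis space $f(x)=\sum_{j}\alpha_{j}K(x_{j},x)$ spanned by the sample. The sketch below is illustrative only and is not the paper's analysis or implementation; the Gaussian kernel, the subgradient-descent solver, and all parameter values (`lam`, `eps`, `sigma`, step sizes) are assumptions chosen for demonstration.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=0.5):
    # Pairwise Gaussian kernel matrix; sigma is an illustrative bandwidth.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def coef_regularized_svr(X, y, lam=0.01, eps=0.05, q=1.0, steps=2000, lr=0.5):
    # Subgradient descent (an illustrative solver, not the paper's) on
    #   (1/m) * sum_i max(|f(x_i) - y_i| - eps, 0) + lam * sum_j |alpha_j|^q,
    # where f = sum_j alpha_j K(x_j, .) lies in the data-dependent space.
    K = gaussian_kernel(X, X)
    m = len(y)

    def objective(a):
        r = K @ a - y
        loss = np.maximum(np.abs(r) - eps, 0.0).mean()
        return loss + lam * np.sum(np.abs(a) ** q)

    alpha = np.zeros(m)
    best_alpha, best_obj = alpha.copy(), objective(alpha)
    for t in range(steps):
        r = K @ alpha - y
        # Subgradient of the eps-insensitive loss w.r.t. the coefficients.
        g_loss = K.T @ (np.sign(r) * (np.abs(r) > eps)) / m
        # Subgradient of the l^q penalty (sign(a) when q = 1).
        if q > 1.0:
            g_pen = lam * q * np.sign(alpha) * np.abs(alpha) ** (q - 1.0)
        else:
            g_pen = lam * np.sign(alpha)
        alpha = alpha - lr / np.sqrt(t + 1.0) * (g_loss + g_pen)
        obj = objective(alpha)
        if obj < best_obj:  # keep the best iterate, as is standard for subgradient methods
            best_alpha, best_obj = alpha.copy(), obj
    return best_alpha, best_obj

# Toy usage on noisy samples of sin(pi x).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 1))
y = np.sin(np.pi * X[:, 0]) + 0.05 * rng.standard_normal(30)
alpha, obj = coef_regularized_svr(X, y)
```

For $q=1$ the penalty promotes sparse coefficient vectors, which is the connection to the lasso and compressed-sensing literature cited below; the paper's contribution concerns the convergence analysis of such schemes, not this particular solver.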

#### Article information

Source
Abstr. Appl. Anal., Volume 2014 (2014), Article ID 206015, 8 pages.

Dates
First available in Project Euclid: 2 October 2014

Permanent link to this document
https://projecteuclid.org/euclid.aaa/1412277018

Digital Object Identifier
doi:10.1155/2014/206015

Mathematical Reviews number (MathSciNet)
MR3216037

Zentralblatt MATH identifier
07021924

#### Citation

Tong, Hongzhi; Chen, Di-Rong; Yang, Fenghong. A Simpler Approach to Coefficient Regularized Support Vector Machines Regression. Abstr. Appl. Anal. 2014 (2014), Article ID 206015, 8 pages. doi:10.1155/2014/206015. https://projecteuclid.org/euclid.aaa/1412277018

#### References

• N. Aronszajn, “Theory of reproducing kernels,” Transactions of the American Mathematical Society, vol. 68, pp. 337–404, 1950.
• V. Vapnik, S. Golowich, and A. Smola, “Support vector method for function approximation, regression estimation, and signal processing,” in Advances in Neural Information Processing Systems, M. Mozer, M. Jordan, and T. Petsche, Eds., vol. 9, pp. 281–287, MIT Press, Cambridge, Mass, USA, 1997.
• V. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, NY, USA, 1998.
• N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge University Press, Cambridge, UK, 2000.
• I. Steinwart and A. Christmann, Support Vector Machines, Springer, New York, NY, USA, 2008.
• H. Z. Tong, D. R. Chen, and L. Z. Peng, “Analysis of support vector machines regression,” Foundations of Computational Mathematics, vol. 9, no. 2, pp. 243–257, 2009.
• D.-H. Xiang, T. Hu, and D.-X. Zhou, “Approximation analysis of learning algorithms for support vector regression and quantile regression,” Journal of Applied Mathematics, vol. 2012, Article ID 902139, 17 pages, 2012.
• I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413–1457, 2004.
• D. Donoho, “For most large underdetermined systems of linear equations, the minimal ${l}^{1}$-norm solution is also the sparsest solution,” Tech. Rep., Stanford University, 2004.
• R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society B: Statistical Methodology, vol. 58, no. 1, pp. 267–288, 1996.
• E. J. Candès and T. Tao, “Decoding by linear programming,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
• Q. Wu and D.-X. Zhou, “Learning with sample dependent hypothesis spaces,” Computers and Mathematics with Applications, vol. 56, no. 11, pp. 2896–2907, 2008.
• Q.-W. Xiao and D.-X. Zhou, “Learning by nonsymmetric kernels with data dependent spaces and ${l}^{1}$-regularizer,” Taiwanese Journal of Mathematics, vol. 14, no. 5, pp. 1821–1836, 2010.
• L. Shi, Y.-L. Feng, and D.-X. Zhou, “Concentration estimates for learning with ${l}^{1}$-regularizer and data dependent hypothesis spaces,” Applied and Computational Harmonic Analysis, vol. 31, no. 2, pp. 286–302, 2011.
• H. Tong, D.-R. Chen, and F. Yang, “Support vector machines regression with ${l}^{1}$-regularizer,” Journal of Approximation Theory, vol. 164, no. 10, pp. 1331–1344, 2012.
• H.-Y. Wang, Q.-W. Xiao, and D.-X. Zhou, “An approximation theory approach to learning with ${l}^{1}$ regularization,” Journal of Approximation Theory, vol. 167, pp. 240–258, 2013.
• P. L. Bartlett, “The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network,” IEEE Transactions on Information Theory, vol. 44, no. 2, pp. 525–536, 1998.
• I. Steinwart, D. Hush, and C. Scovel, “An oracle inequality for clipped regularized risk minimizers,” in Advances in Neural Information Processing Systems, B. Schölkopf, J. Platt, and T. Hoffman, Eds., vol. 19, pp. 1321–1328, MIT Press, Cambridge, Mass, USA, 2007.
• S. Smale and D. X. Zhou, “Estimating the approximation error in learning theory,” Analysis and Applications, vol. 1, no. 1, pp. 17–41, 2003.
• D.-X. Zhou, “Density problem and approximation error in learning theory,” Abstract and Applied Analysis, vol. 2013, Article ID 715683, 13 pages, 2013.
• Q. Wu and D.-X. Zhou, “SVM soft margin classifiers: linear programming versus quadratic programming,” Neural Computation, vol. 17, no. 5, pp. 1160–1187, 2005.
• H. Z. Tong, D.-R. Chen, and F. H. Yang, “Least square regression with ${l}^{p}$-coefficient regularization,” Neural Computation, vol. 22, no. 12, pp. 3221–3235, 2010.
• Y.-L. Feng and S.-G. Lv, “Unified approach to coefficient-based regularized regression,” Computers and Mathematics with Applications, vol. 62, no. 1, pp. 506–515, 2011.
• I. Steinwart and A. Christmann, “Estimating conditional quantiles with the help of the pinball loss,” Bernoulli, vol. 17, no. 1, pp. 211–225, 2011.
• Q. Wu, Y. Ying, and D.-X. Zhou, “Multi-kernel regularized classifiers,” Journal of Complexity, vol. 23, no. 1, pp. 108–134, 2007.
• H. W. Sun and Q. Wu, “Least square regression with indefinite kernels and coefficient regularization,” Applied and Computational Harmonic Analysis, vol. 30, no. 1, pp. 96–109, 2011.
• H. W. Sun and Q. Wu, “Indefinite kernel network with dependent sampling,” Analysis and Applications, vol. 11, no. 5, Article ID 1350020, 15 pages, 2013.
• D.-X. Zhou, “Capacity of reproducing kernel spaces in learning theory,” IEEE Transactions on Information Theory, vol. 49, no. 7, pp. 1743–1752, 2003.