Electronic Journal of Statistics
Electron. J. Statist., Volume 1 (2007), 307-330.
Generalization error for multi-class margin classification
In this article, we study rates of convergence of the generalization error of multi-class margin classifiers. In particular, we develop an upper bound theory quantifying the generalization error of various large margin classifiers. The theory permits a treatment of general margin losses, convex or nonconvex, in the presence or absence of a dominating class. Three main results are established. First, for any fixed margin loss, there may be a trade-off between the ideal and actual generalization performances with respect to the choice of the class of candidate decision functions, which is governed by the trade-off between the approximation and estimation errors. Indeed, different margin losses lead to different ideal or actual performances in specific cases. Second, we demonstrate, in a problem of linear learning, that the convergence rate can be arbitrarily fast in the sample size n, depending on the joint distribution of the input/output pair. This goes beyond the anticipated rate O(n^{-1}). Third, we establish rates of convergence of several margin classifiers in feature selection, with the number of candidate variables p allowed to greatly exceed the sample size n, though growing no faster than exp(n).
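To make the notion of a margin loss concrete, the following minimal sketch evaluates one common multi-class margin loss, the Crammer-Singer hinge, on the decision-function values for a single example. This is an illustrative example only, not the specific losses or classifiers analyzed in the paper; the function name and toy scores are hypothetical.

```python
import numpy as np

def multiclass_hinge_loss(scores, y):
    """Crammer-Singer multi-class hinge loss for one example.

    scores : array of shape (K,), decision-function values f_k(x) for K classes
    y      : int, index of the true class
    Returns max(0, 1 + max_{k != y} scores[k] - scores[y]):
    zero when the true class beats every other class by margin >= 1.
    """
    margins = scores - scores[y] + 1.0  # per-class margin violations
    margins[y] = 0.0                    # exclude the true class from the max
    return max(0.0, margins.max())

# True class well separated: loss is zero
print(multiclass_hinge_loss(np.array([2.0, -1.0, 0.5]), 0))
# Margin violated by a competing class: positive loss
print(multiclass_hinge_loss(np.array([0.2, 0.5, 0.1]), 0))
```

A nonconvex margin loss, such as the truncated hinge, could be obtained by capping the returned value; the paper's theory covers both convex and nonconvex cases.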
First available in Project Euclid: 27 August 2007
Shen, Xiaotong; Wang, Lifeng. Generalization error for multi-class margin classification. Electron. J. Statist. 1 (2007), 307-330. doi:10.1214/07-EJS069. https://projecteuclid.org/euclid.ejs/1188226996