Abstract
The concept of pointwise Fisher consistency (or classification calibration) gives necessary and sufficient conditions for Bayes consistency when a classifier minimizes a surrogate loss function instead of the 0-1 loss. We present a family of multiclass hinge loss functions defined by a continuous control parameter $\lambda$ representing the margin of the positive points of a given class. The parameter $\lambda$ allows shifting from classification-uncalibrated to classification-calibrated loss functions. Although previous results suggest that increasing the margin of positive points has positive effects on the classification model, other approaches have failed to give increasing weight to the positive examples without losing the classification calibration property. Our $\lambda$-based loss function can give unlimited weight to the positive examples without breaking the classification calibration property. Moreover, when these loss functions are embedded into the Support Vector Machine framework ($\lambda$-SVM), the parameter $\lambda$ defines different regions for the Karush-Kuhn-Tucker conditions. A large margin on positive points also facilitates faster convergence of the Sequential Minimal Optimization algorithm, leading to lower training times than other classification-calibrated methods. $\lambda$-SVM allows easy implementation, and its practical use on different datasets not only supports our theoretical analysis, but also provides good classification performance and fast training times.
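To make the idea of a variable margin on positive examples concrete, the sketch below implements one plausible multiclass hinge-style loss in which the true-class (positive) score must exceed a margin $\lambda$ while the other classes keep a unit margin. The function name, the exact functional form, and the score convention are illustrative assumptions for exposition; they are not the precise loss defined in the paper.

```python
import numpy as np

def lambda_hinge_loss(scores, y, lam=1.0):
    """Illustrative multiclass hinge-style loss with a separate margin `lam`
    on the positive (true-class) score.

    scores : per-class decision values f_j(x), shape (n_classes,)
    y      : index of the true class
    lam    : margin required of the positive example; lam = 1 recovers a
             symmetric unit-margin formulation.

    NOTE: a sketch of the general idea described in the abstract, not the
    exact loss function of Rodriguez-Lujan and Huerta.
    """
    # Penalty when the true-class score falls short of the margin `lam`.
    positive_term = max(0.0, lam - scores[y])
    # Standard unit-margin penalty on every other class's score.
    negative_terms = sum(max(0.0, 1.0 + s)
                         for j, s in enumerate(scores) if j != y)
    return positive_term + negative_terms

# Example: three classes, true class 0. A larger `lam` demands a larger
# margin from the positive example before its penalty vanishes.
scores = np.array([0.8, -0.5, -1.2])
print(lambda_hinge_loss(scores, y=0, lam=1.0))  # 0.2 + 0.5 = 0.7
print(lambda_hinge_loss(scores, y=0, lam=3.0))  # 2.2 + 0.5 = 2.7
```

Increasing `lam` weights the positive example more heavily without altering how the negative-class scores are penalized, which is the behavior the abstract attributes to the $\lambda$ parameter.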
Citation
Irene Rodriguez-Lujan and Ramon Huerta. "A Fisher consistent multiclass loss function with variable margin on positive examples." Electron. J. Statist. 9(2): 2255-2292, 2015. https://doi.org/10.1214/15-EJS1073