Abstract
Boosting is one of the most important ensemble classifiers to emerge in the past decade. [10] provides a statistical insight: AdaBoost can be viewed as Newton-like updates minimizing an exponential criterion. This powerful insight, however, does not address (1) whether the Newton updates converge, and (2) if they do converge, whether the procedure converges to the Bayes procedure. Under a normal-normal setting, we cast the learning problem as a Bayesian minimization problem. It is shown that the Bayes procedure can be obtained via iterative Newton updates minimizing an exponential criterion. In addition, the step sizes of AdaBoost are shown to be highly effective and lead to one-step convergence. While our results rest on strong distributional assumptions, they require few conditions on the complexity of the base learners and no regularization of the step sizes or of the number of boosting iterations.
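The stagewise view referenced above can be made concrete with a small sketch: AdaBoost greedily minimizes the exponential criterion Σ_i exp(−y_i F(x_i)), and its step size α_t = ½ log((1 − err_t)/err_t) is exactly the Newton-like update of [10]. The toy one-dimensional data and threshold stumps below are illustrative assumptions, not the paper's normal-normal setting.

```python
# Sketch: AdaBoost as stagewise minimization of the exponential criterion
# sum_i exp(-y_i * F(x_i)). Toy 1-D data and threshold stumps are assumed
# for illustration only.
import math

X = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
y = [1, 1, -1, 1, -1, -1]          # labels in {-1, +1}

def stump(theta, s):
    # s = +1 predicts +1 to the left of theta; s = -1 flips the orientation
    return lambda x: s if x < theta else -s

def best_stump(w):
    # base learner: threshold/orientation minimizing weighted 0-1 error
    best, best_err = None, float("inf")
    for theta in [x + 0.5 for x in X]:
        for s in (1, -1):
            h = stump(theta, s)
            err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
            if err < best_err:
                best, best_err = h, err
    return best, best_err

def adaboost(T):
    w = [1.0 / len(X)] * len(X)
    ensemble = []                   # list of (alpha_t, h_t)
    for _ in range(T):
        h, err = best_stump(w)
        err = max(err, 1e-12)       # guard against a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)   # AdaBoost's step size
        ensemble.append((alpha, h))
        # reweighting is exactly the exponential-criterion update
        w = [wi * math.exp(-alpha * yi * h(xi))
             for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

model = adaboost(5)
errors = sum(predict(model, xi) != yi for xi, yi in zip(X, y))
```

On this toy sample the ensemble fits the labels after a few rounds; the paper's point is that, under normality, these very step sizes are not merely effective but yield convergence in a single step.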
Citation
C. Andy Tsao and W. Drago Chen, "Consistency of Boosting under Normality," Taiwanese J. Math. 14 (6), 2125-2136, 2010. https://doi.org/10.11650/twjm/1500406066