The Annals of Statistics

Analyzing bagging

Peter Bühlmann and Bin Yu


Abstract

Bagging is one of the most effective computationally intensive procedures for improving on unstable estimators or classifiers, and it is especially useful for high-dimensional data problems. Here we formalize the notion of instability and derive theoretical results to analyze the variance-reduction effect of bagging (or variants thereof), mainly in hard decision problems, which include estimation after testing in regression and decision trees for regression functions and classifiers. Hard decisions create instability, and bagging is shown to smooth such hard decisions, yielding smaller variance and mean squared error. With these theoretical explanations, we motivate subagging, an alternative aggregation scheme based on subsampling. It is computationally cheaper but still approximately as accurate as bagging. Moreover, our theory reveals first-order improvements, in line with simulation studies.
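
As an informal illustration of the two aggregation schemes, here is a minimal sketch (ours, not code from the paper): it averages regression-tree predictions over bootstrap samples (bagging) or over half-size subsamples drawn without replacement (subagging). The estimator (scikit-learn's DecisionTreeRegressor), the toy step-function data, and all tuning values (B = 50, max_depth = 3, m = n/2) are illustrative assumptions.

    # Sketch: bagging vs. subagging a regression tree (illustrative only).
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)

    def aggregate_predict(X, y, X_test, B=50, subsample=None):
        """Average tree predictions over B resamples.

        subsample=None   -> bagging: n draws with replacement (bootstrap)
        subsample=m < n  -> subagging: m draws without replacement
        """
        n = len(y)
        preds = np.zeros((B, len(X_test)))
        for b in range(B):
            if subsample is None:
                idx = rng.integers(0, n, size=n)                    # bootstrap sample
            else:
                idx = rng.choice(n, size=subsample, replace=False)  # subsample
            tree = DecisionTreeRegressor(max_depth=3).fit(X[idx], y[idx])
            preds[b] = tree.predict(X_test)
        return preds.mean(axis=0)

    # Toy data: a step function plus noise; the tree's split point is the
    # hard decision that aggregation smooths.
    n = 200
    X = rng.uniform(-1, 1, size=(n, 1))
    y = (X[:, 0] > 0).astype(float) + 0.3 * rng.standard_normal(n)
    X_test = np.linspace(-1, 1, 101).reshape(-1, 1)

    yhat_bag = aggregate_predict(X, y, X_test)                    # bagging
    yhat_sub = aggregate_predict(X, y, X_test, subsample=n // 2)  # subagging, m = n/2

Averaging replaces a single tree's sharp jump at its estimated split by a blend of jumps at resampled split points, which is the smoothing-of-hard-decisions effect; with m = n/2, each refit uses half the data, so subagging is cheaper per replicate.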

In particular, we obtain an asymptotic limiting distribution at the cube-root rate for the split point when fitting piecewise constant functions. Denoting the sample size by $n$, it follows that in a cylindric neighborhood of diameter $n^{-1/3}$ around the theoretically optimal split point, the variance and mean squared error reduction achieved by subagging can be characterized analytically. Because of this slow rate, our reasoning also provides an explanation on the global scale, that is, for the whole covariate space, in a decision tree with finitely many splits.
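
To fix ideas, the cube-root statement can be written schematically, in the style of the cited cube-root asymptotics of Groeneboom (1989) and Kim and Pollard (1990); the constants below are generic placeholders, not values from the paper. For the estimated split point $\hat{d}_n$ of a fitted stump and the theoretically optimal split $d^0$,

$$n^{1/3}\,(\hat{d}_n - d^0) \;\xrightarrow{d}\; \operatorname*{arg\,max}_{v \in \mathbb{R}} \bigl(a\,W(v) - b\,v^2\bigr), \qquad a, b > 0,$$

where $W$ is a two-sided standard Brownian motion, so the limit is a Chernoff-type distribution. The split point thus fluctuates on the $n^{-1/3}$ scale, which is precisely the neighborhood in which the variance and mean squared error reduction of subagging is characterized.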

Article information

Source
Ann. Statist. Volume 30, Number 4 (2002), 927-961.

Dates
First available in Project Euclid: 10 September 2002

Permanent link to this document
https://projecteuclid.org/euclid.aos/1031689014

Digital Object Identifier
doi:10.1214/aos/1031689014

Mathematical Reviews number (MathSciNet)
MR1926165

Zentralblatt MATH identifier
1029.62037

Subjects
Primary: 62G08: Nonparametric regression
Secondary:
62G09: Resampling methods
62H30: Classification and discrimination; cluster analysis [See also 68T10, 91C20]
68T10: Pattern recognition, speech recognition {For cluster analysis, see 62H30}

Keywords
Bootstrap; classification; decision tree; MARS; model selection; multiple predictions; nonparametric regression

Citation

Bühlmann, Peter; Yu, Bin. Analyzing bagging. Ann. Statist. 30 (2002), no. 4, 927--961. doi:10.1214/aos/1031689014. https://projecteuclid.org/euclid.aos/1031689014.



References

  • AMIT, Y. and GEMAN, D. (1997). Shape quantization and recognition with randomized trees. Neural Computation 9 1545-1588.
  • BAUER, E. and KOHAVI, R. (1999). An empirical comparison of voting classification algorithms: bagging, boosting, and variants. Machine Learning 36 105-139.
  • BICKEL, P. J., GÖTZE, F. and VAN ZWET, W. R. (1997). Resampling fewer than n observations: gains, losses, and remedies for losses. Statist. Sinica 7 1-32.
  • BREIMAN, L. (1996a). Bagging predictors. Machine Learning 24 123-140.
  • BREIMAN, L. (1996b). Heuristics of instability and stabilization in model selection. Ann. Statist. 24 2350-2383.
  • BREIMAN, L., FRIEDMAN, J. H., OLSHEN, R. A. and STONE, C. J. (1984). Classification and Regression Trees. Wadsworth, Belmont, CA.
  • BÜHLMANN, P. and YU, B. (2000). Discussion of "Additive logistic regression: a statistical view of boosting," by J. Friedman, T. Hastie and R. Tibshirani. Ann. Statist. 28 377-386.
  • BUJA, A. and STUETZLE, W. (2000a). The effect of bagging on variance, bias, and mean squared error. Preprint, AT&T Labs-Research.
  • BUJA, A. and STUETZLE, W. (2000b). Smoothing effects of bagging. Preprint, AT&T Labs-Research.
  • CHAN, K. S. and TSAY, R. S. (1998). Limiting properties of the least squares estimator of a continuous threshold autoregressive model. Biometrika 85 413-426.
  • DIETTERICH, T. G. (1996). Editorial. Machine Learning 24 91-93.
  • FREEDMAN, D. A. (1981). Bootstrapping regression models. Ann. Statist. 9 1218-1228.
  • FREUND, Y. and SCHAPIRE, R. E. (1998). Discussion of "Arcing classifiers," by L. Breiman. Ann. Statist. 26 824-832.
  • FRIEDMAN, J. H. (1991). Multivariate adaptive regression splines (with discussion). Ann. Statist. 19 1-141.
  • FRIEDMAN, J. H. and HALL, P. (2000). On bagging and nonlinear estimation. Preprint.
  • GINÉ, E. and ZINN, J. (1990). Bootstrapping general empirical measures. Ann. Probab. 18 851-869.
  • GROENEBOOM, P. (1989). Brownian motion with a parabolic drift and Airy functions. Probab. Theory Related Fields 81 79-109.
  • HASTIE, T., TIBSHIRANI, R. and BUJA, A. (1994). Flexible discriminant analysis by optimal scoring. J. Amer. Statist. Assoc. 89 1255-1270.
  • KIM, J. and POLLARD, D. (1990). Cube root asymptotics. Ann. Statist. 18 191-219.
  • LOH, W.-Y. and SHIH, Y.-S. (1997). Split selection methods for classification trees. Statist. Sinica 7 815-840.
  • POLLARD, D. (1990). Empirical Processes: Theory and Applications. IMS, Hayward, CA.
  • SERFLING, R. J. (1980). Approximation Theorems of Mathematical Statistics. Wiley, New York.