Open Access
December 1996
Heuristics of instability and stabilization in model selection
Leo Breiman
Ann. Statist. 24(6): 2350-2383 (December 1996). DOI: 10.1214/aos/1032181158

Abstract

In model selection, a "best" predictor is usually chosen from a collection $\{\hat{\mu}(\cdot, s)\}$ of predictors, where $\hat{\mu}(\cdot, s)$ is the minimum least-squares predictor in a collection $\mathsf{U}_s$ of predictors. Here $s$ is a complexity parameter: the smaller $s$ is, the lower dimensional or smoother the models in $\mathsf{U}_s$.
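As a hedged illustration of this setup (not code from the paper), one can take $\mathsf{U}_s$ to be the span of the first $s$ predictor variables, so that the family is nested and $\{\hat{\mu}(\cdot, s)\}$ is obtained by least squares at each $s$; the function name and data layout below are assumptions made for the sketch.

```python
import numpy as np

def predictor_sequence(X, y):
    """Fit the minimum least-squares predictor mu_hat(., s) for each
    complexity s = 1, ..., p, where U_s is taken to be the span of
    the first s columns of X (a nested, increasingly complex family)."""
    n, p = X.shape
    coefs = []
    for s in range(1, p + 1):
        # Least-squares fit restricted to the first s predictors.
        beta_s, *_ = np.linalg.lstsq(X[:, :s], y, rcond=None)
        coefs.append(beta_s)
    return coefs  # coefs[s - 1] parameterizes mu_hat(., s)
```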

If $\mathsf{L}$ is the data used to derive the sequence $\{\hat{\mu}(\cdot, s)\}$, the procedure is called unstable if a small change in $\mathsf{L}$ can cause large changes in $\{\hat{\mu}(\cdot, s)\}$. With a crystal ball, one could pick the predictor in $\{\hat{\mu}(\cdot, s)\}$ having minimum prediction error; without prescience, one uses test sets, cross-validation and so forth. We call the difference in prediction error between the crystal-ball selection and the statistician's choice the predictive loss. For an unstable procedure the predictive loss is large. This is shown analytically in a simple case and by simulation in a more complex comparison of four different linear regression methods. Unstable procedures can be stabilized by perturbing the data, getting a new predictor sequence $\{\hat{\mu}'(\cdot, s)\}$, and then averaging over many such predictor sequences.
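A minimal sketch of the stabilization idea, assuming the nested least-squares setup above and using bootstrap resampling of the cases as the data perturbation (the paper treats perturbation more generally); `stabilized_predictor` and its parameters are illustrative names, not the paper's notation.

```python
import numpy as np

def stabilized_predictor(X, y, s, n_perturb=25, seed=0):
    """Average the complexity-s least-squares predictors fit to many
    perturbed copies of the data L = (X, y); the perturbation used
    here is bootstrap resampling of the cases."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    betas = []
    for _ in range(n_perturb):
        idx = rng.integers(0, n, size=n)  # one perturbed data set L'
        beta, *_ = np.linalg.lstsq(X[idx, :s], y[idx], rcond=None)
        betas.append(beta)                # defines mu_hat'(., s)
    beta_bar = np.mean(betas, axis=0)     # average over the sequences
    return lambda X_new: X_new[:, :s] @ beta_bar
```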

Citation

Leo Breiman. "Heuristics of instability and stabilization in model selection." Ann. Statist. 24(6): 2350-2383, December 1996. https://doi.org/10.1214/aos/1032181158

Information

Published: December 1996
First available in Project Euclid: 16 September 2002

zbMATH: 0867.62055
MathSciNet: MR1425957
Digital Object Identifier: 10.1214/aos/1032181158

Subjects:
Primary: 62H99

Keywords: cross-validation, prediction error, predictive loss, regression, subset selection

Rights: Copyright © 1996 Institute of Mathematical Statistics
