## The Annals of Statistics

### Heuristics of instability and stabilization in model selection

Leo Breiman

#### Abstract

In model selection, usually a "best" predictor is chosen from a collection $\{\hat{\mu}(\cdot, s)\}$ of predictors, where $\hat{\mu}(\cdot, s)$ is the minimum least-squares predictor in a collection $\mathsf{U}_s$ of predictors. Here $s$ is a complexity parameter; that is, the smaller $s$, the lower dimensional/smoother the models in $\mathsf{U}_s$.

If $\mathsf{L}$ is the data used to derive the sequence $\{\hat{\mu}(\cdot, s)\}$, the procedure is called unstable if a small change in $\mathsf{L}$ can cause large changes in $\{\hat{\mu}(\cdot, s)\}$. With a crystal ball, one could pick the predictor in $\{\hat{\mu}(\cdot, s)\}$ having minimum prediction error. Without prescience, one uses test sets, cross-validation and so forth. We call the difference in prediction error between the crystal-ball selection and the statistician's actual choice the predictive loss. For an unstable procedure the predictive loss is large. This is shown by some analytics in a simple case and by simulation results in a more complex comparison of four different linear regression methods. Unstable procedures can be stabilized by perturbing the data, getting a new predictor sequence $\{\hat{\mu}'(\cdot, s)\}$ and then averaging over many such predictor sequences.
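The perturb-and-average idea in the last sentence can be illustrated with a minimal sketch (not the paper's own code): the data are perturbed by bootstrap resampling, an unstable least-squares fit is recomputed on each perturbed set, and the resulting predictors are averaged. The toy data, the polynomial degree playing the role of the complexity parameter $s$, and the number of resamples are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = x + noise. A high-degree polynomial least-squares fit
# stands in for an unstable prediction procedure.
n = 60
x = rng.uniform(-1, 1, n)
y = x + 0.3 * rng.normal(size=n)


def fit_predict(x_tr, y_tr, x_te, degree):
    """Minimum least-squares polynomial predictor of the given degree."""
    coefs = np.polyfit(x_tr, y_tr, degree)
    return np.polyval(coefs, x_te)


def stabilized_predict(x_tr, y_tr, x_te, degree, n_boot=25):
    """Perturb the data by bootstrap resampling, refit, and average
    the resulting predictors (the stabilization heuristic)."""
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x_tr), len(x_tr))
        preds.append(fit_predict(x_tr[idx], y_tr[idx], x_te, degree))
    return np.mean(preds, axis=0)


x_test = np.linspace(-1, 1, 200)
single = fit_predict(x, y, x_test, degree=6)
averaged = stabilized_predict(x, y, x_test, degree=6)

# Compare squared error against the true regression function (here, x).
err_single = np.mean((single - x_test) ** 2)
err_averaged = np.mean((averaged - x_test) ** 2)
```

For unstable fits, the averaged predictor typically varies less from sample to sample than any single fit, which is the mechanism behind the reduction in predictive loss discussed in the paper.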

#### Article information

Source
Ann. Statist. Volume 24, Number 6 (1996), 2350-2383.

Dates
First available in Project Euclid: 16 September 2002

http://projecteuclid.org/euclid.aos/1032181158

Digital Object Identifier
doi:10.1214/aos/1032181158

Mathematical Reviews number (MathSciNet)
MR1425957

Zentralblatt MATH identifier
0867.62055

Subjects
Primary: 62H99: None of the above, but in this section

#### Citation

Breiman, Leo. Heuristics of instability and stabilization in model selection. Ann. Statist. 24 (1996), no. 6, 2350--2383. doi:10.1214/aos/1032181158. http://projecteuclid.org/euclid.aos/1032181158.
