Institute of Mathematical Statistics Collections

Dilution priors: Compensating for model space redundancy

Edward I. George


Abstract

Within the general Bayesian model uncertainty framework, the focus of this paper is on the development of model space priors that compensate for redundancy between model classes, the so-called dilution priors proposed in George (1999). Several distinct approaches to dilution prior construction are suggested: one based on tessellation-determined neighborhoods, another on collinearity adjustments, and a third on pairwise distances between models.
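
To make the collinearity-adjustment idea concrete, here is a minimal sketch (an illustration, not the paper's exact construction) in which each subset of candidate predictors receives prior weight proportional to the determinant of the correlation matrix of its included predictors, applied to a uniform 2^{-p} base prior; near-duplicate predictors then share, rather than multiply, prior mass. The function name dilution_prior_weights and the choice of base prior are illustrative assumptions.

    # Sketch of a collinearity-adjusted dilution prior (illustrative, not the
    # paper's exact construction): weight each model gamma by |R_gamma|, the
    # determinant of the correlation matrix of its included predictors.
    import itertools
    import numpy as np

    def dilution_prior_weights(X):
        """Return a dict mapping each predictor subset to its prior probability."""
        n, p = X.shape
        R = np.corrcoef(X, rowvar=False)           # p x p correlation matrix
        weights = {}
        for k in range(p + 1):
            for subset in itertools.combinations(range(p), k):
                if subset:
                    det = np.linalg.det(R[np.ix_(subset, subset)])  # dilution factor
                else:
                    det = 1.0                       # empty model: no redundancy
                weights[subset] = det * 0.5 ** p    # uniform base prior, diluted
        total = sum(weights.values())
        return {s: w / total for s, w in weights.items()}

    # Example: two nearly collinear predictors and one independent predictor.
    rng = np.random.default_rng(0)
    x1 = rng.normal(size=100)
    X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100),
                         rng.normal(size=100)])
    w = dilution_prior_weights(X)
    print(w[(0, 1)], w[(0, 2)])  # the near-duplicate pair (columns 0, 1) gets far less mass

The weight of the near-duplicate pair is driven toward zero because |R_gamma| is close to zero for nearly collinear columns, which is the dilution behavior the abstract describes.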

Chapter information

Source
James O. Berger, T. Tony Cai and Iain M. Johnstone, eds., Borrowing Strength: Theory Powering Applications – A Festschrift for Lawrence D. Brown (Beachwood, Ohio, USA: Institute of Mathematical Statistics, 2010), 158–165

Dates
First available in Project Euclid: 26 October 2010

Permanent link to this document
https://projecteuclid.org/euclid.imsc/1288099018

Digital Object Identifier
doi:10.1214/10-IMSCOLL611

Subjects
Primary: 62F15: Bayesian inference; 62J05: Linear regression

Keywords
model averaging; model selection; objective Bayes; prior distribution; variable selection

Rights
Copyright © 2010, Institute of Mathematical Statistics

Citation

George, Edward I. Dilution priors: Compensating for model space redundancy. Borrowing Strength: Theory Powering Applications – A Festschrift for Lawrence D. Brown, 158–165, Institute of Mathematical Statistics, Beachwood, Ohio, USA, 2010. doi:10.1214/10-IMSCOLL611. https://projecteuclid.org/euclid.imsc/1288099018.



References

  • [1] Barbieri, M. and Berger, J. (2004). Optimal predictive model selection. Ann. Statist. 32 870–897.
  • [2] Chipman, H., George, E. I. and McCulloch, R. E. (1998). Bayesian CART Model Search (with discussion). J. Amer. Statist. Assoc. 93 935–960.
  • [3] Chipman, H., George, E. I. and McCulloch, R. E. (2001). The practical implementation of Bayesian model selection. In Model Selection (with discussion). IMS Lecture Notes – Monograph Series (P. Lahiri, ed.) 38 65–134. IMS.
  • [4] Garthwaite, P. H. and Mubwandarikwa, E. (2010). Selection of prior weights for weighted model averaging. Aust. N. Z. J. Stat. To appear.
  • [5] George, E. I. and McCulloch, R. E. (1993). Variable selection via Gibbs sampling. J. Amer. Statist. Assoc. 88 881–889.
  • [6] George, E. I. and McCulloch, R. E. (1997). Approaches for Bayesian variable selection. Statist. Sinica 7 339–373.
  • [7] George, E. I. (1999). Sampling considerations for model averaging and model search. Invited discussion of “Model Averaging and Model Search, by M. Clyde.” In Bayesian Statistics 6 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.) 175–177. Oxford Univ. Press.
  • [8] Hoeting, J. A., Madigan, D., Raftery, A. E. and Volinsky, C. T. (1999). Bayesian model averaging: A tutorial. Statist. Sci. 14 382–417.
  • [9] Raftery, A. E., Madigan, D. and Hoeting, J. A. (1997). Bayesian model averaging for linear regression models. J. Amer. Statist. Assoc. 92 179–191.
  • [10] Smith, M. and Kohn, R. (1996). Nonparametric regression using Bayesian variable selection. J. Econom. 75 317–344.