The Annals of Applied Statistics

Bayesian Synthesis: Combining subjective analyses, with an application to ozone data

Qingzhao Yu, Steven N. MacEachern, and Mario Peruggia

Abstract

Bayesian model averaging enables one to combine the disparate predictions of a number of models in a coherent fashion, leading to superior predictive performance. The improvement in performance arises from averaging models that make different predictions. In this work, we tap into perhaps the biggest driver of different predictions—different analysts—in order to gain the full benefits of model averaging. In a standard implementation of our method, several data analysts work independently on portions of a data set, eliciting separate models that are eventually updated and combined through a specific weighting method. We call this modeling procedure Bayesian Synthesis. The methodology helps to alleviate concerns about the sizable gap between the foundational underpinnings of the Bayesian paradigm and the practice of Bayesian statistics. In experimental work we show that human modeling has predictive performance superior to that of many automatic modeling techniques, including AIC, BIC, Smoothing Splines, CART, Bagged CART, Bayes CART, BMA and LARS, and only slightly inferior to that of BART. We also show that Bayesian Synthesis further improves predictive performance. Additionally, we examine the predictive performance of a simple average across analysts, which we dub Convex Synthesis, and find that it also produces an improvement. Compared to competing modeling methods (including single human analysis), the data-splitting approach has these additional benefits: (1) it exhibits superior predictive performance for real data sets; (2) it makes more efficient use of human knowledge; (3) it avoids multiple uses of the data in the Bayesian framework; and (4) it provides better-calibrated assessments of predictive accuracy.
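
To make the combination step concrete, here is a minimal Python sketch of the two combination rules named in the abstract: a weighted combination of analysts' predictions (Bayesian Synthesis) and a simple average (Convex Synthesis). The weighting scheme shown, with weights proportional to each analyst's exponentiated predictive score on held-out data, is a hypothetical stand-in rather than the paper's specific weighting method, and all function and variable names are illustrative.

    import numpy as np

    def synthesis_weights(log_scores):
        # Hypothetical weights: proportional to exp(log predictive score)
        # of each analyst on held-out data (shifted to avoid overflow).
        w = np.exp(log_scores - np.max(log_scores))
        return w / w.sum()

    def bayesian_synthesis(preds, weights):
        # Weighted combination of analyst predictions;
        # preds has shape (n_analysts, n_obs).
        return weights @ preds

    def convex_synthesis(preds):
        # Simple average across analysts (Convex Synthesis).
        return preds.mean(axis=0)

    # Example: three analysts' predictions at five test points.
    preds = np.array([[3.1, 2.8, 4.0, 3.5, 2.9],
                      [3.3, 2.6, 4.2, 3.4, 3.0],
                      [2.9, 2.9, 3.8, 3.6, 2.8]])
    log_scores = np.array([-10.2, -9.8, -11.0])  # hypothetical held-out scores
    w = synthesis_weights(log_scores)
    print(bayesian_synthesis(preds, w))  # synthesis prediction per test point
    print(convex_synthesis(preds))       # simple-average prediction

Because the weights sum to one, both rules produce convex combinations, so the combined prediction at each point lies within the range of the individual analysts' predictions.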

Article information

Source
Ann. Appl. Stat., Volume 5, Number 2B (2011), 1678–1698.

Dates
First available in Project Euclid: 13 July 2011

Permanent link to this document
https://projecteuclid.org/euclid.aoas/1310562738

Digital Object Identifier
doi:10.1214/10-AOAS444

Mathematical Reviews number (MathSciNet)
MR2849791

Zentralblatt MATH identifier
1223.62016

Keywords
Automatic modeling; data-splitting; human intervention; model averaging

Citation

Yu, Qingzhao; MacEachern, Steven N.; Peruggia, Mario. Bayesian Synthesis: Combining subjective analyses, with an application to ozone data. Ann. Appl. Stat. 5 (2011), no. 2B, 1678–1698. doi:10.1214/10-AOAS444. https://projecteuclid.org/euclid.aoas/1310562738


References

  • Akaike, H. (1974). A new look at the statistical model identification. IEEE Trans. Automatic Control AC-19 716–723.
  • Breiman, L. (1996). Bagging predictors. Mach. Learn. 24 123–140.
  • Breiman, L. (2001). Statistical modeling: The two cultures. Statist. Sci. 16 199–231.
  • Breiman, L., Friedman, J. H., Olshen, R. A. and Stone, C. J. (1984). Classification and Regression Trees. Wadsworth Advanced Books and Software, Belmont, CA.
  • Chen, M.-H., Shao, Q.-M. and Ibrahim, J. G. (2000). Monte Carlo Methods in Bayesian Computation. Springer, New York.
  • Chipman, H. A., George, E. I. and McCulloch, R. E. (1998). Bayesian CART model search. J. Amer. Statist. Assoc. 93 935–960.
  • Chipman, H. A., George, E. I. and McCulloch, R. E. (2010). BART: Bayesian additive regression trees. Ann. Appl. Stat. 4 266–298.
  • Craven, P. and Wahba, G. (1979). Smoothing noisy data with spline functions. Estimating the correct degree of smoothing by the method of generalized cross-validation. Numer. Math. 31 377–403.
  • Dawid, A. P. and Vovk, V. G. (1999). Prequential probability: Principles and properties. Bernoulli 5 125–162.
  • Draper, D. (1995). Assessment and propagation of model uncertainty. J. Roy. Statist. Soc. Ser. B 57 45–97.
  • Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004). Least angle regression. Ann. Statist. 32 407–499.
  • George, E. I. and McCulloch, R. E. (1993). Variable selection via Gibbs sampling. J. Amer. Statist. Assoc. 88 881–889.
  • Gu, C. (2002). Smoothing Spline ANOVA Models. Springer, New York.
  • Hand, D. J. (2006). Classifier technology and the illusion of progress. Statist. Sci. 21 1–34.
  • Hastie, T., Tibshirani, R. and Friedman, J. (2001). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, New York.
  • Kim, Y. and Kim, J. (2004). Convex hull ensemble machine for regression and classification. Knowledge and Information Systems 6 645–663.
  • Laud, P. W. and Ibrahim, J. G. (1995). Predictive model selection. J. Roy. Statist. Soc. Ser. B 57 247–262.
  • Raftery, A. E., Madigan, D. and Hoeting, J. A. (1997). Bayesian model averaging for linear regression models. J. Amer. Statist. Assoc. 92 179–191.
  • Schwarz, G. (1978). Estimating the dimension of a model. Ann. Statist. 6 461–464.
  • Thomas, A., Best, N., Lunn, D., Arnold, R. and Spiegelhalter, D. (2004). GeoBUGS user manual.
  • Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58 267–288.
  • Weisberg, S. (1985). Applied Linear Regression. Wiley, New York.
  • Yu, Q. (2006). Bayesian Synthesis. Ph.D. thesis, Ohio State Univ.