Bayesian model averaging enables one to combine the disparate predictions of a number of models in a coherent fashion, leading to superior predictive performance. The improvement in performance arises from averaging models that make different predictions. In this work, we tap into perhaps the biggest driver of different predictions—different analysts—in order to gain the full benefits of model averaging. In a standard implementation of our method, several data analysts work independently on portions of a data set, eliciting separate models which are eventually updated and combined through a specific weighting method. We call this modeling procedure Bayesian Synthesis. The methodology helps to alleviate concerns about the sizable gap between the foundational underpinnings of the Bayesian paradigm and the practice of Bayesian statistics. In experimental work we show that human modeling has predictive performance superior to that of many automatic modeling techniques, including AIC, BIC, Smoothing Splines, CART, Bagged CART, Bayes CART, BMA, and LARS, and only slightly inferior to that of BART. We also show that Bayesian Synthesis further improves predictive performance. Additionally, we examine the predictive performance of a simple average across analysts, which we dub Convex Synthesis, and find that it also produces an improvement. Compared to competing modeling methods (including single human analysis), the data-splitting approach has these additional benefits: (1) it exhibits superior predictive performance for real data sets; (2) it makes more efficient use of human knowledge; (3) it avoids multiple uses of the data in the Bayesian framework; and (4) it provides better calibrated assessment of predictive accuracy.
"Bayesian Synthesis: Combining subjective analyses, with an application to ozone data." Ann. Appl. Stat. 5 (2B) 1678 - 1698, June 2011. https://doi.org/10.1214/10-AOAS444
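As a toy illustration (not taken from the paper), the simple average the abstract dubs Convex Synthesis can be sketched as a convex combination of per-analyst predictions. The function name, the equal-weight default, and the array layout are assumptions for this sketch; the paper's Bayesian Synthesis uses a specific weighting method not detailed in the abstract.

```python
import numpy as np


def convex_synthesis(predictions, weights=None):
    """Combine per-analyst predictions by a convex combination.

    predictions: array of shape (n_analysts, n_points), one row of
        predicted values per analyst.
    weights: nonnegative weights summing to 1, one per analyst.
        Defaults to equal weights, i.e. the simple average across
        analysts that the abstract calls Convex Synthesis.
    """
    predictions = np.asarray(predictions, dtype=float)
    n_analysts = predictions.shape[0]
    if weights is None:
        # Equal weights give the plain across-analyst average.
        weights = np.full(n_analysts, 1.0 / n_analysts)
    weights = np.asarray(weights, dtype=float)
    if np.any(weights < 0) or not np.isclose(weights.sum(), 1.0):
        raise ValueError("weights must be nonnegative and sum to 1")
    # Weighted sum over analysts for each prediction point.
    return weights @ predictions


# Example: three analysts each predict two held-out points.
combined = convex_synthesis([[1.0, 2.0],
                             [3.0, 4.0],
                             [5.0, 6.0]])
# combined is the pointwise average of the three rows: [3.0, 4.0]
```

A non-uniform `weights` argument would instead mimic a generic weighted synthesis, with the weights supplied by whatever elicitation or updating scheme one adopts.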