Bayesian Analysis

Modularization in Bayesian analysis, with emphasis on analysis of computer models

M. J. Bayarri, J. O. Berger, and F. Liu

Full-text: Open access


Bayesian analysis incorporates different sources of information into a single analysis through Bayes' theorem. When one or more of the sources of information are suspect (e.g., if the model assumed for the information may be significantly flawed), there can be a concern that Bayes' theorem allows this suspect information to overly influence the other sources of information. We consider a variety of situations in which this arises, and give methodological suggestions for dealing with the problem.

After consideration of some pedagogical examples of the phenomenon, we focus on the interface of statistics and the development of complex computer models of processes. Three testbed computer models are considered, in which this type of issue arises.
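The concern described above can be made concrete with a toy example (not from the paper): two data sources inform a common mean, one trusted and one with an unmodelled bias. Under a flat prior, full Bayesian pooling lets the flawed source drag the posterior, whereas a modularized ("cut") analysis restricts the parameter to the trusted module. All names and values here are illustrative assumptions.

```python
# Toy illustration (not from the paper): two data sources inform a mean theta.
# Source A is trusted: y ~ N(theta, 1). Source B's model is suspect: it is
# assumed to be N(theta, 1) but actually has a large unmodelled bias.
import random

random.seed(1)
theta_true = 0.0
bias = 5.0  # unmodelled systematic error in the suspect source B

y_a = [random.gauss(theta_true, 1.0) for _ in range(50)]
y_b = [random.gauss(theta_true + bias, 1.0) for _ in range(50)]

def mean(xs):
    return sum(xs) / len(xs)

# Full Bayes (flat prior, known unit variance): the posterior mean is the
# pooled sample mean, so the flawed module pulls the estimate toward the bias.
full_bayes_mean = mean(y_a + y_b)

# Modularized analysis: theta is estimated from the trusted module alone,
# so the suspect source cannot feed back into it.
modular_mean = mean(y_a)

print(f"full Bayes posterior mean : {full_bayes_mean:+.2f}")
print(f"modularized posterior mean: {modular_mean:+.2f}")
```

In this sketch the full-Bayes estimate lands near the midpoint of the two sources (about +2.5), while the modularized estimate stays near the true value of 0, which is the kind of behavior that motivates modularization.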

Article information

Bayesian Anal. Volume 4, Number 1 (2009), 119-150.

First available in Project Euclid: 22 June 2012


Keywords: Complex computer models; Confounding; Emulators; Identifiability; MCMC mixing; Partial likelihood; Random effects


Liu, F.; Bayarri, M. J.; Berger, J. O. Modularization in Bayesian analysis, with emphasis on analysis of computer models. Bayesian Anal. 4 (2009), no. 1, 119--150. doi:10.1214/09-BA404.



  • Bayarri, M., Berger, J., Garcia-Donato, G., Liu, F., Palomo, J., Paulo, R., Sacks, J., Walsh, D., Cafeo, J., and Parthasarathy, R. (2007a). “Computer Model Validation with Functional Outputs.” Annals of Statistics, 35: 1874–1906.
  • Bayarri, M. J., Berger, J. O., Kennedy, M., Kottas, A., Paulo, R., Sacks, J., Cafeo, J. A., Lin, C. H., and Tu, J. (2005). “Validation of a Computer Model for Vehicle Collision.” Technical report, National Institute of Statistical Sciences, Research Triangle Park, NC, USA.
  • Bayarri, M. J., Berger, J. O., Paulo, R., Sacks, J., Cafeo, J. A., Cavendish, J., Lin, C.-H., and Tu, J. (2007b). “A Framework for Validation of Computer Models.” Technometrics, 49(2): 138–154.
  • Conti, S., Anderson, C., O’Hagan, A., and Kennedy, M. (2005). “Bayesian Analysis of Complex Dynamic Computer Models.” In Hanson, K. and Hemez, F. (eds.), Sensitivity Analysis of Model Output, 147–156. Los Alamos National Laboratory.
  • Cox, D. (1972). “Regression models and life tables (with discussion).” Journal of the Royal Statistical Society B, 34: 187–220.
  • — (1975). “Partial Likelihood.” Biometrika, 62: 269–275.
  • Craig, P. S., Goldstein, M., Rougier, J. C., and Seheult, A. H. (2001). “Bayesian forecasting for complex systems using computer simulators.” Journal of the American Statistical Association, 96(454): 717–729.
  • Currin, C., Mitchell, T., Morris, M., and Ylvisaker, D. (1991). “Bayesian prediction of deterministic functions, with applications to the design and analysis of computer experiments.” Journal of the American Statistical Association, 86: 953–963.
  • Diggle, P. (2006). “Spatio-temporal point processes, partial likelihood, foot and mouth disease.” Statistical Methods in Medical Research, 15: 325–336.
  • Evans, M. and Moshonov, H. (2006). “Checking for Prior-Data Conflict.” Bayesian Analysis, 1(4): 893–914.
  • Gelfand, A. E. and Smith, A. F. M. (1990). “Sampling based approaches to calculating marginal densities.” Journal of the American Statistical Association, 85: 398–409.
  • Gelman, A. and Raghunathan, T. E. (2001). “Conditionally Specified Distributions: An Introduction: Comment.” Statistical Science, 16: 268–269.
  • Gramacy, R. and Lee, H. (2008). “Bayesian Treed Gaussian Process Models with an Application to Computer Modeling.” Journal of the American Statistical Association, 103: 1119–1130.
  • Gustafson, P. (2005). “On model expansion, model contraction, identifiability and prior information: two illustrative scenarios involving mismeasured variables.” Statistical Science, 20: 111–140.
  • Heckerman, D., Chickering, D. M., Meek, C., Rounthwaite, R., and Kadie, C. M. (2000). “Dependency Networks for Inference, Collaborative Filtering, and Data Visualization.” Journal of Machine Learning Research, 1: 49–75.
  • Higdon, D., Gattiker, J., Williams, B., and Rightley, M. (2007). “Computer model validation using high dimensional outputs.” In Bernardo, J., Bayarri, M. J., Dawid, A. P., Berger, J. O., Heckerman, D., Smith, A. F. M., and West, M. (eds.), Bayesian Statistics 8. London: Oxford University Press. (in press).
  • Higdon, D., Kennedy, M. C., Cavendish, J., Cafeo, J., and Ryne, R. D. (2004). “Combining field data and computer simulations for calibration and prediction.” SIAM Journal on Scientific Computing, 26: 448–466.
  • Joseph, V. R. (2006). “Limit Kriging.” Technometrics, 48(4): 458–466.
  • Kennedy, M. C. and O’Hagan, A. (2001). “Bayesian calibration of computer models (with discussion).” Journal of the Royal Statistical Society B, 63: 425–464.
  • Liu, F. (2007). “Bayesian Functional Data Analysis for Computer Model Validation.” Ph.D. thesis, Department of Statistical Science, Duke University, Durham, NC, USA.
  • Liu, F., Bayarri, M. J., Berger, J. O., Paulo, R., and Sacks, J. (2008). “A Bayesian Analysis of the Thermal Challenge Problem.” Computer Methods in Applied Mechanics and Engineering (CMAME), 197: 2457–2466.
  • Møller, J. and Sørensen, M. (1994). “Statistical analysis of a spatial birth and death process model with a view to modelling linear dune fields.” Scandinavian Journal of Statistics, 21: 1–19.
  • Morris, M. D., Mitchell, T. J., and Ylvisaker, D. (1993). “Bayesian design and analysis of computer experiments: Use of derivatives in surface prediction.” Technometrics, 35: 243–255.
  • Newton, M. and Raftery, A. (1994). “Approximate Bayesian inference by the weighted likelihood bootstrap (with Discussion).” Journal of the Royal Statistical Society, series B, 56: 3–48.
  • Qian, P. Z. and Wu, J. C. (2008). “Bayesian Hierarchical Modeling for Integrating Low-Accuracy and High-Accuracy Experiments.” Technometrics, 50: 192–204.
  • Raghunathan, T. E., Lepkowski, J. M., Van Hoewyk, J., and Solenberger, P. (2001). “A Multivariate Technique for Multiply Imputing Missing Values Using a Sequence of Regression Models.” Survey Methodology, 27: 85–95.
  • Reichert, P., White, G., Bayarri, M., Pitman, E., and Santner, T. (2008). “Mechanism-based Emulation of Dynamic Simulators: Concept and Application in Hydrology.” Technical report, Statistical and Applied Mathematical Sciences Institute, Research Triangle Park, NC, USA.
  • Robert, C. P. and Casella, G. (2002). Monte Carlo Statistical Methods. Springer.
  • Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P. (1989). “Design and analysis of computer experiments.” Statistical Science, 4: 409–423.
  • Santner, T., Williams, B., and Notz, W. (2003). The Design and Analysis of Computer Experiments. Springer-Verlag.
  • Spiegelhalter, D., Thomas, A., Best, N., and Lunn, D. (2003). WinBUGS User Manual.
  • Welch, W. J., Buck, R. J., Sacks, J., Wynn, H. P., Mitchell, T. J., and Morris, M. D. (1992). “Screening, predicting, and computer experiments.” Technometrics, 34: 15–25.