Statistics Surveys

A survey of Bayesian predictive methods for model assessment, selection and comparison

Aki Vehtari and Janne Ojanen

Full-text: Open access

Abstract

To date, several methods in the statistical literature present themselves specifically as Bayesian predictive methods for model assessment. The decision-theoretic assumptions on which these methods are based are, however, not always clearly stated in the original articles. The aim of this survey is to provide a unified review of Bayesian predictive model assessment and selection methods, and of methods closely related to them. We review the various assumptions that are made in this context and discuss the connections between different approaches, with an emphasis on how each method approximates the expected utility of using a Bayesian model for the purpose of predicting future data.
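
As a brief orienting sketch (not from the published abstract; the notation here is illustrative), take the logarithmic score as the utility. The expected utility of a model $M$, fitted to data $D$, for predicting a future observation $\tilde{y}$ is
$$\bar{u}(M) = \mathrm{E}_{\tilde{y}}\left[\log p(\tilde{y} \mid D, M)\right],$$
and leave-one-out cross-validation, one of the approximations reviewed in the survey, estimates this quantity as
$$\mathrm{LOO} = \frac{1}{n} \sum_{i=1}^{n} \log p(y_i \mid D_{\setminus i}, M),$$
where $D_{\setminus i}$ denotes the data with the $i$th observation removed.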

Article information

Source
Statist. Surv. Volume 6 (2012), 142–228.

Dates
First available in Project Euclid: 27 December 2012

Permanent link to this document
https://projecteuclid.org/euclid.ssu/1356628931

Digital Object Identifier
doi:10.1214/12-SS102

Mathematical Reviews number (MathSciNet)
MR3011074

Zentralblatt MATH identifier
1302.62011

Subjects
Primary: 62-02: Research exposition (monographs, survey articles)
Secondary: 62C10: Bayesian problems; characterization of Bayes procedures

Keywords
Bayesian; predictive; model assessment; model selection; decision theory; expected utility; cross-validation; information criteria

Citation

Vehtari, Aki; Ojanen, Janne. A survey of Bayesian predictive methods for model assessment, selection and comparison. Statist. Surv. 6 (2012), 142–228. doi:10.1214/12-SS102. https://projecteuclid.org/euclid.ssu/1356628931.



References

  • Aitkin, M. (1991). Posterior Bayes Factors (with discussion). Journal of the Royal Statistical Society. Series B (Methodological) 53 111–142.
  • Akaike, H. (1973). Information Theory and an Extension of the Maximum Likelihood Principle. In Second International Symposium on Information Theory ( B. N. Petrov and F. Csaki, eds.) 267–281. Akademiai Kiado, Budapest. Reprinted in Kotz, S. and Johnson, N. L., editors, (1992). Breakthroughs in Statistics Volume I: Foundations and Basic Theory, pp. 610–624. Springer-Verlag.
  • Akaike, H. (1974). A New Look at the Statistical Model Identification. IEEE Transactions on Automatic Control AC-19 716–723.
  • Akaike, H. (1979). A Bayesian Extension of the Minimum AIC Procedure of Autoregressive Model Fitting. Biometrika 66 237–242.
  • Ando, T. (2007). Bayesian predictive information criterion for the evaluation of hierarchical Bayesian and empirical Bayes models. Biometrika 94 443–458.
  • Ando, T. and Tsay, R. (2010). Predictive likelihood for Bayesian model selection and averaging. International Journal of Forecasting 26 744–763.
  • Arlot, S. and Celisse, A. (2010). A survey of cross-validation procedures for model selection. Statistics Surveys 4 40–79.
  • Barbieri, M. M. and Berger, J. O. (2004). Optimal Predictive Model Selection. The Annals of Statistics 32 870–897.
  • Bayarri, M. J. (1987). Comment on J. O. Berger and M. Delampady. Statistical Science 3 342–344.
  • Bayarri, M. J. (2003). Which ‘base’ distribution for model criticism? In Highly Structured Stochastic Systems ( P. J. Green, N. L. Hjort and S. Richardson, eds.) 445–453. Oxford University Press.
  • Bayarri, M. J. and Berger, J. O. (1999). Quantifying Surprise in the Data and Model Verification. In Bayesian Statistics 6 ( J. M. Bernardo, J. O. Berger and A. P. Dawid, eds.) 53–82. Oxford University Press.
  • Bayarri, M. J. and Berger, J. O. (2000). P Values for Composite Null Models. Journal of the American Statistical Association 95 1127–1142.
  • Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis, 2nd ed. Springer-Verlag.
  • Berger, J. O. and Bernardo, J. M. (1992). On the Development of Reference Priors. In Bayesian Statistics 4 ( J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.) 35–60. Oxford University Press.
  • Berger, J. and Pericchi, L. (1996). The intrinsic Bayes factor for model selection and prediction. Journal of the American Statistical Association 91 109–122.
  • Bernardo, J. M. (1979). Expected Information as Expected Utility. Annals of Statistics 7 686–690.
  • Bernardo, J. M. (1999). Nested Hypothesis Testing: The Bayesian Reference Criterion. In Bayesian Statistics 6 ( J. M. Bernardo, J. O. Berger and A. P. Dawid, eds.) 101–130. Oxford University Press.
  • Bernardo, J. M. (2005a). Reference Analysis. In Handbook of Statistics 25 ( D. Dey and C. R. Rao, eds.) 17–90. Elsevier.
  • Bernardo, J. M. (2005b). Intrinsic credible regions: An objective Bayesian approach to interval estimation. Test 14 317–384.
  • Bernardo, J. M. and Bayarri, M. J. (1985). Bayesian model criticism. In Model choice: proceedings of the 4th Franco-Belgian meeting of statisticians ( J. P. Florens, M. Mouchart, J. P. Raoult and L. Simar, eds.). Facultés universitaires Saint-Louis, Bruxelles.
  • Bernardo, J. M. and Bermúdez, J. D. (1985). The Choice of Variables in Probabilistic Classification. In Bayesian Statistics 2 ( J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.) 67–82. Elsevier Science Publishers.
  • Bernardo, J. M. and Juárez, M. A. (2003). Intrinsic Estimation. In Bayesian Statistics 7 ( J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West, eds.) 456–476. Oxford University Press.
  • Bernardo, J. M. and Rueda, R. (2002). Bayesian hypothesis testing: a reference approach. International Statistical Review 70 351–372.
  • Bernardo, J. M. and Smith, A. F. M. (1994). Bayesian Theory. John Wiley & Sons.
  • Bhattacharya, S. and Haslett, J. (2007). Importance Re-sampling MCMC for Cross-Validation in Inverse Problems. Bayesian Analysis 2 385–408.
  • Birgé, L. and Massart, P. (2007). Minimal Penalties for Gaussian Model Selection. Probability Theory and Related Fields 138 33–73.
  • Bornn, L., Doucet, A. and Gottardo, R. (2010). An efficient computational approach for prior sensitivity analysis and cross-validation. The Canadian Journal of Statistics 38 47–64.
  • Box, G. E. P. (1980). Sampling and Bayes’ Inference in Scientific Modelling and Robustness. Journal of the Royal Statistical Society. Series A (General) 143 383–430.
  • Breiman, L., Friedman, J., Olshen, R. and Stone, C. (1984). Classification and Regression Trees. Chapman and Hall.
  • Brown, P. J., Fearn, T. and Vannucci, M. (1999). The choice of variables in multivariate regression: a non-conjugate Bayesian decision theory approach. Biometrika 86 635–648.
  • Brown, P. J., Vannucci, M. and Fearn, T. (1998). Multivariate Bayesian variable selection and prediction. Journal of the Royal Statistical Society. Series B (Statistical Methodology) 60 627–641.
  • Brown, P. J., Vannucci, M. and Fearn, T. (2002). Bayes model averaging with selection of regressors. Journal of the Royal Statistical Society. Series B (Statistical Methodology) 64 519–536.
  • Burman, P. (1989). A Comparative Study of Ordinary Cross-Validation, $v$-Fold Cross-Validation and the Repeated Learning-Testing Methods. Biometrika 76 503–514.
  • Burman, P., Chow, E. and Nolan, D. (1994). A Cross-Validatory Method for Dependent Data. Biometrika 81 351–358.
  • Burman, P. and Nolan, D. (1992). Data dependent estimation of prediction functions. Journal of Time Series Analysis 13 189–207.
  • Burnham, K. P. and Anderson, D. R. (1998). Model Selection and Inference. Springer.
  • Burnham, K. P. and Anderson, D. R. (2002). Model Selection and Multi-Model Inference: A Practical Information-Theoretic Approach, 2nd ed. Springer.
  • Carlin, B. P. and Louis, T. A. (1996). Bayes and Empirical Bayes Methods for Data Analysis 69. Chapman & Hall.
  • Carlin, B. P. and Spiegelhalter, D. J. (2007). Discussion of ‘Estimating the Integrated Likelihood via Posterior Simulation Using the Harmonic Mean Identity’. In Bayesian Statistics 8 ( J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West, eds.) 33–36. Oxford University Press.
  • Cawley, G. C. and Talbot, N. L. C. (2010). On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation. Journal of Machine Learning Research 11 2079–2107.
  • Celeux, G., Forbes, F., Robert, C. P. and Titterington, D. M. (2006). Deviance Information Criteria for Missing Data Models. Bayesian Analysis 1 651–674.
  • Chakrabarti, A. and Ghosh, J. K. (2007). Some Aspects of Bayesian Model Selection for Prediction. In Bayesian Statistics 8 ( J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West, eds.) 51–90. Oxford University Press.
  • Chen, M.-H., Dey, D. K. and Ibrahim, J. G. (2004). Bayesian criterion based model assessment for categorical data. Biometrika 91 45–63. http://biomet.oxfordjournals.org/content/91/1/45.abstract
  • Chen, M.-H., Shao, Q.-M. and Ibrahim, J. Q. (2000). Monte Carlo Methods in Bayesian Computation. Springer-Verlag.
  • Chow, G. C. (1981). A comparison of the information and posterior probability criteria for model selection. Journal of Econometrics 16 21–33.
  • Corander, J. and Marttinen, P. (2006). Bayesian Model Learning Based on Predictive Entropy. Journal of Logic, Language, and Information 15 5–20. http://www.jstor.org/stable/40180417
  • Dietterich, T. G. (1998). Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms. Neural Computation 10 1895–1924.
  • Draper, D. and Fouskakis, D. (2000). A Case Study of Stochastic Optimization in Health Policy: Problem Formulation and Preliminary Results. Journal of Global Optimization 18 399–416.
  • Dupuis, J. A. and Robert, C. P. (1997). Bayesian Variable Selection in Qualitative Models by Kullback-Leibler Projections. Working paper, Centre de Recherche en Economie et Statistique.
  • Dupuis, J. A. and Robert, C. P. (2003). Variable selection in qualitative models via an entropic explanatory power. Journal of Statistical Planning and Inference 111 77–94.
  • Efron, B. and Tibshirani, R. J. (1993). An Introduction to the Bootstrap 57. Chapman & Hall.
  • Epifani, I., MacEachern, S. N. and Peruggia, M. (2008). Case-deletion importance sampling estimators: Central limit theorems and related results. Electronic Journal of Statistics 2 774–806.
  • Fearn, T., Brown, P. J. and Besbeas, P. (2002). A Bayesian decision theory approach to variable selection for discrimination. Statistics and Computing 12 253–260.
  • Fouskakis, D. and Draper, D. (2008). Comparing stochastic optimization methods for variable selection in binary outcome prediction with application to health policy. Journal of the American Statistical Association 103 1367–1381.
  • Fouskakis, D., Ntzoufras, I. and Draper, D. (2009). Population-based reversible-jump Markov chain Monte Carlo for Bayesian variable selection and evaluation under cost limit restrictions. Journal of the Royal Statistical Society, Series C: Applied Statistics 58 383–403.
  • Friel, N. and Wyse, J. (2012). Estimating the evidence – a review. Statistica Neerlandica. Early view online. DOI: 10.1111/j.1467-9574.2011.00515.x.
  • Fushiki, T. (2011). Estimation of prediction error by using K-fold cross-validation. Statistics and Computing 21 137–146.
  • Geisser, S. (1975). The Predictive Sample Reuse Method with Applications. Journal of the American Statistical Association 70 320–328.
  • Geisser, S. and Eddy, W. F. (1979). A Predictive Approach to Model Selection. Journal of the American Statistical Association 74 153–160.
  • Gelfand, A. E. (1996). Model determination using sampling-based methods. In Markov Chain Monte Carlo in Practice ( W. R. Gilks, S. Richardson and D. J. Spiegelhalter, eds.) 145–162. Chapman & Hall.
  • Gelfand, A. E. (2003). Some comments on model criticism. In Highly Structured Stochastic Systems ( P. J. Green, N. L. Hjort and S. Richardson, eds.) 449–453. Oxford University Press.
  • Gelfand, A. E., Dey, D. K. and Chang, H. (1992). Model Determination using Predictive Distributions with Implementation via Sampling-Based Methods (with discussion). In Bayesian Statistics 4 ( J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.) 147–167. Oxford University Press.
  • Gelfand, A. E. and Dey, D. K. (1994). Bayesian Model Choice: Asymptotics and Exact Calculations. Journal of the Royal Statistical Society. Series B (Methodological) 56 501–514.
  • Gelfand, A. E. and Ghosh, S. K. (1998). Model Choice: A Minimum Posterior Predictive Loss Approach. Biometrika 85 1–11.
  • Gelman, A., Meng, X.-L. and Stern, H. (1996). Posterior Predictive Assessment of Model Fitness via Realized Discrepancies (with discussion). Statistica Sinica 6 733–807.
  • Gelman, A., Carlin, J. B., Stern, H. S. and Rubin, D. B. (1995). Bayesian Data Analysis. Chapman & Hall.
  • Gelman, A., Carlin, J. B., Stern, H. S. and Rubin, D. B. (2003). Bayesian Data Analysis, 2nd ed. Chapman & Hall.
  • George, E. I. and McCulloch, R. E. (1993). Variable Selection Via Gibbs Sampling. Journal of the American Statistical Association 88 881–889.
  • Geweke, J. (1989). Bayesian Inference in Econometric Models Using Monte Carlo Integration. Econometrica 57 1317–1339.
  • Gneiting, T. (2011). Making and Evaluating Point Forecasts. Journal of the American Statistical Association 106 746–762.
  • Gneiting, T., Balabdaoui, F. and Raftery, A. E. (2007). Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society. Series B (Statistical Methodology) 69 243–268.
  • Gneiting, T. and Raftery, A. E. (2007). Strictly Proper Scoring Rules, Prediction, and Estimation. Journal of the American Statistical Association 102 359–378.
  • Good, I. J. (1952). Rational Decisions. Journal of the Royal Statistical Society. Series B (Methodological) 14 107–114.
  • Goutis, C. and Robert, C. P. (1998). Model choice in generalised linear models: A Bayesian approach via Kullback-Leibler projections. Biometrika 85 29–37.
  • Grünwald, P. D. and Dawid, A. P. (2004). Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory. Annals of Statistics 32 1367–1433.
  • Gutiérrez-Peña, E. (1992). Expected logarithmic divergence for exponential families. In Bayesian Statistics 4 ( J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.) 669–674. Oxford University Press.
  • Gutiérrez-Peña, E. (1997). A Bayesian Predictive Semiparametric Approach to Variable Selection and Model Comparison in Regression. In Bulletin of the International Statistical Institute, Tome LVII. (Proceedings of the 51st Session of the ISI, Invited Papers, Book 1.) 17–29.
  • Gutiérrez-Peña, E. and Walker, S. G. (2001). A Bayesian predictive approach to model selection. Journal of Statistical Planning and Inference 93 259–276.
  • Gutiérrez-Peña, E. and Walker, S. G. (2005). Statistical decision problems and Bayesian nonparametric methods. International Statistical Review 73 309–330.
  • Guttman, I. (1967). The Use of the Concept of a Future Observation in Goodness-of-Fit Problems. Journal of the Royal Statistical Society. Series B (Methodological) 29 83–100.
  • Han, C. and Carlin, B. P. (2000). MCMC methods for computing Bayes factors: A comparative review. Research Report No. 2000-001, Division of Biostatistics, University of Minnesota.
  • Held, L., Schrödle, B. and Rue, H. (2010). Posterior and Cross-validatory Predictive Checks: A Comparison of MCMC and INLA. In Statistical Modelling and Regression Structures ( T. Kneib and G. Tutz, eds.) 91–110. Springer.
  • Hoeting, J., Madigan, D., Raftery, A. and Volinsky, C. (1999). Bayesian Model Averaging: A Tutorial. Statistical Science 14 382–401.
  • Hurvich, C. M. and Tsai, C.-L. (1989). Regression and Time Series Model Selection in Small Samples. Biometrika 76 297–307.
  • Hurvich, C. M. and Tsai, C.-L. (1991). Bias of the Corrected AIC Criterion for Underfitted Regression and Time Series Models. Biometrika 78 499–509.
  • Ibrahim, J. G. and Chen, M.-H. (1997). Predictive Variable Selection for the Multivariate Linear Model. Biometrics 53 465–478. http://www.jstor.org/stable/2533950
  • Ibrahim, J. G., Chen, M.-H. and Sinha, D. (2001). Criterion-based methods for Bayesian model assessment. Statistica Sinica 11 419–443.
  • Ibrahim, J. G. and Laud, P. W. (1994). A Predictive Approach to the Analysis of Designed Experiments. Journal of the American Statistical Association 89 309–319.
  • Jaakkola, T. S. (2001). Tutorial on variational approximation methods. In Advanced Mean Field Methods ( M. Opper and D. Saad, eds.) 129–160. The MIT Press.
  • Jeffreys, H. (1961). Theory of Probability, 3rd ed. Oxford University Press (1st edition 1939).
  • Jonathan, P., Krzanowski, W. J. and McCarthy, W. V. (2000). On the use of cross-validation to assess performance in multivariate prediction. Statistics and Computing 10 209–229.
  • Jordan, M. I., Ghahramani, Z., Jaakkola, T. S. and Saul, L. K. (1999). An introduction to variational methods for graphical models. Machine Learning 37 183–233.
  • Jylänki, P., Vanhatalo, J. and Vehtari, A. (2011). Gaussian Process Regression with a Student-t Likelihood. Journal of Machine Learning Research 12 3227–3257.
  • Karabatsos, G. (2006). Bayesian nonparametric model selection and model testing. Journal of Mathematical Psychology 50.
  • Kass, R. E. and Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association 90 773–795.
  • Key, J. T., Pericchi, L. R. and Smith, A. F. M. (1999). Bayesian Model Choice: What and Why? In Bayesian Statistics 6 ( J. M. Bernardo, J. O. Berger and A. P. Dawid, eds.) 343–370. Oxford University Press.
  • Kullback, S. and Leibler, R. A. (1951). On Information and Sufficiency. Annals of Mathematical Statistics 22 79–86.
  • Lacoste-Julien, S., Huszár, F. and Ghahramani, Z. (2011). Approximate inference for the loss-calibrated Bayesian. Journal of Machine Learning Research: Workshop and Conference Proceedings 15 416–424. AISTATS 2011 special issue.
  • Laud, P. and Ibrahim, J. (1995). Predictive model selection. Journal of the Royal Statistical Society. Series B (Methodological) 57 247–262.
  • Leamer, E. E. (1979). Information Criteria for Choice of Regression Models: A Comment. Econometrica 47 507–510.
  • Leung, D. H.-Y. (2005). Cross-validation in nonparametric regression with outliers. Annals of Statistics 33 2291–2310.
  • Lindley, D. V. (1968). The Choice of Variables in Multiple Regression. Journal of the Royal Statistical Society. Series B (Methodological) 30 31–66.
  • Lo, A. Y. (1987). A Large Sample Study of the Bayesian Bootstrap. Annals of Statistics 15 360–375.
  • MacKay, D. J. C. (1992). A Practical Bayesian Framework for Backpropagation Networks. Neural Computation 4 448–472.
  • Marin, J.-M. and Robert, C. P. (2010). Importance sampling methods for Bayesian discrimination between embedded models. In Frontiers of Statistical Decision Making and Bayesian Analysis ( M. H. Chen, D. K. Dey, P. Müller, D. Sun and K. Ye, eds.) 14, 513–553. Springer.
  • Marriott, J. M., Spencer, N. M. and Pettitt, A. N. (2001). A Bayesian Approach to Selecting Covariates for Prediction. Scandinavian Journal of Statistics 28 87–97.
  • Marshall, E. C. and Spiegelhalter, D. J. (2003). Approximate cross-validatory predictive checks in disease mapping models. Statistics in Medicine 22 1649–1660.
  • San Martini, A. and Spezzaferri, F. (1984). A Predictive Model Selection Criterion. Journal of the Royal Statistical Society. Series B (Methodological) 46 296–303.
  • Mason, D. M. and Newton, M. A. (1992). A Rank Statistics Approach to the Consistency of a General Bootstrap. Annals of Statistics 20 1611–1624.
  • McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models 37, 2nd ed. Chapman & Hall.
  • McCulloch, R. E. (1989). Local Model Influence. Journal of the American Statistical Association 84 473–478. http://www.jstor.org/stable/2289932
  • Meng, X.-L. (1994). Posterior Predictive $p$-Values. Annals of Statistics 22 1142–1160.
  • Meyer, M. C. and Laud, P. W. (2002). Predictive Variable Selection in Generalized Linear Models. Journal of the American Statistical Association 97 859–871.
  • Minka, T. (2001). A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, Massachusetts Institute of Technology.
  • Mitchell, T. J. and Beauchamp, J. J. (1988). Bayesian Variable Selection in Linear Regression (with discussion). Journal of the American Statistical Association 83.
  • Miyamoto, J. M. (1999). Quality-Adjusted Life Years (QALY) Utility Models under Expected Utility and Rank Dependent Utility Assumptions. Journal of Mathematical Psychology 43 201–237.
  • Moody, J. E. (1992). The Effective Number of Parameters: An Analysis of Generalization and Regularization in Nonlinear Learning Systems. In Advances in Neural Information Processing Systems 4 ( J. E. Moody, S. J. Hanson and R. P. Lippmann, eds.) 847–854. Morgan Kaufmann Publishers.
  • Murata, N., Yoshizawa, S. and Amari, S.-I. (1994). Network Information Criterion—Determining the number of hidden units for an Artificial Neural Network model. IEEE Transactions on Neural Networks 5 865–872.
  • Nadeau, C. and Bengio, S. (2000). Inference for the Generalization Error. In Advances in Neural Information Processing Systems 12 ( S. A. Solla, T. K. Leen and K.-R. Müller, eds.) 307–313. MIT Press.
  • Neal, R. M. (1998). Assessing Relevance Determination Methods Using DELVE. In Neural Networks and Machine Learning ( C. M. Bishop, ed.) 97–129. Springer-Verlag.
  • Newton, M. A. and Raftery, A. E. (1994). Approximate Bayesian Inference with the Weighted Likelihood Bootstrap (with discussion). Journal of the Royal Statistical Society. Series B (Methodological) 56 3–48.
  • Nickisch, H. and Rasmussen, C. E. (2008). Approximations for Binary Gaussian Process Classification. Journal of Machine Learning Research 9 2035–2078.
  • Nott, D. J. and Leng, C. (2010). Bayesian projection approaches to variable selection in generalized linear models. Computational Statistics & Data Analysis 54 3227–3241. http://dx.doi.org/10.1016/j.csda.2010.01.036
  • O’Hagan, A. (1995). Fractional Bayes Factors for Model Comparison (with discussion). Journal of the Royal Statistical Society. Series B (Methodological) 57 99–138.
  • O’Hagan, A. (2003). HSSS model criticism. In Highly Structured Stochastic Systems ( P. J. Green, N. L. Hjort and S. Richardson, eds.) 423–444. Oxford University Press.
  • O’Hagan, A. and Forster, J. (2004). Bayesian Inference, 2nd ed. Kendall’s Advanced Theory of Statistics 2B. Arnold.
  • Opper, M. and Winther, O. (2000). Gaussian Processes for Classification: Mean-Field Algorithms. Neural Computation 12 2655–2684.
  • Orr, M. J. L. (1996). Introduction to Radial Basis Function Networks [online]. Technical Report, Centre for Cognitive Science, University of Edinburgh, April 1996. Available at http://www.anc.ed.ac.uk/~mjo/papers/intro.ps.gz.
  • Peruggia, M. (1997). On the Variability of Case-Deletion Importance Sampling Weights in the Bayesian Linear Model. Journal of the American Statistical Association 92 199–207.
  • Plummer, M. (2008). Penalized loss functions for Bayesian model comparison. Biostatistics 9 523–539.
  • Raftery, A. E. and Zheng, Y. (2003). Discussion: Performance Of Bayesian Model Averaging. Journal of the American Statistical Association 98 931–938.
  • Raftery, A. E., Newton, M. A., Satagopan, J. M. and Krivitsky, P. (2007). Estimating the Integrated Likelihood via Posterior Simulation Using the Harmonic Mean Identity (with discussion). In Bayesian Statistics 8 ( J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West, eds.) 1–45. Oxford University Press.
  • Raiffa, H. and Schlaifer, R. (2000). Applied Statistical Decision Theory. John Wiley & Sons.
  • Rasmussen, C. E. and Ghahramani, Z. (2003). Bayesian Monte Carlo. In Advances in Neural Information Processing Systems 15 ( S. Becker, S. Thrun and K. Obermayer, eds.) 489–496. MIT Press, Cambridge, MA.
  • Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian Processes for Machine Learning. The MIT Press.
  • Rasmussen, C. E., Neal, R. M., Hinton, G. E., van Camp, D., Revow, M., Ghahramani, Z., Kustra, R. and Tibshirani, R. (1996). The DELVE Manual [online]. Version 1.1. Available at ftp://ftp.cs.utoronto.ca/pub/neuron/delve/doc/manual.ps.gz.
  • Rencher, A. C. and Pun, F. C. (1980). Inflation of $R^{2}$ in Best Subset Regression. Technometrics 22 49–53.
  • Reunanen, J. (2003). Overfitting in Making Comparisons Between Variable Selection Methods. Journal of Machine Learning Research 3 1371–1382.
  • Richardson, S. (2002). Discussion of ‘Bayesian measures of model complexity and fit’ by Spiegelhalter et al. Journal of the Royal Statistical Society. Series B (Statistical Methodology) 64 626–627.
  • Robert, C. P. (1996). Intrinsic losses. Theory and Decision 40 191–214.
  • Robert, C. P. (2001). The Bayesian Choice: from Decision-Theoretic Motivations to Computational Implementation, 2nd ed. Springer.
  • Robert, C. P. and Wraith, D. (2009). Computational methods for Bayesian model choice. In The 29th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. AIP Proceedings 1193 251–262.
  • Robins, J. M., van der Vaart, A. and Ventura, V. (2000). Asymptotic Distribution of P Values in Composite Null Models. Journal of the American Statistical Association 95 1143–1156.
  • Rubin, D. B. (1981). The Bayesian Bootstrap. Annals of Statistics 9 130–134.
  • Rubin, D. B. (1984). Bayesianly Justifiable and Relevant Frequency Calculations for the Applied Statistician. Annals of Statistics 12 1151–1172.
  • Rueda, R. (1992). A Bayesian alternative to parametric hypothesis testing. Test 1 61–67. http://www.springerlink.com/content/37501636313583g2/
  • Sawa, T. (1978). Information Criteria for Discriminating Among Alternative Regression Models. Econometrica 46 1273–1291.
  • Shao, J. (1993). Linear Model Selection by Cross-Validation. Journal of the American Statistical Association 88 486–494.
  • Shen, X., Huang, H.-C. and Ye, J. (2004). Inference after Model Selection. Journal of the American Statistical Association 99 751–762. http://www.jstor.org/stable/27590445
  • Shibata, R. (1989). Statistical aspects of model selection. In From data to model ( J. C. Willems, ed.) 215–240. Springer-Verlag.
  • Shimodaira, H. (2000). Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference 90 227–244.
  • Sinha, D., Chen, M.-H. and Ghosh, S. K. (1999). Bayesian Analysis and Model Selection for Interval-Censored Survival Data. Biometrics 55 585–590.
  • Skare, Ø., Bølviken, E. and Holden, L. (2003). Improved sampling-importance resampling and reduced bias importance sampling. Scandinavian Journal of Statistics 30 719–737.
  • Spiegelhalter, D. J., Best, N. G., Carlin, B. P. and van der Linde, A. (2002). Bayesian measures of model complexity and fit (with discussion). Journal of the Royal Statistical Society. Series B (Statistical Methodology) 64 583–639.
  • Stern, H. S. and Cressie, N. (2000). Posterior predictive model checks for disease mapping models. Statistics in Medicine 19 2377–2397.
  • Stone, M. (1974). Cross-Validatory Choice and Assessment of Statistical Predictions (with discussion). Journal of the Royal Statistical Society. Series B (Methodological) 36 111–147.
  • Stone, M. (1977). An Asymptotic Equivalence of Choice of Model by Cross-Validation and Akaike’s Criterion. Journal of the Royal Statistical Society. Series B (Methodological) 39 44–47.
  • Sugiyama, M. and Müller, K.-R. (2005). Input-dependent estimation of generalization error under covariate shift. Statistics & Decisions 23 249–279.
  • Sugiyama, M., Krauledat, M. and Müller, K.-R. (2007). Covariate Shift Adaptation by Importance Weighted Cross Validation. Journal of Machine Learning Research 8 985–1005.
  • Sundararajan, S. and Keerthi, S. S. (2001). Predictive Approaches for Choosing Hyperparameters in Gaussian Processes. Neural Computation 13 1103–1118.
  • Takeuchi, K. (1976). Distribution of Informational Statistics and a Criterion of Model Fitting (in Japanese). Suri-Kagaku (Mathematical Sciences) 153 12–18.
  • Tibshirani, R. (1996). Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society. Series B (Methodological) 58 267–288.
  • Tibshirani, R. J. and Tibshirani, R. (2009). A bias correction for the minimum error rate in cross-validation. Annals of Applied Statistics 3 822–829.
  • Tierney, L. and Kadane, J. B. (1986). Accurate Approximations for Posterior Moments and Marginal Densities. Journal of the American Statistical Association 81 82–86.
  • Tran, M.-N., Nott, D. J. and Leng, C. (2011). The predictive Lasso. Statistics and Computing 1–16.
  • Trottini, M. and Spezzaferri, F. (2002). A generalized predictive criterion for model selection. The Canadian Journal of Statistics 30 79–96.
  • Vanhatalo, J., Pietiläinen, V. and Vehtari, A. (2010). Approximate inference for disease mapping with sparse Gaussian processes. Statistics in Medicine 29 1580–1607.
  • Vannucci, M., Brown, P. J. and Fearn, T. (2003). A decision theoretical approach to wavelet regression on curves with a high number of regressors. Journal of Statistical Planning and Inference 112 195–212.
  • Varma, S. and Simon, R. (2006). Bias in error estimation when using cross-validation for model selection. BMC Bioinformatics 7 91. http://www.ncbi.nlm.nih.gov/pubmed/16504092
  • Vehtari, A. (2002). Discussion of “Bayesian measures of model complexity and fit” by Spiegelhalter et al. Journal of the Royal Statistical Society. Series B (Statistical Methodology) 64 620.
  • Vehtari, A. and Lampinen, J. (2002). Bayesian Model Assessment and Comparison Using Cross-Validation Predictive Densities. Neural Computation 14 2439–2468.
  • Vehtari, A. and Lampinen, J. (2004). Model Selection via Predictive Explanatory Power. Technical Report No. B38, Helsinki University of Technology, Laboratory of Computational Engineering.
  • Vlachos, P. K. and Gelfand, A. E. (2003). On the Calibration of Bayesian Model Choice Criteria. Journal of Statistical Planning and Inference 111 223–234.
  • Watanabe, S. (2009). Algebraic Geometry and Statistical Learning Theory. Cambridge University Press.
  • Watanabe, S. (2010a). Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory. Journal of Machine Learning Research 11 3571–3594.
  • Watanabe, S. (2010b). Equations of states in singular statistical estimation. Neural Networks 23 20–34.
  • Watanabe, S. (2010c). A limit theorem in singular regression problem. Advanced Studies in Pure Mathematics 57 473–492.
  • Weng, C.-S. (1989). On a Second-Order Asymptotic Property of the Bayesian Bootstrap Mean. Annals of Statistics 17 705–710.
  • Yang, Y. (2005). Can the Strengths of AIC and BIC Be Shared? A Conflict between Model Identification and Regression Estimation. Biometrika 92 937–950.
  • Yang, Y. (2007). Consistency of Cross Validation for Comparing Regression Procedures. The Annals of Statistics 35 2450–2473.
  • Young, A. S. (1987a). On a Bayesian criterion for choosing predictive sub-models in linear regression. Metrika 34 325–339.
  • Young, A. S. (1987b). On the information criterion for selecting regressors. Metrika 34 185–194.
  • Zhu, L. and Carlin, B. P. (2000). Comparing hierarchical models for spatio-temporally misaligned data using the deviance information criterion. Statistics in Medicine 19 2265–2278.