Bayesian Analysis

Context-Dependent Score Based Bayesian Information Criteria

N. T. Underhill and J. Q. Smith


Abstract

In a number of applications, we argue that standard Bayes factor model comparison and selection may be inappropriate for decision making under specific, utility-based criteria. It has been suggested that the use of scoring rules in this context allows greater flexibility: scores can be customised to a client’s utility and model selection can proceed on the basis of the highest scoring model. We argue here that comparing the cumulative scores of competing models is not ideal, because it tends to ignore a model’s ability to ‘catch up’ through parameter learning. The alternative approach of selecting a model on its maximum posterior score, based on a plug-in or posterior expected value, is problematic in that it uses the data twice, once for estimation and again for evaluation. We therefore introduce a new Bayesian posterior score information criterion (BPSIC), a generalisation of the Bayesian predictive information criterion proposed by Ando (2007). This allows the analyst both to tailor an appropriate scoring function to the needs of the ultimate decision maker and to correct appropriately for the bias incurred by using the data on a posterior basis to revise parameter estimates. We show that this criterion can provide a convenient method of initial model comparison when the number of models under consideration is large or when computational burdens are high. We illustrate the new methods with simulated examples and with real data from the UK electricity imbalance market.
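To fix ideas, the following is a schematic sketch of a bias-corrected posterior score criterion in the spirit of Ando’s (2007) BPIC, not the paper’s exact BPSIC definition. Given observations $y_1,\dots,y_n$, a scoring rule $S(y,p)$ chosen to reflect the decision maker’s utility (larger scores better), and a posterior $p(\theta \mid y)$, one evaluates the posterior expected in-sample score and subtracts an estimated optimism term; the penalty $n\hat{b}$ below is a placeholder for such a correction.

\[
\mathrm{BPSIC} \;\approx\; \sum_{i=1}^{n} \mathbb{E}_{\theta \mid y}\!\left[\, S\!\bigl(y_i,\; p(\cdot \mid \theta)\bigr) \right] \;-\; n\,\hat{b},
\]

where $n\hat{b}$ estimates the bias that arises from scoring the same data used to form the posterior. With the logarithmic score, a criterion of this shape reduces to a BPIC-type quantity; other proper scoring rules tailor the comparison to the client’s decision problem.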

Article information

Source
Bayesian Anal., Volume 11, Number 4 (2016), 1005-1033.

Dates
First available in Project Euclid: 29 October 2015

Permanent link to this document
https://projecteuclid.org/euclid.ba/1446124066

Digital Object Identifier
doi:10.1214/15-BA980

Mathematical Reviews number (MathSciNet)
MR3545472

Zentralblatt MATH identifier
1357.62135

Keywords
scoring rules; Bayesian model selection; information criteria; utility-based model selection

Citation

Underhill, N. T.; Smith, J. Q. Context-Dependent Score Based Bayesian Information Criteria. Bayesian Anal. 11 (2016), no. 4, 1005--1033. doi:10.1214/15-BA980. https://projecteuclid.org/euclid.ba/1446124066



References

  • Aitkin, M. (1991). “Posterior Bayes factors.” Journal of the Royal Statistical Society Series B, 53(1): 111–142.
  • Akaike, H. (1973). “Information theory and an extension of the maximum likelihood principle.” In: Petrov, B. and Csaki, F. (eds.), Second International Symposium on Information Theory, 267–281.
  • Ando, T. (2007). “Bayesian predictive information criterion for the evaluation of hierarchical Bayesian and empirical Bayes models.” Biometrika, 94(2): 443–458.
  • Aravkin, A., Kambadur, A., Lozano, A., and Luss, R. (2014). “Sparse quantile Huber regression for efficient and robust estimation.” arXiv:1402.4624v1.
  • Arlot, S. and Celisse, A. (2010). “A survey of cross-validation procedures for model selection.” Statistics Surveys, 4: 40–79.
  • Azzalini, A. (1985). “A class of distributions which includes the normal ones.” Scandinavian Journal of Statistics, 12: 171–178.
  • Barndorff-Nielsen, O. and Cox, D. (1989). Asymptotic Techniques for use in Statistics. London: Chapman and Hall.
  • Berk, R. (1966). “Limiting behavior of posterior distributions when the model is incorrect.” Annals of Mathematical Statistics, 37(1): 51–58.
  • Bernardo, J. (1979). “Expected information as expected utility.” The Annals of Statistics, 7: 686–690.
  • Bernardo, J. and Smith, A. (1994). Bayesian Theory. New York: Wiley.
  • Bissiri, P., Holmes, C., and Walker, S. (2013). “A General Framework for Updating Belief Distributions.” arXiv:1306.6430, 1–50.
  • Celeux, G., Forbes, F., Robert, C., and Titterington, D. (2006). “Rejoinder to ‘Deviance information criteria for missing data models’.” Bayesian Analysis, 70.
  • Claeskens, G. and Hjort, N. (2003). “The focused information criterion (with discussion).” Journal of the American Statistical Association, 98: 879–899.
  • Dawid, A. (1984). “Statistical theory – The prequential approach.” Journal of the Royal Statistical Society Series A, 147: 278–292.
  • Dawid, A. (2007). “The geometry of proper scoring rules.” Annals of the Institute of Statistical Mathematics, 59: 77–93.
  • Fruhwirth-Schnatter, S. and Pyne, S. (2010). “Bayesian inference for finite mixtures of univariate and multivariate skew-normal and skew-t distributions.” Biostatistics, 11(2): 317–336.
  • Gelfand, A. and Dey, D. (1994). “Bayesian model choice: Asymptotics and exact calculations.” Journal of the Royal Statistical Society Series B, 56(3): 501–514.
  • Ghahramani, Z. (2004). Advanced Lectures on Machine Learning. Springer.
  • Gneiting, T. (2011). “Making and evaluating point forecasts.” Journal of the American Statistical Association, 106(494): 746–762.
  • Gneiting, T. and Raftery, A. (2007). “Strictly proper scoring rules, prediction and estimation.” Journal of the American Statistical Association, 102(477): 359–378.
  • Kadane, J. and Dickey, J. (1980). Bayesian Decision Theory and the Simplification of Models, 245–268. New York: Academic Press.
  • Kass, R. and Raftery, A. (1995). “Bayes factors.” Journal of the American Statistical Association, 90(430): 773–795.
  • Key, J., Pericchi, L., and Smith, A. (1999). “Bayesian model choice: What and why?” In: Bernardo, J. (ed.), Bayesian Statistics 6, 343–370. Oxford University Press.
  • Linhart, H. and Zucchini, W. (1986). Model Selection. John Wiley and Sons.
  • Musio, M. and Dawid, P. (2013). “Model selection with proper scoring rules.” In: Cambridge Statistics Initiative One-Day Meeting.
  • Phillips, L. (1982). “Requisite decision modelling: A case study.” The Journal of the Operational Research Society, 33: 303–311.
  • Schwarz, G. (1978). “Estimating the dimension of a model.” Annals of Statistics, 6(2): 461–464.
  • Spiegelhalter, D., Best, N., Carlin, B., and Van Der Linde, A. (2002). “Bayesian measures of model complexity and fit.” Journal of the Royal Statistical Society Series B, 64(4): 583–639.
  • Stone, M. (1974). “Cross-validatory choice and assessment of statistical predictions.” Journal of the Royal Statistical Society Series B, 36(2): 111–147.
  • Stone, M. (1977). “An asymptotic equivalence of choice of model by cross-validation and Akaike’s criterion.” Journal of the Royal Statistical Society Series B, 39(1): 44–47.
  • van Erven, T., Grunwald, P., and de Rooij, S. (2012). “Catching up faster by switching sooner: A predictive approach to adaptive estimation with an application to the AIC-BIC dilemma.” Journal of the Royal Statistical Society Series B, 74(2): 1–37.
  • Vehtari, A. (2001). “Bayesian model assessment and selection using expected utilities.” Ph.D. thesis, Helsinki University of Technology.
  • Vehtari, A. and Ojanen, J. (2012). “A survey of Bayesian predictive methods for model assessment, selection and comparison.” Statistics Surveys, 6: 142–228.
  • Winkler, R., Munoz, J., Bernardo, J., Blattenberger, G., and Kadane, J. (1996). “Scoring rules and the evaluation of probabilities.” Test, 5(1): 1–60.
  • Xu, X., Lu, P., MacEachern, S., and Xu, R. (2011). “Calibrated Bayes factors for model comparison.” Technical Report 855, Department of Statistics, Ohio State University.
  • Zhou, S. (2011). “Bayesian model selection in terms of Kullback-Leibler discrepancy.” Ph.D. thesis, Columbia University.