Bayesian Analysis

Neutral-data comparisons for Bayesian testing

Dan J. Spitzner

Full-text: Open access


A novel approach to evidence assessment in Bayesian hypothesis testing is proposed, in the form of a "neutral-data comparison." The proposed assessment is similar to a Bayes factor, but, rather than comparing posterior to prior odds, it compares the posterior odds of the observed data to those calculated on "neutral" data, which arise as part of the elicitation of prior knowledge. The article develops a general theory of neutral-data comparisons, motivated largely by the Jeffreys-Lindley paradox, and develops methodology for specifying and working with neutral data in the context of Gaussian linear-models analysis. The proposed methodology is shown to exhibit exceptionally strong asymptotic-consistency properties in high dimensions, and, in an application example, to accommodate challenging analysis objectives using basic computational algorithms.

Article information

Bayesian Anal., Volume 6, Number 4 (2011), 603–638.

First available in Project Euclid: 13 June 2012

Keywords: Bayesian hypothesis testing; Bayes factors; Bayesian asymptotic-consistency; model choice in high dimensions; analysis of variance


Spitzner, Dan J. Neutral-data comparisons for Bayesian testing. Bayesian Anal. 6 (2011), no. 4, 603–638. doi:10.1214/11-BA623.

References

  • Aitkin, M. (1991). "Posterior Bayes factors." Journal of the Royal Statistical Society - Series B, 53: 111–142.
  • Bellman, R. (1960). Introduction to matrix analysis. New York: McGraw-Hill.
  • Berger, J. O., Bernardo, J. M., and Sun, D. (2009). "The formal definition of reference priors." Annals of Statistics, 37: 905–938.
  • Berger, J. O., Ghosh, J. K., and Mukhopadhyay, N. (2003). "Approximations and consistency of Bayes factors as model dimension grows." Journal of Statistical Planning and Inference, 112: 241–258.
  • Berger, J. O. and Pericchi, L. (1996). "The intrinsic Bayes factor for model selection and prediction." Journal of the American Statistical Association, 91: 109–122.
  • Billingsley, P. (1995). Probability and measure. New York: Wiley, 3rd edition.
  • Casella, G., Girón, F. J., Martinez, M. L., and Moreno, E. (2009). "Consistency of Bayesian procedures for variable selection." Annals of Statistics, 37: 1207–1228.
  • Crowley, E. M. (1997). "Product partition models for normal means." Journal of the American Statistical Association, 92: 192–198.
  • Diaconis, P. and Freedman, D. (1986). "On the consistency of Bayes estimates." Annals of Statistics, 14: 1–26.
  • Fan, J. and Lv, J. (2008). "Sure independence screening for ultrahigh dimensional feature space." Journal of the Royal Statistical Society - Series B, 70: 849–911.
  • Fan, J. and Lv, J. (2010). "A selective overview of variable selection in high dimensional feature space." Statistica Sinica, 20: 101–148.
  • García-Donato, G. and Sun, D. (2007). "Objective priors for model selection in one-way random effects models." The Canadian Journal of Statistics, 35: 303–320.
  • Gelman, A. (2005). "Analysis of variance–why it is more important than ever (with discussion)." Annals of Statistics, 33: 1–53.
  • Good, I. J. (1950). Probability and the Weighing of Evidence. London: Griffin.
  • Guo, R. and Speckman, P. (2009). "Bayes factor consistency in linear models." In The 2009 International Workshop on Objective Bayes Methodology in Philadelphia, PA, June 5-9, 2009. OBayes09/AbstractPapers/speckman.pdf.
  • Hoaglin, D. C., Mosteller, F., and Tukey, J. W. (1991). Fundamentals of Exploratory Analysis of Variance. New York: Wiley.
  • Jeffreys, H. (1961). Theory of Probability. Oxford: Oxford University Press, 3rd edition.
  • Johnson, N. L., Kotz, S., and Balakrishnan, N. (1994). Continuous Univariate Distributions. New York: Wiley, 2nd edition.
  • Johnson, V. E. and Rossell, D. (2010). "On the use of non-local prior densities for default Bayesian hypothesis tests." Journal of the Royal Statistical Society - Series B, 72: 143–170.
  • Kass, R. E. and Raftery, A. E. (1995). "Bayes factors." Journal of the American Statistical Association, 90: 773–795.
  • Lavine, M. and Schervish, M. J. (1999). "Bayes factors: what they are and what they are not." The American Statistician, 53: 119–122.
  • Liang, F., Paulo, R., Molina, G., Clyde, M. A., and Berger, J. O. (2008). "Mixtures of $g$ priors for Bayesian variable selection." Journal of the American Statistical Association, 103: 410–423.
  • Lindley, D. V. (1957). "A statistical paradox." Biometrika, 44: 187–192.
  • Maruyama, Y. and George, E. I. (2010). "gBF: A Fully Bayes Factor with a Generalized g-prior." arXiv:0801.4410v2.
  • O'Hagan, A. (1995). "Fractional Bayes factors for model comparisons." Journal of the Royal Statistical Society - Series B, 57: 99–138.
  • Pérez, J. M. and Berger, J. O. (2002). "Expected-posterior prior distributions for model selection." Biometrika, 89: 491–511.
  • Robert, C. P. (1993). "A note on Jeffreys-Lindley paradox." Statistica Sinica, 3: 603–608.
  • Robert, C. P. and Casella, G. (1999). Monte Carlo Statistical Methods. New York: Springer.
  • Schwarz, G. (1978). "Estimating the dimension of a model." Annals of Statistics, 6: 461–464.
  • Scott, J. G. and Berger, J. O. (2006). "An exploration of aspects of Bayesian multiple testing." Journal of Statistical Planning and Inference, 136: 2144–2162.
  • Smith, A. F. M. and Spiegelhalter, D. J. (1980). "Bayes factors and choice criteria for linear models." Journal of the Royal Statistical Society - Series B, 42: 213–220.
  • Spiegelhalter, D. J. and Smith, A. F. M. (1982). "Bayes factors for linear and log-linear models with vague prior information." Journal of the Royal Statistical Society - Series B, 44: 377–387.
  • Spitzner, D. J. (2008). "An asymptotic viewpoint on high-dimensional Bayesian testing." Bayesian Analysis, 3: 121–160.