Expected-posterior priors (EPPs) have proved extremely useful for testing hypotheses on the regression coefficients of normal linear models. One advantage of EPPs is that impropriety of the baseline priors causes no indeterminacy in the computation of Bayes factors. However, in regression problems they rely on one or more training samples, which can influence the resulting posterior distribution. The power-expected-posterior (PEP) priors, by contrast, are minimally informative priors that reduce the effect of the training samples on the EPP approach by combining ideas from the power-prior and unit-information-prior methodologies. In this paper we prove the consistency of the Bayes factors based on the PEP priors, with the independence Jeffreys prior as baseline, for normal linear models under very mild conditions on the design matrix.
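As a rough sketch of the construction described above (the notation here is assumed for illustration, not taken from the abstract): the expected-posterior prior averages the posterior under a baseline prior $\pi^{N}$ over imaginary training samples $y^{*}$, and the power-expected-posterior variant additionally tempers the likelihood of $y^{*}$ by a power $1/\delta$, so that the training sample contributes only unit information on average.

```latex
% Expected-posterior prior: average the baseline posterior over
% imaginary training samples y^*, drawn from a predictive density m^*:
\pi^{EP}(\theta) \;=\; \int \pi^{N}(\theta \mid y^{*})\, m^{*}(y^{*})\,\mathrm{d}y^{*}.

% Power-expected-posterior prior: the likelihood of y^* is raised to the
% power 1/\delta before forming the posterior, with \delta set equal to
% the training-sample size so y^* carries roughly unit information:
\pi^{PEP}(\theta \mid \delta) \;\propto\;
  \int \pi^{N}(\theta \mid y^{*}, \delta)\, m^{N}(y^{*} \mid \delta)\,\mathrm{d}y^{*}.
```

With $\delta = 1$ the PEP prior reduces to the EPP; larger $\delta$ flattens the contribution of the imaginary data, which is how the approach limits the influence of training samples noted in the abstract.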
"Limiting behavior of the Jeffreys power-expected-posterior Bayes factor in Gaussian linear models." Braz. J. Probab. Stat. 30 (2) 299 - 320, May 2016. https://doi.org/10.1214/15-BJPS281