Abstract
We study the stability of posterior predictive inferences to the specification of the likelihood model and perturbations of the data generating process. In modern big data analyses, useful broad structural judgements may be elicited from the decision-maker but a level of interpolation is required to arrive at a likelihood model. As a result, an often computationally convenient canonical form is used in place of the decision-maker’s true beliefs. Equally, in practice, observational datasets often contain unforeseen heterogeneities and recording errors and therefore do not necessarily correspond to how the data generating process was idealised by the decision-maker. Acknowledging such imprecisions, a faithful Bayesian analysis should ideally be stable across reasonable equivalence classes of such inputs. We are able to guarantee that traditional Bayesian updating provides stability across only a very strict class of likelihood models and data generating processes, requiring the decision-maker to elicit their beliefs and understand how the data was generated with an unreasonable degree of accuracy. On the other hand, a generalised Bayesian alternative using the β-divergence loss function is shown to be stable across practical and interpretable neighbourhoods, providing assurances that posterior inferences are not overly dependent on accidentally introduced spurious specifications or data collection errors. We illustrate this in linear regression, binary classification, and mixture modelling examples, showing that stable updating does not compromise the ability to learn about the data generating process. These stability results provide a compelling justification for using generalised Bayes to facilitate inference under simplified canonical models.
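For reference, and not part of the abstract itself, the generalised Bayesian update induced by the β-divergence loss is commonly written (up to scaling conventions) as in the sketch below; the notation is standard rather than taken from this page, with f_θ the model density, π the prior, and β > 0 the divergence parameter:

\[
\ell_\beta(x, f_\theta) \;=\; -\frac{1}{\beta}\, f_\theta(x)^{\beta} \;+\; \frac{1}{1+\beta} \int f_\theta(z)^{1+\beta}\, dz,
\qquad
\pi_\beta(\theta \mid x_{1:n}) \;\propto\; \pi(\theta)\,\exp\Bigl\{-\sum_{i=1}^{n} \ell_\beta(x_i, f_\theta)\Bigr\}.
\]

As β → 0 the loss recovers the negative log-likelihood up to θ-independent constants, so the update reduces to standard Bayesian updating; for β > 0 the loss is bounded for bounded densities, which damps the influence of observations the model deems improbable and underlies the stability properties the abstract describes.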
Funding Statement
JJ was partially funded by the Ayudas Fundación BBVA a Equipos de Investigación Científica 2017, the Government of Spain's Plan Nacional PGC2018-101643-B-I00, and a Juan de la Cierva Formación fellowship FJC2020-046348-I. CH was supported by the EPSRC Bayes4Health programme grant, and both CH and JQS were supported by the Alan Turing Institute, UK.
Acknowledgments
The authors would like to thank Danny Williamson, Christian Robert, and Sebastian Vollmer for their insightful discussions on the topics in this paper. We would also like to thank two anonymous reviewers, the Associate Editor, and the Editor for their help in improving this paper.
Citation
Jack Jewson, Jim Q. Smith, and Chris Holmes. "On the Stability of General Bayesian Inference." Bayesian Analysis, Advance Publication, 1–31, 2024. https://doi.org/10.1214/24-BA1502