Bayesian Analysis

Identifying outliers in Bayesian hierarchical models: a simulation-based approach

E. C. Marshall and D. J. Spiegelhalter



A variety of simulation-based techniques have been proposed for detecting divergent behaviour at each level of a hierarchical model. We investigate a diagnostic test based on measuring the conflict between two independent sources of evidence regarding a parameter: that arising from its predictive prior given the remainder of the data, and that arising from its likelihood. This test gives rise to a $p$-value that exactly matches or closely approximates a cross-validatory predictive comparison, and yet is more widely applicable. Its properties are explored for normal hierarchical models and in an application in which divergent surgical mortality was suspected. Since full cross-validation is so computationally demanding, we examine full-data approximations, which are shown to exhibit only moderate conservatism in normal models. A second example concerns criticism of a complex growth curve model at both observation and parameter levels, and illustrates the issue of dealing with multiple $p$-values within a Bayesian framework. We conclude by proposing an overall strategy for detecting divergent behaviour in hierarchical models.
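The conflict diagnostic described above can be illustrated with a minimal Monte Carlo sketch. This is not the authors' implementation; it assumes a simple normal-normal hierarchical model $y_i \sim N(\theta_i, \sigma^2)$, $\theta_i \sim N(\mu, \tau^2)$ with known variances and a flat prior on $\mu$, and all variable names are hypothetical. For each unit, the model is refitted to the remaining data, a replicate observation is drawn from the unit's predictive prior, and the one-sided $p$-value compares the replicate with the observed value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y[i] ~ N(theta[i], s2), theta[i] ~ N(mu, t2);
# the last unit is deliberately divergent.
y = np.array([-1.2, 0.3, 0.1, -0.4, 0.8, 4.5])
s2, t2 = 1.0, 1.0  # variances assumed known for this sketch

def conflict_pvalue(y, i, s2, t2, n_sim=100_000, rng=rng):
    """Cross-validatory mixed predictive p-value for unit i.

    Refit the model to the data with unit i removed (flat prior on mu),
    draw a replicate y_rep from the predictive prior for unit i, and
    compare it with the observed y[i].
    """
    y_rest = np.delete(y, i)
    n = len(y_rest)
    # Posterior for mu given the remaining data (flat prior):
    # mu | y_rest ~ N(mean(y_rest), (s2 + t2) / n)
    mu = rng.normal(y_rest.mean(), np.sqrt((s2 + t2) / n), size=n_sim)
    theta = rng.normal(mu, np.sqrt(t2))      # predictive prior for theta_i
    y_rep = rng.normal(theta, np.sqrt(s2))   # predictive prior for y_i
    return np.mean(y_rep >= y[i])            # one-sided p-value

for i in range(len(y)):
    print(f"unit {i}: p = {conflict_pvalue(y, i, s2, t2):.3f}")
```

In this toy example the divergent final unit yields a very small $p$-value, while the remaining units give unremarkable values; full cross-validation of this kind is what the paper's full-data approximations are designed to avoid.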

Article information

Bayesian Anal. Volume 2, Number 2 (2007), 409-444.

First available in Project Euclid: 22 June 2012


Keywords: hierarchical models; diagnostics; outliers; distributional assumptions


Marshall, E. C.; Spiegelhalter, D. J. Identifying outliers in Bayesian hierarchical models: a simulation-based approach. Bayesian Anal. 2 (2007), no. 2, 409--444. doi:10.1214/07-BA218.
