## Bayesian Analysis

### Identifying outliers in Bayesian hierarchical models: a simulation-based approach

#### Abstract

A variety of simulation-based techniques have been proposed for detecting divergent behaviour at each level of a hierarchical model. We investigate a diagnostic test based on measuring the conflict between two independent sources of evidence regarding a parameter: that arising from its predictive prior given the remainder of the data, and that arising from its likelihood. This test gives rise to a $p$-value that exactly matches or closely approximates a cross-validatory predictive comparison, and yet is more widely applicable. Its properties are explored for normal hierarchical models and in an application in which divergent surgical mortality was suspected. Since full cross-validation is so computationally demanding, we examine full-data approximations, which are shown to have only moderate conservatism in normal models. A second example concerns criticism of a complex growth curve model at both observation and parameter levels, and illustrates the issue of dealing with multiple $p$-values within a Bayesian framework. We conclude by proposing an overall strategy for detecting divergent behaviour in hierarchical models.
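The conflict diagnostic the abstract describes can be illustrated with a minimal simulation sketch. This is not the authors' implementation: it assumes a toy normal-normal model with known observation variance, hypothetical data in which the last unit is the suspected outlier, and a crude empirical-Bayes stand-in for the posterior of the hyperparameters given the remaining data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy normal hierarchical model: y_i ~ N(theta_i, s2), theta_i ~ N(mu, tau2).
# Hypothetical data; the suspected divergent unit is the last one.
y = np.array([0.2, -0.1, 0.4, 0.0, 3.0])
s2 = 1.0          # known observation variance (assumed)
i = 4             # index of the unit under scrutiny

# "Remainder" data used to build the predictive prior for theta_i.
y_rest = np.delete(y, i)

# Crude empirical-Bayes plug-in for (mu, tau2) given the remainder
# (a full analysis would simulate these from their posterior).
mu_hat = y_rest.mean()
tau2_hat = max(y_rest.var(ddof=1) - s2, 0.01)

n_sim = 100_000
# Evidence source 1: replicate theta_i drawn from its predictive prior
# given the rest of the data.
theta_rep = rng.normal(mu_hat, np.sqrt(tau2_hat), n_sim)
# Evidence source 2: theta_i as seen by its own likelihood alone
# (a flat prior gives N(y_i, s2)).
theta_fix = rng.normal(y[i], np.sqrt(s2), n_sim)

# Two-sided conflict p-value: how plausibly do the two sources agree?
delta = theta_rep - theta_fix
p_con = 2 * min(np.mean(delta <= 0), np.mean(delta >= 0))
print(f"conflict p-value for unit {i}: {p_con:.4f}")
```

A small `p_con` flags conflict between the unit's own data and what the rest of the model predicts for it; with the hypothetical data above the last unit is flagged strongly.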

#### Article information

- **Source:** Bayesian Anal., Volume 2, Number 2 (2007), 409-444.
- **Dates:** First available in Project Euclid: 22 June 2012
- **Permanent link:** https://projecteuclid.org/euclid.ba/1340393242
- **Digital Object Identifier:** doi:10.1214/07-BA218
- **Mathematical Reviews number (MathSciNet):** MR2312289
- **Zentralblatt MATH identifier:** 1331.62032

#### Citation

Marshall, E. C.; Spiegelhalter, D. J. Identifying outliers in Bayesian hierarchical models: a simulation-based approach. Bayesian Anal. 2 (2007), no. 2, 409--444. doi:10.1214/07-BA218. https://projecteuclid.org/euclid.ba/1340393242