Abstract
In models with conditionally independent observations, it is shown that the posterior variance of the log-likelihood from observation $i$ is a measure of that observation's local influence. This result is obtained by considering the Kullback-Leibler divergence between baseline and case-weight perturbed posteriors, with local influence being the curvature of this divergence evaluated at the baseline posterior. Case-weighting is formulated using quasi-likelihood, and hence for binomial or Poisson observations the posterior variance of an observation's log-likelihood provides a measure of sensitivity to mild mis-specification of its dispersion. In general, the case-weighted posteriors are quasi-posteriors because they do not arise from a formal sampling model. Their propriety is established under a simple sufficient condition. A second local measure of posterior change, the curvature of the Kullback-Leibler divergence between predictive densities, is seen to be the posterior variance (over future observations) of the expected log-likelihood, and can easily be estimated using importance sampling. Suggestions for identifying locally influential observations are given. The methodology is applied to a well-known simple linear model dataset, to a nonlinear state-space model, and to a random-effects binary response model.
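The first influence measure described above is straightforward to compute from MCMC output: evaluate each observation's log-likelihood at every posterior draw and take the variance across draws. The following is a minimal sketch of that idea for a toy normal-mean model with known unit variance and a flat prior (a hypothetical example constructed here for illustration, not a dataset from the paper); the closed-form posterior $\mu \mid y \sim N(\bar{y}, 1/n)$ stands in for MCMC samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 draws from N(0, 1) plus one deliberate outlier at y = 6
y = np.concatenate([rng.normal(0.0, 1.0, 20), [6.0]])
n = len(y)

# Posterior draws of mu under a flat prior: mu | y ~ N(ybar, 1/n)
# (in practice these would be MCMC samples from the baseline posterior)
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(n), 5000)

# Per-observation log-likelihood at each posterior draw; shape (draws, n)
loglik = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu_draws[:, None]) ** 2

# Local influence measure: posterior variance of each observation's
# log-likelihood, computed across posterior draws
influence = loglik.var(axis=0)

# The outlier dominates, since its log-likelihood varies most as mu moves
print(influence.argmax())  # index of the most influential observation
```

Observations far from the bulk of the data have log-likelihoods that change rapidly as the parameter varies over its posterior, so their posterior log-likelihood variance is large, which is what flags them as locally influential.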
Citation
Russell B. Millar and Wayne S. Stewart, "Assessment of locally influential observations in Bayesian models," Bayesian Analysis 2(2): 365-383, June 2007. https://doi.org/10.1214/07-BA216