We elaborate on Watson and Holmes' observation that misspecification is contextual: a model that is wrong can still be adequate in one prediction context, yet grossly inadequate in another. One can accommodate such phenomena by adopting a generalized posterior, in which the likelihood is multiplied by an exponentiated loss. We argue that Watson and Holmes' characterization of such generalized posteriors does not fully explain their good practical performance, and we provide an alternative explanation that suggests a further extension of the method.
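In symbols (the notation here is ours, not taken from the paper), a generalized posterior of the kind described above, with the likelihood multiplied by an exponentiated loss, can be sketched as:

```latex
% Sketch of a generalized posterior; notation is illustrative, not the paper's.
% Data x, prior \pi(\theta), likelihood p(x \mid \theta),
% loss \ell_x(\theta), and a scaling parameter \eta \ge 0:
\pi_\eta(\theta \mid x) \;\propto\;
  \pi(\theta)\, p(x \mid \theta)\,
  \exp\!\bigl(-\eta\, \ell_x(\theta)\bigr).
```

Setting $\eta = 0$ recovers the standard posterior, while larger $\eta$ lets the loss, and hence the prediction context, reweight the inference.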
"Contextuality of Misspecification and Data-Dependent Losses." Statist. Sci. 31(4): 495–498, November 2016. https://doi.org/10.1214/16-STS561