Combining distributions is an important issue in decision theory and Bayesian inference. Logarithmic pooling is a popular method for aggregating expert opinions using a set of weights that reflect the reliability of each information source. However, the resulting pooled distribution depends heavily on the set of weights given to each opinion/prior, and thus careful consideration must be given to the choice of weights. In this paper we review and extend the statistical theory of logarithmic pooling, focusing on the assignment of the weights using a hierarchical prior distribution. We explore several statistical applications, such as the estimation of survival probabilities, meta-analysis and Bayesian melding of deterministic models of population growth and epidemics. We show that it is possible to learn the weights from data, although identifiability issues may arise for some configurations of priors and data. Furthermore, we show how the hierarchical approach leads to posterior distributions that are able to accommodate prior-data conflict in complex models.
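To make the pooling operator concrete, here is a minimal numerical sketch (not the paper's implementation): the logarithmic pool of densities $f_1, \dots, f_K$ with weights $w_i \ge 0$, $\sum_i w_i = 1$, is $\pi(x) \propto \prod_i f_i(x)^{w_i}$. The Gaussian opinions and weights below are illustrative choices only; for Gaussian inputs the pool is again Gaussian, which the script checks against the closed form.

```python
import numpy as np

def log_pool(densities, weights, dx):
    """Logarithmic pool: renormalised weighted geometric mean of densities
    evaluated on a common grid with spacing dx."""
    log_pooled = sum(w * np.log(d) for w, d in zip(weights, densities))
    pooled = np.exp(log_pooled - log_pooled.max())  # stabilise before normalising
    return pooled / (pooled.sum() * dx)

def npdf(x, m, s):
    """Normal density with mean m and standard deviation s."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Two illustrative expert opinions: N(0, 1) and N(3, 2^2), weights 0.7 / 0.3.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
opinions = [npdf(x, 0.0, 1.0), npdf(x, 3.0, 2.0)]
w = [0.7, 0.3]
pooled = log_pool(opinions, w, dx)

# Closed form for Gaussian opinions: the pool is Gaussian with
# precision tau = sum(w_i / s_i^2) and mean sum(w_i * m_i / s_i^2) / tau.
tau = 0.7 / 1.0 + 0.3 / 4.0
closed_form_mean = (0.7 * 0.0 / 1.0 + 0.3 * 3.0 / 4.0) / tau
numerical_mean = (x * pooled).sum() * dx
```

Note how the pool is pulled toward the more heavily weighted, more precise opinion; this sensitivity to the weights is what motivates placing a hierarchical prior on them.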
The authors would like to thank Professors Adrian Raftery, Christian Genest, Kevin McConway and Mike West, as well as Drs. David Poole, Eduardo Mendes and Felipe Figueiredo for helpful discussions. We would also like to thank the two anonymous referees, whose suggestions greatly improved the paper. DAMV and LSB were supported in part by CAPES under CAPES/Cofecub project (N. 833/15). FCC is grateful to Fundação Getúlio Vargas for funding during this project.
"Bayesian Inference for the Weights in Logarithmic Pooling." Bayesian Anal. 18(1), 223–251, March 2023. https://doi.org/10.1214/22-BA1311