The Annals of Statistics
Ann. Statist., Volume 9, Number 2 (1981), 235-244.
Bayesian Inference Using Intervals of Measures
Partial prior knowledge is quantified by an interval $I(L, U)$ of $\sigma$-finite prior measures $Q$ satisfying $L(A) \leq Q(A) \leq U(A)$ for all measurable sets $A$, and is interpreted as acceptance of a family of bets. The concept of conditional probability distributions is generalized to that of conditional measures, and Bayes' theorem is extended to accommodate unbounded priors. By Bayes' theorem, the interval $I(L, U)$ of prior measures is transformed, upon observing $X$, into a similar interval $I(L_x, U_x)$ of posterior measures. Upper and lower expectations and variances induced by such intervals of measures are obtained. Under weak regularity conditions, as the amount of data increases, these upper and lower posterior expectations are strongly consistent estimators. The range of posterior expectations of an arbitrary function $b$ on the parameter space is asymptotically $b_N \pm \alpha\sigma_N + o(\sigma_N)$, where $b_N$ and $\sigma^2_N$ are the posterior mean and variance of $b$ induced by the upper prior measure $U$, and where $\alpha$ is a constant determined by the density of $L$ with respect to $U$, reflecting the uncertainty about the prior.
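The transformation of a prior interval into a posterior interval can be illustrated numerically. The sketch below is not the paper's construction: it assumes a finite parameter space, where the posterior expectation of $b$ under a prior with weights $q$ is a ratio of linear functions of $q$, so its extremes over the box $L \leq q \leq U$ (componentwise) are attained at vertices with each $q(\theta) \in \{L(\theta), U(\theta)\}$, which can simply be enumerated. All numerical values are illustrative.

```python
# Hedged sketch (finite parameter space, not the paper's general construction):
# the posterior mean of b is linear-fractional in the prior weights q, so its
# range over the box L <= q <= U is attained at vertices, which we enumerate.
from itertools import product

def posterior_expectation(q, like, b):
    """Posterior mean of b under prior weights q and likelihood values like."""
    num = sum(qi * li * bi for qi, li, bi in zip(q, like, b))
    den = sum(qi * li for qi, li in zip(q, like))
    return num / den

def posterior_range(L, U, like, b):
    """Lower and upper posterior expectations over all priors q with L <= q <= U."""
    vals = [posterior_expectation(q, like, b) for q in product(*zip(L, U))]
    return min(vals), max(vals)

# Illustrative parameter space theta in {0, 1, 2}, with b(theta) = theta.
L = [0.5, 0.5, 0.5]      # lower prior measure
U = [1.0, 1.0, 1.0]      # upper prior measure (priors need not be normalized)
like = [0.1, 0.6, 0.3]   # likelihood f(x | theta) for the observed x
b = [0.0, 1.0, 2.0]
lo, hi = posterior_range(L, U, like, b)
print(lo, hi)
```

When $L = U$ the interval collapses to a single prior and the two bounds coincide with the ordinary posterior mean; widening the gap between $L$ and $U$ widens the range of posterior expectations, in line with the $b_N \pm \alpha\sigma_N$ asymptotics above.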
First available in Project Euclid: 12 April 2007
DeRoberts, Lorraine; Hartigan, J. A. Bayesian Inference Using Intervals of Measures. Ann. Statist. 9 (1981), no. 2, 235--244. doi:10.1214/aos/1176345391. https://projecteuclid.org/euclid.aos/1176345391