## The Annals of Statistics

### Bayesian Inference Using Intervals of Measures

#### Abstract

Partial prior knowledge is quantified by an interval $I(L, U)$ of $\sigma$-finite prior measures $Q$ satisfying $L(A) \leq Q(A) \leq U(A)$ for all measurable sets $A$, and is interpreted as acceptance of a family of bets. The concept of conditional probability distributions is generalized to that of conditional measures, and Bayes' theorem is extended to accommodate unbounded priors. By Bayes' theorem, the interval $I(L, U)$ of prior measures is transformed upon observing $X$ into a similar interval $I(L_x, U_x)$ of posterior measures. Upper and lower expectations and variances induced by such intervals of measures are obtained. Under weak regularity conditions, as the amount of data increases, these upper and lower posterior expectations are strongly consistent estimators. The range of posterior expectations of an arbitrary function $b$ on the parameter space is asymptotically $b_N \pm \alpha\sigma_N + o(\sigma_N)$, where $b_N$ and $\sigma^2_N$ are the posterior mean and variance of $b$ induced by the upper prior measure $U$, and where $\alpha$ is a constant, determined by the density of $L$ with respect to $U$, that reflects the uncertainty about the prior.
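As a rough numerical illustration (a sketch, not an algorithm from the paper), the range of posterior expectations over such an interval of priors can be computed on a discrete parameter grid. Any prior density $q$ with $l \le q \le u$ pointwise is admitted, and the extremal posterior expectation of $b$ is found by bisection: for a candidate bound $t$, the maximizing prior places the upper density $u$ where $b > t$ and the lower density $l$ elsewhere (and symmetrically for the minimum). The function name and grid setup below are illustrative assumptions.

```python
import numpy as np

def posterior_mean_range(l, u, lik, b, iters=100):
    """Range [inf, sup] of posterior expectations of b over all prior
    densities q with l <= q <= u pointwise, on a discrete grid.

    l, u  : lower/upper prior density values on the grid
    lik   : likelihood values f(x | theta) on the grid
    b     : values of the function b(theta) on the grid
    """
    def extreme(upper):
        lo, hi = float(b.min()), float(b.max())  # any weighted mean of b lies here
        for _ in range(iters):
            t = 0.5 * (lo + hi)
            # extremal prior: put upper mass where it pushes E[b | x] past t
            mask = (b > t) if upper else (b <= t)
            q = np.where(mask, u, l)
            # g(t) = sum (b - t) * lik * q is decreasing in t; its root is the bound
            if np.sum((b - t) * lik * q) > 0:
                lo = t
            else:
                hi = t
        return 0.5 * (lo + hi)

    return extreme(False), extreme(True)

# Illustrative setup: normal likelihood on a grid, priors between 1 and 2.
theta = np.linspace(-3, 3, 601)
lik = np.exp(-0.5 * (theta - 1.0) ** 2)
ones = np.ones_like(theta)
lo_b, hi_b = posterior_mean_range(ones, 2 * ones, lik, theta)
```

When $l = u$ the class contains a single prior and the two bounds collapse to the ordinary posterior mean; widening the gap between $l$ and $u$ widens the interval $[\text{lo\_b}, \text{hi\_b}]$, consistent with the $b_N \pm \alpha\sigma_N$ asymptotics stated in the abstract.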

#### Article information

**Source:** Ann. Statist., Volume 9, Number 2 (1981), 235–244.

**Dates:** First available in Project Euclid: 12 April 2007

https://projecteuclid.org/euclid.aos/1176345391

**Digital Object Identifier:** doi:10.1214/aos/1176345391

**Mathematical Reviews number (MathSciNet):** MR606609
