Abstract
Suppose one is able to observe sequentially a series of independent observations $X_1, X_2, \ldots$ such that $X_1, X_2, \ldots, X_{\nu - 1}$ are i.i.d. with known density $f_0$ and $X_\nu, X_{\nu + 1}, \ldots$ are i.i.d. with density $f_\theta$, where $\nu$ is unknown. Define $R(n, \theta) = \sum_{k = 1}^n \prod_{i = k}^n \frac{f_\theta(X_i)}{f_0(X_i)}.$ It is known that rules that call for stopping and raising an alarm the first time $n$ that $R(n, \theta)$ (or a mixture thereof) exceeds a prespecified level $A$ are optimal methods of detecting that the density of the observations is no longer $f_0$. Practical applications of such stopping rules require knowledge of their operating characteristics, whose exact evaluation is difficult. Presented here are asymptotic $(A \rightarrow \infty)$ expressions for the expected stopping times of such stopping rules (a) when $\nu = \infty$ (no change ever occurs) and (b) when $\nu = 1$ (the change is in effect from the start). We assume that the densities $f_\theta$ form an exponential family and that the distribution of $\log(f_\theta(X_1)/f_0(X_1))$ is (strongly) nonlattice. Monte Carlo studies indicate that the asymptotic expressions are very good approximations, even when the expected sample sizes are small.
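To make the stopping rule concrete, below is a minimal simulation sketch, not code from the paper: it estimates the two expected stopping times discussed in the abstract for $N_A = \min\{n : R(n, \theta) \geq A\}$ in the normal mean-shift case $f_0 = N(0,1)$ versus $f_\theta = N(\theta,1)$, using the recursion $R_n = (1 + R_{n-1}) f_\theta(X_n)/f_0(X_n)$, $R_0 = 0$, which is algebraically identical to the sum-of-products definition of $R(n, \theta)$. The values of $\theta$ and $A$, the function names, and the Monte Carlo sizes are illustrative assumptions.

```python
# A minimal simulation sketch (not from the paper): estimates E[N_A] for the
# stopping rule N_A = min{n : R(n, theta) >= A}, in the normal mean-shift
# case f_0 = N(0, 1) vs. f_theta = N(theta, 1).  theta, A, and the Monte
# Carlo sizes below are illustrative assumptions.
import numpy as np

def run_length(theta: float, A: float, changed: bool,
               rng: np.random.Generator, max_n: int = 1_000_000) -> int:
    """Stopping time N_A = min{n : R(n, theta) >= A}.

    Uses the recursion R_n = (1 + R_{n-1}) * f_theta(X_n)/f_0(X_n), R_0 = 0,
    which is algebraically the same as the sum-of-products definition.
    changed=False simulates nu = infinity (every X_i ~ f_0);
    changed=True simulates nu = 1 (every X_i ~ f_theta).
    """
    R = 0.0
    for n in range(1, max_n + 1):
        x = rng.normal(theta if changed else 0.0, 1.0)
        # Likelihood ratio of N(theta, 1) vs. N(0, 1): exp(theta*x - theta^2/2).
        R = (1.0 + R) * np.exp(theta * x - 0.5 * theta * theta)
        if R >= A:
            return n
    return max_n  # truncated run; increase max_n in a real study

rng = np.random.default_rng(0)
theta, A, reps = 1.0, 200.0, 2000
arl_no_change = np.mean([run_length(theta, A, False, rng) for _ in range(reps)])
arl_change = np.mean([run_length(theta, A, True, rng) for _ in range(reps)])
print(f"estimated E[N_A], nu = infinity: {arl_no_change:.1f}")
print(f"estimated E[N_A], nu = 1:       {arl_change:.2f}")
```

As a rough sanity check on the $\nu = \infty$ estimate: each likelihood ratio has mean 1 under $f_0$, so $R_n - n$ is a martingale and, heuristically by optional stopping, $E[N_A] = E[R(N_A, \theta)] \geq A$; the paper's asymptotic expressions sharpen this by accounting for the overshoot of $R(N_A, \theta)$ over $A$.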
Citation
Moshe Pollak. "Average Run Lengths of an Optimal Method of Detecting a Change in Distribution." Ann. Statist. 15(2): 749–779, June 1987. https://doi.org/10.1214/aos/1176350373