## The Annals of Mathematical Statistics

### Sequential Discrimination with Likelihood Ratios

Charles Yarbrough

#### Abstract

Let $X_1, X_2, \cdots$ be independent, identically distributed random variables. You observe the $X$'s sequentially, knowing that their distribution is one of countably many different probabilities. Within an arbitrary error level, can you decide which one? This is the general problem of sequential discrimination. Freedman showed that the discriminability of a family $\Theta$ is equivalent to a seemingly weaker condition: for any error level $\alpha$ and any particular $\theta \in \Theta$ there is a uniformly powerful fixed sample size test of $\{\theta\}$ versus $\Theta - \{\theta\}$ with error level uniformly as small as $\alpha$. The proof is constructive: given the fixed sample size tests, there is a recipe for manufacturing a sequential procedure to decide among all the members of $\Theta$. The fixed sample size tests are, however, still required. Hoeffding and Wolfowitz considered this problem at length; LeCam and Schwartz also touched upon it briefly. Both papers considered separations in various topologies and structures.

Here we return to the original problem and ask whether likelihood ratios can be sensibly used. A rule is easy to specify. For each $\theta \in \Theta$ pick a number greater than one. Now watch $X_1, X_2, \cdots$. At each step compute the likelihood ratio for every pair of probabilities. Eventually it may happen that for some $\theta$ all the ratios with $\theta$ in the numerator are as big as the pre-assigned number. If so, stop and declare $\theta$ to be the true distribution. This rule is the extension to the countable case of the general sequential probability ratio test proposed by Barnard and detailed by Armitage. It does require the computation of all the likelihood ratios; but since $\Theta$ is countable, there is always at least one base measure for calculating densities, and any one will do. Likelihood ratio procedures have the advantage of being easy to formulate, and the comparison of densities seems to be a reasonably natural technique. However, they do not always work; an example will illustrate that they may fail spectacularly.

When do likelihood ratio procedures work? The principal result is a characterization of the families which are likelihood ratio discriminable. Check each probability $\theta$ separately: there may be a number $K(\theta)$ bigger than one which will eventually be exceeded simultaneously by all the ratios with $\theta$ in the numerator. If not, likelihood ratios will not work. If so, then the values $K(\theta)$ may be chosen so as to limit the error to any desired level. Despite their failures, likelihood ratio procedures may work when other natural conditions fail. Freedman showed that if each $\theta \in \Theta$ is isolated in the topology of setwise convergence, then $\Theta$ is discriminable. The converse is false: there is a family which is likelihood ratio discriminable but which has one element in the setwise closure of all the others. This is a direct consequence of a recent theorem of LeCam. Finally, with many familiar families likelihood ratio procedures have finite expected stopping time. Cases vary, however, and there is a discriminable family which has infinite expected stopping time for sampling under one of its elements.
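The stopping rule described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's construction: it assumes a finite list of candidate distributions given by their densities (here Bernoulli mass functions), tracks log-likelihoods, and stops when some candidate's likelihood ratio against every rival reaches its threshold $K(\theta)$; the function names `lr_discriminate` and `bern` are hypothetical.

```python
import math

def lr_discriminate(sample_stream, densities, K):
    """Sequential likelihood-ratio discrimination (Barnard/Armitage style).

    densities: list of functions p_i(x), the density/mass of candidate
               theta_i at x (all assumed strictly positive on the data).
    K:         list of thresholds K(theta_i) > 1, one per candidate.

    Stops and returns index i as soon as, for every rival j != i,
        prod_t p_i(x_t) / prod_t p_j(x_t) >= K[i],
    i.e. the likelihood ratio with theta_i in the numerator exceeds
    its pre-assigned number against all other candidates.
    Returns None if the stream is exhausted first.
    """
    loglik = [0.0] * len(densities)
    for x in sample_stream:
        # Update each candidate's log-likelihood with the new observation.
        for i, p in enumerate(densities):
            loglik[i] += math.log(p(x))
        # Check whether some candidate dominates every rival by log K(theta).
        for i in range(len(densities)):
            if all(loglik[i] - loglik[j] >= math.log(K[i])
                   for j in range(len(densities)) if j != i):
                return i
    return None

# Usage: three Bernoulli candidates; a stream of all 1's should
# eventually select the p = 0.8 candidate (index 2).
def bern(p):
    return lambda x: p if x == 1 else 1.0 - p

candidates = [bern(0.2), bern(0.5), bern(0.8)]
result = lr_discriminate(iter([1] * 50), candidates, [20.0, 20.0, 20.0])
```

Raising the thresholds $K(\theta)$ trades a longer expected stopping time for a smaller error probability, which is the tuning the characterization in the abstract concerns.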

#### Article information

**Source**
Ann. Math. Statist., Volume 42, Number 4 (1971), 1339-1347.

**Dates**
First available in Project Euclid: 27 April 2007

**Permanent link to this document**
https://projecteuclid.org/euclid.aoms/1177693246

**Digital Object Identifier**
doi:10.1214/aoms/1177693246

**Mathematical Reviews number (MathSciNet)**
MR297064

**Zentralblatt MATH identifier**
0226.62061
