Open Access
A Sequential Search Procedure
Milton C. Chew Jr.
Ann. Math. Statist. 38(2): 494-502 (April, 1967). DOI: 10.1214/aoms/1177698965

Abstract

Many optimization problems in mathematics and statistics are concerned with a quantity, or procedure, that yields some optimum value rather than with the value itself. Frequently they are referred to as searching problems. In this paper we consider a problem of this nature, described as follows. An object to be found is located in one of $R$ locations. A prior probability $p_i$ that the object is in $i$ is given $(\sum_i p_i = 1)$, along with an overlook probability $\alpha_i$ that if the object is in $i$, it is not found there on a given inspection of $i$ $(0 < \alpha_i < 1, i = 1, \cdots, R)$. The $R$ locations, or boxes, can be searched one at a time, and it is assumed that all outcomes are independent, conditional on the location of the object and the inspection procedure used. Consequently, if the object is in $i$, the realization of the `first success', the detection of the object, follows a geometric distribution with parameter $1 - \alpha_i$. A search procedure $\delta = (\delta_1, \delta_2, \cdots)$ is a sequence indicating the location to be inspected at each stage of the search, and one is interested in a $\delta$ that is in some sense optimal. In his notes on dynamic programming, Blackwell (see [3]) has shown that if, in addition, an inspection of $i$ costs $c_i$, the procedure minimizing the expected searching cost is a one-stage procedure, which instructs the searcher to inspect at each stage the box for which the ratio of the current detection probability to the cost of an inspection there is greatest. It is assumed here that $c_i = 1, i = 1, \cdots, R$. If $\delta^\ast = (\delta^\ast_1, \delta^\ast_2, \cdots)$ denotes an optimal one-stage procedure and $\delta^\ast_n = j$, then the detection probability for $j$ on the $n$th inspection is \begin{equation*}\tag{1.1} p_j\alpha_j^{m(j,n-1,\delta^\ast)}(1 - \alpha_j) = \max_i \{p_i\alpha_i^{m(i,n-1,\delta^\ast)}(1 - \alpha_i)\},\end{equation*} where $m(i, n, \delta)$ is the number of inspections of $i$ among the first $n$ inspections made using $\delta$.

In the next section $\delta^\ast$ is shown to be strongly optimal in the sense that \begin{equation*}\tag{1.2}P\lbrack N > n \mid \delta^\ast\rbrack \leqq P\lbrack N > n \mid \delta\rbrack\end{equation*} for each $n = 0, 1, 2, \cdots$ and every search procedure $\delta$. (Here $N$ denotes the (random) number of inspections required to find the object.) The use of procedure $\delta^\ast$ thus ensures the greatest chance of finding the object within any fixed number of inspections, or, according to B. O. Koopman [2], it "optimally allocates the available search effort $\Phi$" for any number $\Phi$ of inspections. As a consequence of (1.2), $\delta^\ast$ also minimizes the expected `cost' of the search (since $E\lbrack N \mid \delta\rbrack = \sum^\infty_{n = 0} P\lbrack N > n \mid \delta\rbrack$), which is Blackwell's result for this case.

Among the many interesting problems which arise in connection with this searching problem is the question of periodic features of the optimal procedure. Staroverov [5] and Matula [3] concerned themselves with this question, which was virtually settled by Matula, who found necessary and sufficient conditions ensuring ultimate periodicity. We attempt to answer a different question in Section 3, where the same search problem is considered with the modification that $\sum_i p_i = 1 - q < 1$. That is, the object is in one of $R + 1$ locations, but searching is permitted only among the first $R$.
(For instance, the $(R + 1)$st location could be the rest of the world.) In this problem every search procedure has positive probability of never terminating, making it necessary to couple a stopping rule $s$ (integer-valued) with any procedure $\delta$. A loss function is defined by imposing on the searcher a penalty cost $c\,(>0)$, payable when searching stops if the object has not been found. Thus one either pays the cost of the unsuccessful inspections plus the penalty cost, or simply the cost of inspection if the object is found prior to stopping. A procedure $(\delta, s)$ is sought which minimizes the expected cost to the searcher, i.e. which yields the Bayes risk. Such a procedure exists and is shown to be $(\delta^\ast, s^\ast)$, where $\delta^\ast$ is a strongly optimal procedure satisfying (1.2). The determination of $s^\ast$ is the more difficult problem, however, and the main part of Section 3 is devoted to it. Instead of $s^\ast$ we consider the problem of finding the Bayes stopping region $S_B$, that set of posterior probabilities $p = (p_1, \cdots, p_R)$ for which the Bayes risk equals the penalty cost $c$. Regions $S_L$ and $S_U$ are given which bound $S_B$ in the sense that $S_L \subset S_B \subset S_U$. In Section 4 it is shown that for sufficiently large $c$, $S_L$ and $S_U$ `differ' by at most $R$ inspections taken according to $\delta^\ast$, and no more than one additional inspection is required in each of the $R$ boxes. If $(\delta^\ast, s_U)$ is the procedure which searches according to $\delta^\ast$ but stops once the posterior distribution $p \in S_U$, then a constant independent of $c$ is exhibited which bounds the difference between the Bayes risk and the expected risk using $(\delta^\ast, s_U)$.
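To make the one-stage rule (1.1) concrete, the following is a minimal Python sketch of searching with $\delta^\ast$ under the abstract's assumptions (unit inspection costs, outcomes independent given the object's location). The function names, the sequential Bayes update in posterior, and the simulation harness are illustrative additions, not anything taken from the paper; the leftover prior mass $q = 1 - \sum_i p_i$ plays the role of the unsearchable $(R + 1)$st location of Section 3.

    import random

    def next_box(p, alpha, m):
        """Eq. (1.1): pick the box j maximizing p_j * alpha_j**m_j * (1 - alpha_j),
        the current (unnormalized) probability of detecting the object there."""
        return max(range(len(p)),
                   key=lambda i: p[i] * alpha[i] ** m[i] * (1 - alpha[i]))

    def posterior(p, alpha, j):
        """Bayes update of the location probabilities after one unsuccessful
        inspection of box j (standard conditioning; not stated explicitly in
        the abstract).  The total mass stays below 1 when q = 1 - sum(p) > 0."""
        denom = 1.0 - p[j] * (1.0 - alpha[j])       # P(no detection on this look)
        return [(pi * alpha[j] if i == j else pi) / denom for i, pi in enumerate(p)]

    def simulate(p, alpha, max_inspections=10_000, seed=0):
        """Simulate searching with the one-stage procedure delta*.
        Returns the number N of inspections needed, or None if the budget runs
        out (e.g. the object sits in the unsearchable (R+1)st location)."""
        rng = random.Random(seed)
        # Place the object according to the prior; leftover mass q corresponds
        # to the unsearchable location, encoded here as None.
        u, acc, location = rng.random(), 0.0, None
        for i, pi in enumerate(p):
            acc += pi
            if u < acc:
                location = i
                break

        m = [0] * len(p)                            # inspections of each box so far
        post = list(p)                              # current posterior (illustrative)
        for n in range(1, max_inspections + 1):
            j = next_box(p, alpha, m)               # same j as argmax of post[i]*(1-alpha[i])
            m[j] += 1
            if j == location and rng.random() > alpha[j]:
                return n                            # detected on the n-th inspection
            post = posterior(post, alpha, j)        # would be checked against S_U in Section 3
        return None

    if __name__ == "__main__":
        # Hypothetical numbers: R = 3 searchable boxes, prior mass 0.9 (so q = 0.1).
        print(simulate(p=[0.5, 0.3, 0.1], alpha=[0.6, 0.2, 0.4]))

Since the posterior after a run of unsuccessful inspections is proportional to $p_i\alpha_i^{m(i,n-1,\delta^\ast)}$, the maximization in next_box picks the box with the largest posterior detection probability, which is why tracking only the inspection counts $m$ suffices.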

Citation


Milton C. Chew Jr. "A Sequential Search Procedure." Ann. Math. Statist. 38 (2) 494 - 502, April, 1967. https://doi.org/10.1214/aoms/1177698965

Information

Published: April, 1967
First available in Project Euclid: 27 April 2007

zbMATH: 0168.17106
MathSciNet: MR207444
Digital Object Identifier: 10.1214/aoms/1177698965

Rights: Copyright © 1967 Institute of Mathematical Statistics

Vol. 38 • No. 2 • April, 1967