Open Access
Reading policies for joins: An asymptotic analysis
Ralph P. Russo, Nariankadu D. Shyamalkumar
Ann. Appl. Probab. 17(1): 230-264 (February 2007). DOI: 10.1214/105051606000000646

Abstract

Suppose that $m_n$ observations are made from the distribution R and $n-m_n$ from the distribution S. Associate with each pair, x from R and y from S, a nonnegative score ϕ(x, y). An optimal reading policy is one that yields a sequence $m_n$ that maximizes $\mathbb{E}(M(n))$, the expected sum of the $(n-m_n)m_n$ observed scores, uniformly in n. The alternating policy, which switches between the two sources, is the optimal nonadaptive policy. In contrast, the greedy policy, which chooses its source to maximize the expected gain on the next step, is shown to be the optimal policy. Asymptotics are provided for the case where the R and S distributions are discrete and ϕ(x, y)=1 or 0 according as x=y or not (i.e., the observations match). Specifically, an invariance result is proved which guarantees that for a wide class of policies, including the alternating and the greedy, the variable M(n) obeys the same CLT and LIL. A more delicate analysis of the sequence $\mathbb{E}(M(n))$ and the sample paths of M(n), for both alternating and greedy, reveals the slender sense in which the latter policy is asymptotically superior to the former, as well as a sense of equivalence of the two and robustness of the former.
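To make the two policies concrete in the matching case ϕ(x, y) = 1{x = y}, the following minimal simulation sketch contrasts them; the function names, the dict-based representation of R and S, and the tie-breaking toward R in the greedy rule are illustrative assumptions, not constructions from the paper. Reading a value v from one source creates one new matching pair for every previously read observation of v from the other source, so the greedy policy reads from whichever source has the larger expected count of such matches on the next step.

```python
import random
from collections import Counter

def expected_gain(probs, opposite_counts):
    # Expected number of new matching pairs from one draw out of the
    # distribution `probs` (value -> probability): a draw of value v
    # matches every already-read observation of v on the other side.
    return sum(p * opposite_counts[v] for v, p in probs.items())

def run_policy(n, pR, pS, policy, seed=0):
    """Read n observations in total and return M(n), the number of
    matching pairs under the score phi(x, y) = 1 if x == y, else 0."""
    rng = random.Random(seed)
    count_R, count_S = Counter(), Counter()
    vals_R, probs_R = zip(*pR.items())
    vals_S, probs_S = zip(*pS.items())
    score = 0
    for step in range(n):
        if policy == "alternating":
            read_R = (step % 2 == 0)   # switch sources every step
        else:                          # greedy: maximize next-step gain
            read_R = expected_gain(pR, count_S) >= expected_gain(pS, count_R)
        if read_R:
            x = rng.choices(vals_R, weights=probs_R)[0]
            score += count_S[x]        # new pairs formed with the S side
            count_R[x] += 1
        else:
            y = rng.choices(vals_S, weights=probs_S)[0]
            score += count_R[y]        # new pairs formed with the R side
            count_S[y] += 1
    return score
```

For example, when both distributions are degenerate at a single value, n reads split as $m_n$ and $n-m_n$ yield exactly $m_n(n-m_n)$ matching pairs, so n = 4 gives M(4) = 2 · 2 = 4 under either policy.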

Citation


Ralph P. Russo, Nariankadu D. Shyamalkumar. "Reading policies for joins: An asymptotic analysis." Ann. Appl. Probab. 17(1): 230-264, February 2007. https://doi.org/10.1214/105051606000000646

Information

Published: February 2007
First available in Project Euclid: 13 February 2007

zbMATH: 1163.90789
MathSciNet: MR2292586
Digital Object Identifier: 10.1214/105051606000000646

Subjects:
Primary: 90C40
Secondary: 60F05 , 60F15 , 60G40

Keywords: bandit problems , greedy policies , Markov decision process , tax problem

Rights: Copyright © 2007 Institute of Mathematical Statistics
