Open Access
Woodroofe’s one-armed bandit problem revisited
Alexander Goldenshluger, Assaf Zeevi
Ann. Appl. Probab. 19(4): 1603-1633 (August 2009). DOI: 10.1214/08-AAP589

Abstract

We consider the one-armed bandit problem of Woodroofe [J. Amer. Statist. Assoc. 74 (1979) 799–806], which involves sequential sampling from two populations: one whose characteristics are known, and one which depends on an unknown parameter and incorporates a covariate. The goal is to maximize cumulative expected reward. We study this problem in a minimax setting, and develop rate-optimal policies that involve suitable modifications of the myopic rule. It is shown that the regret, as well as the rate of sampling from the inferior population, can be finite or grow at various rates with the time horizon of the problem, depending on “local” properties of the covariate distribution. Proofs rely on martingale methods and information theoretic arguments.
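The setting described above can be illustrated with a small simulation. The sketch below is not the paper's exact policy; it assumes a simple linear reward model (reward `theta * x` on the unknown arm, with `theta` unknown) and pairs a myopic rule with a short forced-exploration phase, which is one common way to modify the myopic rule. The names `run_bandit`, `mu0`, and the specific model are illustrative choices, not taken from the paper.

```python
import random

def run_bandit(theta=1.0, mu0=0.3, horizon=2000, noise=0.1, seed=42):
    """Illustrative one-armed bandit with a covariate (a sketch, not the
    paper's policy). Arm 0 has known mean mu0; arm 1 pays theta * x_t
    plus Gaussian noise, with theta unknown. A myopic rule, preceded by
    a short forced-exploration phase, pulls arm 1 whenever its estimated
    mean at the current covariate exceeds mu0."""
    rng = random.Random(seed)
    sxx = sxy = 0.0          # sufficient statistics for least squares
    regret = 0.0
    theta_hat = 0.0
    for t in range(horizon):
        x = rng.random()                      # covariate drawn uniformly on (0, 1)
        theta_hat = sxy / sxx if sxx > 0 else 0.0
        explore = t < 50                      # forced exploration of the unknown arm
        pull_unknown = explore or theta_hat * x > mu0
        if pull_unknown:
            y = theta * x + rng.gauss(0.0, noise)
            sxx += x * x                      # update least-squares statistics
            sxy += x * y
        # cumulative regret against an oracle that knows theta
        oracle = max(mu0, theta * x)
        chosen = theta * x if pull_unknown else mu0
        regret += oracle - chosen
    return regret, theta_hat
```

Running `run_bandit()` returns the cumulative regret and the final estimate of `theta`; the regret here accrues mainly near the decision boundary `theta * x = mu0`, which is why, as the abstract notes, local properties of the covariate distribution around that boundary govern the growth rate of the regret.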

Citation


Alexander Goldenshluger, Assaf Zeevi. "Woodroofe’s one-armed bandit problem revisited." Ann. Appl. Probab. 19 (4) 1603 - 1633, August 2009. https://doi.org/10.1214/08-AAP589

Information

Published: August 2009
First available in Project Euclid: 27 July 2009

zbMATH: 1168.62071
MathSciNet: MR2538082
Digital Object Identifier: 10.1214/08-AAP589

Subjects:
Primary: 62L05
Secondary: 60G40 , 62C20

Keywords: bandit problems, estimation, inferior sampling rate, minimax, online learning, rate-optimal policy, regret, sequential allocation

Rights: Copyright © 2009 Institute of Mathematical Statistics
