Annals of Statistics (Ann. Statist.), Volume 41, Number 2 (2013), 693-721.
The multi-armed bandit problem with covariates
We consider a multi-armed bandit problem in a setting where each arm produces a noisy reward realization which depends on an observable random covariate. As opposed to the traditional static multi-armed bandit problem, this setting allows for dynamically changing rewards that better describe applications where side information is available. We adopt a nonparametric model where the expected rewards are smooth functions of the covariate and where the hardness of the problem is captured by a margin parameter. To maximize the expected cumulative reward, we introduce a policy called Adaptively Binned Successive Elimination (ABSE) that adaptively decomposes the global problem into suitably “localized” static bandit problems. This policy constructs an adaptive partition using a variant of the Successive Elimination (SE) policy. Our results include sharper regret bounds for the SE policy in a static bandit problem and minimax optimal regret bounds for the ABSE policy in the dynamic problem.
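To make the elimination mechanism concrete, here is a minimal Python sketch of a generic Successive Elimination policy for a static K-armed bandit, the building block that ABSE runs locally on the cells of an adaptive partition of the covariate space. The function names, the Hoeffding-style confidence radius, and the round-robin sampling schedule are illustrative assumptions, not the exact policy or constants analyzed in the paper.

    import math
    import random

    def successive_elimination(pull, K, T, delta=0.05):
        # Keep a set of active arms; sample them round-robin and drop
        # any arm whose upper confidence bound falls below the lower
        # confidence bound of the empirically best arm.
        active = list(range(K))
        means = [0.0] * K
        counts = [0] * K
        t = 0
        while t < T and len(active) > 1:
            for arm in list(active):
                if t >= T:
                    break
                r = pull(arm)                 # noisy reward in [0, 1]
                counts[arm] += 1
                means[arm] += (r - means[arm]) / counts[arm]  # running mean
                t += 1
            def radius(arm):
                # Hoeffding-style confidence radius with a crude union bound;
                # the exact threshold is an assumption for this sketch.
                if counts[arm] == 0:
                    return float("inf")
                return math.sqrt(math.log(2.0 * K * T / delta) / (2.0 * counts[arm]))
            best = max(active, key=lambda a: means[a])
            active = [a for a in active
                      if means[a] + radius(a) >= means[best] - radius(best)]
        return active, means, counts

    # Toy usage: two Bernoulli arms with means 0.6 and 0.5; the
    # suboptimal arm is eventually eliminated with high probability.
    rng = random.Random(0)
    truth = [0.6, 0.5]
    pull = lambda a: 1.0 if rng.random() < truth[a] else 0.0
    print(successive_elimination(pull, K=2, T=20000)[0])

In ABSE, a variant of this routine would additionally decide, cell by cell, whether to keep playing within the current cell or to split it and recurse, so that the partition refines exactly where the arms remain hard to distinguish.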
First available in Project Euclid: 26 April 2013
Perchet, Vianney; Rigollet, Philippe. The multi-armed bandit problem with covariates. Ann. Statist. 41 (2013), no. 2, 693-721. doi:10.1214/13-AOS1101. https://projecteuclid.org/euclid.aos/1366980562