March 2016 Optimal learning with non-Gaussian rewards
Zi Ding, Ilya O. Ryzhov
Adv. in Appl. Probab. 48(1): 112-136 (March 2016).

Abstract

We propose a novel theoretical characterization of the optimal 'Gittins index' policy in multi-armed bandit problems with non-Gaussian, infinitely divisible reward distributions. We first construct a continuous-time, conditional Lévy process which probabilistically interpolates the sequence of discrete-time rewards. When the rewards are Gaussian, this approach enables an easy connection to the convenient time-change properties of a Brownian motion. Although no such device is available in general for the non-Gaussian case, we use optimal stopping theory to characterize the value of the optimal policy as the solution to a free-boundary partial integro-differential equation (PIDE). We provide the free-boundary PIDE in explicit form under the specific settings of exponential and Poisson rewards. We also prove continuity and monotonicity properties of the Gittins index in these two problems, and discuss how the PIDE can be solved numerically to find the optimal index value of a given belief state.
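The paper computes Gittins indices by solving a free-boundary PIDE for exponential and Poisson rewards; as a much simpler illustration of what a Gittins index is, the sketch below approximates the index of a discrete-time Bernoulli arm with a Beta belief state, using the classical retirement-reward calibration (bisection on the retirement value at which the decision-maker is indifferent between continuing and stopping). This is not the paper's method; the function name, discount factor, and horizon are illustrative choices.

```python
import numpy as np

def gittins_index(a, b, gamma=0.9, horizon=60, tol=1e-4):
    """Approximate the Gittins index of a Beta(a, b) belief state for a
    Bernoulli-reward arm under discount factor gamma.

    Illustrative calibration: the index is the retirement reward lam at
    which continuing the arm and retiring (collecting lam forever) have
    equal value. We bracket it by bisection, checking each candidate lam
    with a finite-horizon backward dynamic program over the reachable
    posterior states Beta(a + successes, b + failures).
    """
    def continuing_beats_retiring(lam):
        retire = lam / (1.0 - gamma)
        # Terminal values: retire at the end of the horizon.
        V = np.full((horizon + 1, horizon + 1), retire)
        for t in range(horizon - 1, -1, -1):
            newV = np.full_like(V, retire)
            for i in range(t + 1):            # i successes, j failures so far
                j = t - i
                p = (a + i) / (a + i + b + j)  # posterior mean success prob.
                cont = p * (1.0 + gamma * V[i + 1][j]) \
                       + (1.0 - p) * gamma * V[i][j + 1]
                newV[i][j] = max(cont, retire)
            V = newV
        return V[0][0] > retire + tol

    lo, hi = 0.0, 1.0                          # Bernoulli rewards lie in [0, 1]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if continuing_beats_retiring(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The index exceeds the posterior mean (the gap is the value of exploration) and is monotone in the belief state, mirroring the continuity and monotonicity properties the paper establishes for the exponential and Poisson cases.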

Citation


Zi Ding, Ilya O. Ryzhov. "Optimal learning with non-Gaussian rewards." Adv. in Appl. Probab. 48(1): 112-136, March 2016.

Information

Published: March 2016
First available in Project Euclid: 8 March 2016

zbMATH: 1345.60039
MathSciNet: MR3473570

Subjects:
Primary: 60G40
Secondary: 60J75

Keywords: Gittins indices, multi-armed bandit, non-Gaussian rewards, optimal learning, probabilistic interpolation

Rights: Copyright © 2016 Applied Probability Trust

JOURNAL ARTICLE
25 PAGES

