Advances in Applied Probability
- Adv. in Appl. Probab.
- Volume 48, Number 1 (2016), 112-136.
Optimal learning with non-Gaussian rewards
We propose a novel theoretical characterization of the optimal 'Gittins index' policy in multi-armed bandit problems with non-Gaussian, infinitely divisible reward distributions. We first construct a continuous-time, conditional Lévy process which probabilistically interpolates the sequence of discrete-time rewards. When the rewards are Gaussian, this approach enables an easy connection to the convenient time-change properties of a Brownian motion. Although no such device is available in general for the non-Gaussian case, we use optimal stopping theory to characterize the value of the optimal policy as the solution to a free-boundary partial integro-differential equation (PIDE). We provide the free-boundary PIDE in explicit form under the specific settings of exponential and Poisson rewards. We also prove continuity and monotonicity properties of the Gittins index in these two problems, and discuss how the PIDE can be solved numerically to find the optimal index value for a given belief state.
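To illustrate the kind of index computation the abstract describes, the sketch below approximates a Gittins index numerically. It does not implement the paper's PIDE approach or its exponential/Poisson settings; instead it uses the classical Bernoulli-reward, Beta-prior case with Whittle's retirement-option formulation, where the index is the retirement reward that makes the decision-maker indifferent between continuing and retiring. All function names, the horizon truncation, and the choice of discount factor are illustrative assumptions, not from the paper.

```python
from functools import lru_cache

def gittins_index_bernoulli(a, b, gamma=0.9, horizon=40, tol=1e-4):
    """Approximate the Gittins index of a Bernoulli arm with Beta(a, b)
    posterior, via bisection on the retirement reward m in Whittle's
    retirement formulation (finite-horizon truncation of the stopping
    problem). Illustrative sketch only -- not the paper's PIDE method."""

    def continue_minus_retire(m):
        # V(a', b', h): optimal value with h periods left, when the agent
        # may retire at any time for a per-period reward of m.
        @lru_cache(maxsize=None)
        def V(a_, b_, h):
            if h == 0:
                return 0.0
            retire = m * (1 - gamma**h) / (1 - gamma)
            p = a_ / (a_ + b_)  # posterior mean success probability
            cont = (p * (1 + gamma * V(a_ + 1, b_, h - 1))
                    + (1 - p) * gamma * V(a_, b_ + 1, h - 1))
            return max(retire, cont)

        # Compare pulling the arm once more vs. retiring immediately.
        retire_now = m * (1 - gamma**horizon) / (1 - gamma)
        p = a / (a + b)
        cont_now = (p * (1 + gamma * V(a + 1, b, horizon - 1))
                    + (1 - p) * gamma * V(a, b + 1, horizon - 1))
        return cont_now - retire_now

    # The index is the m at which continuing and retiring are indifferent.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if continue_minus_retire(mid) > 0:
            lo = mid  # continuing still preferred: index lies above mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For a uniform Beta(1, 1) prior and discount factor 0.9, this returns an index above the posterior mean of 0.5, reflecting the exploration bonus; the monotonicity property mentioned in the abstract shows up here as the index increasing in the posterior's optimism.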
First available in Project Euclid: 8 March 2016
Primary: 60G40: Stopping times; optimal stopping problems; gambling theory [See also 62L15, 91A60]
Secondary: 60J75: Jump processes
Ding, Zi; Ryzhov, Ilya O. Optimal learning with non-Gaussian rewards. Adv. in Appl. Probab. 48 (2016), no. 1, 112--136. https://projecteuclid.org/euclid.aap/1457466158