Open Access
April 2018
On Bayesian index policies for sequential resource allocation
Emilie Kaufmann
Ann. Statist. 46(2): 842-865 (April 2018). DOI: 10.1214/17-AOS1569

Abstract

This paper is about index policies for minimizing (frequentist) regret in a stochastic multi-armed bandit model, inspired by a Bayesian view on the problem. Our main contribution is to prove that the Bayes-UCB algorithm, which relies on quantiles of posterior distributions, is asymptotically optimal when the reward distributions belong to a one-dimensional exponential family, for a large class of prior distributions. We also show that the Bayesian literature gives new insight into the exploration rates that can be used in frequentist, UCB-type algorithms. Indeed, approximations of the Bayesian optimal solution or the Finite-Horizon Gittins indices provide a justification for the kl-UCB$^{+}$ and kl-UCB-H$^{+}$ algorithms, whose asymptotic optimality is also established.
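To illustrate the kind of posterior-quantile index the abstract describes, here is a minimal sketch of a Bayes-UCB-style policy for the simplest exponential-family case: Gaussian rewards with known variance and a flat (improper) prior, so the posterior on each arm has a closed-form quantile. The function name, the choice of a Gaussian model, and the exact exploration rate 1/(t log(t)^c) are illustrative assumptions for this sketch, not the paper's precise construction.

```python
import math
import random
from statistics import NormalDist

def bayes_ucb_gaussian(means, sigma=1.0, horizon=1000, c=0, seed=0):
    """Sketch of a Bayes-UCB-style policy for Gaussian arms with known
    variance sigma^2 and a flat prior: after n_a pulls, the posterior on
    arm a's mean is N(empirical mean, sigma^2 / n_a).  At time t the arm
    maximizing the posterior quantile of order 1 - 1/(t * log(t)^c) is
    pulled.  (Illustrative choices, not the paper's exact algorithm.)"""
    rng = random.Random(seed)
    K = len(means)
    counts = [0] * K          # number of pulls per arm
    sums = [0.0] * K          # cumulative reward per arm
    regret = 0.0
    best = max(means)
    for t in range(1, horizon + 1):
        if t <= K:
            a = t - 1         # pull each arm once to initialize
        else:
            # Posterior quantile level shrinks toward 1 as t grows,
            # which drives the logarithmic exploration.
            level = 1.0 - 1.0 / (t * max(math.log(t), 1.0) ** c)
            z = NormalDist().inv_cdf(level)
            a = max(range(K),
                    key=lambda i: sums[i] / counts[i]
                                  + z * sigma / math.sqrt(counts[i]))
        r = rng.gauss(means[a], sigma)
        counts[a] += 1
        sums[a] += r
        regret += best - means[a]   # expected regret of this pull
    return regret, counts
```

Run on three arms with means (0.0, 0.5, 1.0), the best arm quickly accumulates the vast majority of the pulls, and the cumulative regret grows only logarithmically in the horizon, in line with the asymptotic optimality discussed in the paper.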

Citation


Emilie Kaufmann. "On Bayesian index policies for sequential resource allocation." Ann. Statist. 46(2): 842-865, April 2018. https://doi.org/10.1214/17-AOS1569

Information

Received: 1 September 2016; Revised: 1 March 2017; Published: April 2018
First available in Project Euclid: 3 April 2018

zbMATH: 06870281
MathSciNet: MR3782386
Digital Object Identifier: 10.1214/17-AOS1569

Subjects:
Primary: 62L05

Keywords: Bayesian methods, Gittins indices, multi-armed bandit problems, upper-confidence bounds

Rights: Copyright © 2018 Institute of Mathematical Statistics
