Optimistic Gittins Indices
Vivek F. Farias and Eli Gutin
Additional contact information
Vivek F. Farias: Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts 02142
Eli Gutin: Uber Technologies, Inc., San Francisco, California 94518
Operations Research, 2022, vol. 70, issue 6, 3432-3456
Abstract:
Recent years have seen a resurgence of interest in Bayesian algorithms for the multiarmed bandit (MAB) problem, such as Thompson sampling. These algorithms seek to exploit prior information on arm biases, and their empirically observed performance makes them a compelling alternative to their frequentist counterparts. Nonetheless, there appears to be a wide range in empirical performance among such Bayesian algorithms. These algorithms also vary substantially in their design (as opposed to being variations on a theme). In contrast, if one cared about Bayesian regret discounted over an infinite horizon at a fixed, prespecified rate, the celebrated Gittins index theorem offers an optimal algorithm. Unfortunately, the Gittins analysis does not appear to carry over to minimizing Bayesian regret over all sufficiently large horizons, and computing a Gittins index is onerous relative to essentially any incumbent index scheme for the Bayesian MAB problem. The present paper proposes a tightening sequence of optimistic approximations to the Gittins index. We show that the use of these approximations, in concert with an increasing discount factor, appears to offer a compelling alternative to state-of-the-art index schemes proposed for the Bayesian MAB problem in recent years. We prove that these optimistic indices constitute a regret-optimal algorithm, in the sense of meeting the Lai-Robbins lower bound, including matching constants. Perhaps more interestingly, the use of even the loosest of these approximations appears to offer substantial performance improvements over state-of-the-art alternatives (including Thompson sampling, information-directed sampling, and the Bayes UCB algorithm) while incurring little to no additional computational overhead relative to the simplest of these alternatives.
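As a rough illustration (not taken from this record), the loosest of these approximations, a one-step optimistic index, can be sketched for a Bernoulli arm with a Beta(a, b) posterior. The fixed-point form below, lam = mu0 + (gamma / (1 - gamma)) * E[(mu1 - lam)^+], where mu1 is the posterior mean after one more pull, and the function name `ogi1_index` are assumptions of this sketch; note also that the paper pairs such indices with an increasing discount factor, whereas this sketch fixes gamma for simplicity.

```python
def ogi1_index(a, b, gamma=0.9, tol=1e-10):
    """Sketch of a one-step optimistic index for a Bernoulli arm
    with a Beta(a, b) posterior and fixed discount factor gamma.

    Solves  lam = mu0 + (gamma / (1 - gamma)) * E[(mu1 - lam)^+]
    by bisection, where mu1 is the posterior mean after one pull.
    """
    mu0 = a / (a + b)                # current posterior mean
    mu_up = (a + 1) / (a + b + 1)    # posterior mean after a success
    mu_dn = a / (a + b + 1)          # posterior mean after a failure
    c = gamma / (1.0 - gamma)

    def gap(lam):
        # lam minus the right-hand side; increasing in lam, so the
        # fixed point is the unique root in [mu0, 1].
        exp_plus = (mu0 * max(0.0, mu_up - lam)
                    + (1.0 - mu0) * max(0.0, mu_dn - lam))
        return lam - (mu0 + c * exp_plus)

    lo, hi = mu0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gap(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In an index policy of this kind, each round one would compute the index for every arm from its current (a, b) counts and pull the arm with the largest index; the index exceeds the posterior mean, which is what encodes the optimism.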
Keywords: Decision Analysis; multiarmed bandits; Gittins index; online learning
Date: 2022
Downloads: http://dx.doi.org/10.1287/opre.2021.2207
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:70:y:2022:i:6:p:3432-3456