Improvements and Generalizations of Stochastic Knapsack and Markovian Bandits Approximation Algorithms
Will Ma
Operations Research Center, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
Mathematics of Operations Research, 2018, vol. 43, issue 3, 789-812
Abstract:
We study the multi-armed bandit problem with arms that are Markov chains with rewards. In the finite-horizon setting, the celebrated Gittins indices do not apply, and computing the exact solution is intractable. We provide approximation algorithms for the general model of Markov decision processes with nonunit transition times. When preemption is allowed, we provide a (1/2 − ε)-approximation, along with an example showing this is tight. When preemption is not allowed, we provide a 1/12-approximation, which improves to a 4/27-approximation when transition times are unity. Our model captures the Markovian Bandits model of Gupta et al., the Stochastic Knapsack model of Dean et al., and the Budgeted Learning model of Guha and Munagala, and our algorithms improve existing results in all three areas. In our analysis, we encounter and overcome what is, to our knowledge, a new obstacle: an algorithm that provably exists via analytical arguments, but cannot be found in polynomial time.
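To illustrate the finite-horizon model on a toy instance (this is not the paper's approximation algorithm, and the arms below are invented for illustration): each arm is a Markov chain, playing an arm in its current state earns that state's reward and moves the arm to a random next state, and the exact optimum can be computed by brute-force dynamic programming over the joint state, which is exponential in the number of arms — precisely the intractability that motivates approximation.

```python
from functools import lru_cache

# Hypothetical two-arm instance. Each arm: playing it in state s yields
# reward[s] and moves the arm per trans[s], a list of (next_state, prob).
arms = [
    {   # arm 0: pays nothing on the first play, then 5 per play forever
        "reward": [0.0, 5.0],
        "trans": [[(1, 1.0)], [(1, 1.0)]],
    },
    {   # arm 1: pays a steady 2 per play and never leaves state 0
        "reward": [2.0, 0.0],
        "trans": [[(0, 1.0)], [(1, 1.0)]],
    },
]

def optimal_value(horizon):
    """Exact expected reward of an optimal preemptive policy,
    by dynamic programming over the joint arm state (exponential)."""
    @lru_cache(maxsize=None)
    def V(t, states):
        if t == 0:
            return 0.0
        best = 0.0
        for i, s in enumerate(states):
            arm = arms[i]
            # expected reward of playing arm i now, then acting optimally
            ev = arm["reward"][s]
            for nxt, p in arm["trans"][s]:
                ns = list(states)
                ns[i] = nxt
                ev += p * V(t - 1, tuple(ns))
            best = max(best, ev)
        return best
    return V(horizon, tuple(0 for _ in arms))
```

With a horizon of 1 the steady arm is best (value 2), but with a horizon of 3 it is optimal to invest in arm 0 (0 + 5 + 5 = 10) — the finite horizon, not any stationary index, dictates the choice, which is why Gittins indices fail here.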
Keywords: approximation algorithms; stochastic knapsack; Markovian multi-armed bandit; stochastic programming
Date: 2018
Citations: 1
Downloads: https://doi.org/10.1287/moor.2017.0884 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:ormoor:v:43:y:2018:i:3:p:789-812
More articles in Mathematics of Operations Research from INFORMS.
Bibliographic data for series maintained by Chris Asher.