Linear Program-Based Policies for Restless Bandits: Necessary and Sufficient Conditions for (Exponentially Fast) Asymptotic Optimality
Nicolas Gast,
Bruno Gaujal and
Chen Yan
Additional contact information
Nicolas Gast: University of Grenoble Alpes, Institut national de recherche en informatique et en automatique, Centre national de la recherche scientifique, Grenoble Institut national polytechnique, Laboratoire d’informatique de Grenoble, 38000 Grenoble, France
Bruno Gaujal: University of Grenoble Alpes, Institut national de recherche en informatique et en automatique, Centre national de la recherche scientifique, Grenoble Institut national polytechnique, Laboratoire d’informatique de Grenoble, 38000 Grenoble, France
Chen Yan: STATIFY, Institut national de recherche en informatique et en automatique, 38334 Saint Ismier, France; Biostatistics and Spatial Processes, Institut national de recherche pour l’agriculture, l’alimentation et l’environnement, 84914 Avignon, France
Mathematics of Operations Research, 2024, vol. 49, issue 4, 2468-2491
Abstract:
We provide a framework to analyze control policies for the restless Markovian bandit model under both finite and infinite time horizons. We show that when the population of arms goes to infinity, the value of the optimal control policy converges to the solution of a linear program (LP). We provide necessary and sufficient conditions for a generic control policy to be (i) asymptotically optimal, (ii) asymptotically optimal with square root convergence rate, and (iii) asymptotically optimal with exponential rate. We then construct the LP-index policy that is asymptotically optimal with square root convergence rate on all models and with exponential rate if the model is nondegenerate in finite horizon and satisfies a uniform global attractor property in infinite horizon. We next define the LP-update policy, which is essentially a repeated LP-index policy that solves a new LP at each decision epoch. We conclude by providing numerical experiments to compare the efficiency of different LP-based policies.
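The LP relaxation described in the abstract can be illustrated concretely. The sketch below (not the authors' code; variable names, the toy instance, and the `solve_lp_relaxation` helper are illustrative assumptions) solves the standard finite-horizon LP relaxation of a restless bandit: the variable y[t, s, a] is the fraction of arms in state s taking action a at epoch t, subject to an initial condition, flow conservation under the transition matrices, and a budget that activates a fraction alpha of arms at every epoch.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp_relaxation(P, r, x0, alpha, T):
    """Illustrative LP relaxation of a finite-horizon restless bandit.

    P[a]   : S x S transition matrix under action a (0 = passive, 1 = active)
    r[s,a] : per-arm reward for action a in state s
    x0     : initial state distribution, alpha : activation budget, T : horizon
    """
    S, A = r.shape
    n = T * S * A                            # one variable per (t, s, a)
    idx = lambda t, s, a: (t * S + s) * A + a

    # linprog minimizes, so negate the rewards.
    c = np.zeros(n)
    for t in range(T):
        for s in range(S):
            for a in range(A):
                c[idx(t, s, a)] = -r[s, a]

    A_eq, b_eq = [], []
    # Initial condition: sum_a y[0, s, a] = x0[s].
    for s in range(S):
        row = np.zeros(n)
        for a in range(A):
            row[idx(0, s, a)] = 1.0
        A_eq.append(row); b_eq.append(x0[s])
    # Flow conservation: sum_a y[t+1, s', a] = sum_{s,a} y[t, s, a] P[a][s, s'].
    for t in range(T - 1):
        for s2 in range(S):
            row = np.zeros(n)
            for a in range(A):
                row[idx(t + 1, s2, a)] = 1.0
            for s in range(S):
                for a in range(A):
                    row[idx(t, s, a)] -= P[a][s, s2]
            A_eq.append(row); b_eq.append(0.0)
    # Budget: the active fraction equals alpha at every epoch.
    for t in range(T):
        row = np.zeros(n)
        for s in range(S):
            row[idx(t, s, 1)] = 1.0
        A_eq.append(row); b_eq.append(alpha)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * n, method="highs")
    return -res.fun, res.x.reshape(T, S, A)

# Toy two-state instance: reward 1 for activating an arm in state 1.
P = [np.array([[0.9, 0.1], [0.4, 0.6]]),    # passive dynamics
     np.array([[0.5, 0.5], [0.2, 0.8]])]    # active dynamics
r = np.array([[0.0, 0.0], [0.0, 1.0]])
value, y = solve_lp_relaxation(P, r, x0=np.array([1.0, 0.0]),
                               alpha=0.5, T=5)
```

As the population of arms grows, the value of the optimal policy converges to the optimum of this LP, and the occupation measure y is what an LP-index or LP-update policy rounds into per-arm decisions.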
Keywords: Primary: 90C40; secondary: 90C05; 90B99; restless bandits; linear programming; Markov decision processes
Date: 2024
Downloads: http://dx.doi.org/10.1287/moor.2022.0101 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:ormoor:v:49:y:2024:i:4:p:2468-2491
More articles in Mathematics of Operations Research from INFORMS.