X-Armed Bandits
Sébastien Bubeck,
Rémi Munos,
Gilles Stoltz and
Csaba Szepesvari
Additional contact information
Sébastien Bubeck: SEQUEL - Sequential Learning - LIFL - Laboratoire d'Informatique Fondamentale de Lille - Université de Lille, Sciences et Technologies - Université de Lille, Sciences Humaines et Sociales - Inria - Institut National de Recherche en Informatique et en Automatique - Centre Inria de l'Université de Lille - CNRS - Centre National de la Recherche Scientifique - LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal - Centrale Lille
Rémi Munos: SEQUEL - Sequential Learning - LIFL - Laboratoire d'Informatique Fondamentale de Lille - Université de Lille, Sciences et Technologies - Université de Lille, Sciences Humaines et Sociales - Inria - Institut National de Recherche en Informatique et en Automatique - Centre Inria de l'Université de Lille - CNRS - Centre National de la Recherche Scientifique - LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal - Centrale Lille
Csaba Szepesvari: Department of Computing Science [Edmonton] - University of Alberta
Post-Print from HAL
Abstract:
We consider a generalization of stochastic bandits where the set of arms, $\mathcal{X}$, is allowed to be a generic measurable space and the mean-payoff function is "locally Lipschitz" with respect to a dissimilarity function that is known to the decision maker. Under this condition we construct an arm selection policy, called HOO (hierarchical optimistic optimization), with improved regret bounds compared to previous results for a large class of problems. In particular, our results imply that if $\mathcal{X}$ is the unit hypercube in a Euclidean space and the mean-payoff function has a finite number of global maxima around which the behavior of the function is locally continuous with a known smoothness degree, then the expected regret of HOO is bounded up to a logarithmic factor by $\sqrt{n}$, i.e., the rate of growth of the regret is independent of the dimension of the space. We also prove the minimax optimality of our algorithm when the dissimilarity is a metric. Our basic strategy has quadratic computational complexity as a function of the number of time steps and does not rely on the doubling trick. We also introduce a modified strategy, which relies on the doubling trick but runs in linearithmic time. Both results are improvements with respect to previous approaches.
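The abstract only outlines HOO; the following is a minimal sketch of the algorithm in Python for the one-dimensional case $\mathcal{X} = [0,1]$ with binary interval splitting. The constants nu1 and rho, the toy reward function, and the node representation are illustrative assumptions chosen for this sketch, not a reproduction of the authors' implementation.

```python
# Minimal sketch of HOO on X = [0, 1] with binary interval splitting.
# nu1, rho, and the test reward function below are illustrative assumptions.
import math
import random

class Node:
    def __init__(self, low, high, depth):
        self.low, self.high, self.depth = low, high, depth
        self.count = 0         # T(h,i): times this node was traversed
        self.mean = 0.0        # empirical mean of rewards seen through this node
        self.children = None   # expanded lazily into two half-intervals
        self.B = float("inf")  # B-value; infinite until the node is visited

def u_value(node, t, nu1=1.0, rho=0.5):
    """Optimistic bound U(h,i) = mean + sqrt(2 ln t / T) + nu1 * rho^h."""
    if node.count == 0:
        return float("inf")
    return (node.mean + math.sqrt(2 * math.log(t) / node.count)
            + nu1 * rho ** node.depth)

def hoo_round(root, reward_fn, t):
    # 1. Walk down the tree, following the child with the larger B-value,
    #    until reaching a node that has not been expanded yet.
    path, node = [root], root
    while node.children is not None:
        node = max(node.children, key=lambda c: c.B)
        path.append(node)
    # 2. Expand the selected leaf into two halves of its interval.
    mid = (node.low + node.high) / 2
    node.children = [Node(node.low, mid, node.depth + 1),
                     Node(mid, node.high, node.depth + 1)]
    # 3. Play an arbitrary arm in the leaf's region, observe a noisy reward.
    arm = random.uniform(node.low, node.high)
    reward = reward_fn(arm)
    # 4. Update counts and means along the path, then refresh B-values
    #    bottom-up: B(h,i) = min(U(h,i), max over children of B).
    for v in path:
        v.count += 1
        v.mean += (reward - v.mean) / v.count
    for v in reversed(path):
        b_kids = max(c.B for c in v.children)
        v.B = min(u_value(v, t), b_kids)
    return arm, reward

if __name__ == "__main__":
    f = lambda x: 1 - abs(x - 0.3) + random.gauss(0, 0.05)  # toy payoff + noise
    root = Node(0.0, 1.0, 0)
    for t in range(1, 2001):
        hoo_round(root, f, t)
```

Note that this sketch refreshes B-values only along the traversed path, although the B-values of off-path nodes also depend on $\ln t$; handling such updates is what separates the basic quadratic-time strategy from the linearithmic doubling-trick variant mentioned in the abstract.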
Keywords: bandits with infinitely many arms; optimistic online optimization; regret bounds; minimax rates
Date: 2011-04-19
Note: View the original document on HAL open archive server: https://hal.science/hal-00450235v2
Published in Journal of Machine Learning Research, 2011, 12, pp.1655-1695
Downloads: https://hal.science/hal-00450235v2/document (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-00450235