Disentangling Exploration from Exploitation
Alessandro Lizzeri, Eran Shmaya and Leeat Yariv
Papers from arXiv.org
Abstract:
Starting from Robbins (1952), the literature on experimentation via multi-armed bandits has wedded exploration and exploitation. Nonetheless, in many applications, agents' exploration and exploitation need not be intertwined: a policymaker may assess new policies different from the status quo; an investor may evaluate projects outside her portfolio. We characterize the optimal experimentation policy when exploration and exploitation are disentangled in the case of Poisson bandits, allowing for general news structures. The optimal policy features complete learning asymptotically, exhibits considerable persistence, but cannot be identified by an index à la Gittins. Disentanglement is particularly valuable for intermediate parameter values.
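The contrast in the abstract can be illustrated with a toy good-news Poisson bandit. In the standard (coupled) model, pulling the risky arm is both the only source of information and the only way to earn its payoff, so learning forgoes the safe flow; in the disentangled variant sketched below, the agent keeps exploiting the safe arm while separately observing the risky arm's news process at a flow cost. This is a minimal sketch under assumed parameters; the function names, the belief-threshold rule, and the cost structure are illustrative choices, not the paper's model or its optimal policy.

```python
import math
import random

def update_no_news(p, lam, dt):
    """Bayes update of the belief that the risky arm is good, after
    observing no breakthrough over a short interval dt (good-news case:
    breakthroughs arrive at Poisson rate lam iff the arm is good)."""
    stay = p * math.exp(-lam * dt)
    return stay / (stay + (1.0 - p))

def coupled_run(p0, lam, safe, threshold, horizon, dt, good, seed=0):
    """Classic (coupled) bandit: the agent learns about the risky arm
    only by pulling it, forgoing the safe flow `safe` while doing so."""
    rng = random.Random(seed)
    p, payoff = p0, 0.0
    for _ in range(int(round(horizon / dt))):
        if p > threshold:                       # exploring = exploiting risky arm
            if good and rng.random() < lam * dt:
                return payoff + 1.0             # breakthrough: lump-sum reward
            p = update_no_news(p, lam, dt)      # no news is bad news
        else:
            payoff += safe * dt                 # settle on the safe arm
    return payoff

def disentangled_run(p0, lam, safe, threshold, cost, horizon, dt, good, seed=0):
    """Disentangled: the agent keeps collecting the safe flow while paying
    a flow cost `cost` to observe the risky arm's news process on the side."""
    rng = random.Random(seed)
    p, payoff = p0, 0.0
    for _ in range(int(round(horizon / dt))):
        payoff += safe * dt                     # exploitation is unaffected...
        if p > threshold:                       # ...while exploration runs in parallel
            payoff -= cost * dt
            if good and rng.random() < lam * dt:
                return payoff + 1.0
            p = update_no_news(p, lam, dt)
    return payoff
```

In the coupled run, experimentation carries an opportunity cost (the forgone safe flow), so pessimistic beliefs shut learning down; in the disentangled run, observation competes only with its own flow cost, which is one intuition for why learning can continue, and complete, asymptotically.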
Date: 2024-04
New Economics Papers: this item is included in nep-mic and nep-ppm
Citations: 1 (in EconPapers)
Downloads: http://arxiv.org/pdf/2404.19116 (latest version, application/pdf)
Related works:
Working Paper: Disentangling Exploration from Exploitation (2024)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2404.19116
Bibliographic data for this series is maintained by arXiv administrators.