Bayesian Exploration: Incentivizing Exploration in Bayesian Games

Yishay Mansour, Alex Slivkins, Vasilis Syrgkanis and Zhiwei Steven Wu
Additional contact information
Yishay Mansour: Tel Aviv University, Tel Aviv, Israel; Google, Tel Aviv, Israel
Alex Slivkins: Microsoft Research, New York, New York 10012
Vasilis Syrgkanis: Microsoft Research, Cambridge, Massachusetts 02142
Zhiwei Steven Wu: Carnegie Mellon University, Pittsburgh, Pennsylvania 15213

Operations Research, 2022, vol. 70, issue 2, 1105-1127

Abstract: We consider a ubiquitous scenario in the internet economy in which individual decision makers (henceforth, agents) both produce and consume information as they make strategic choices in an uncertain environment. This creates a three-way trade-off between exploration (trying out insufficiently explored alternatives to help others in the future), exploitation (making optimal decisions given the information discovered by other agents), and the incentives of the agents (who are myopically interested in exploitation while preferring the others to explore). We posit a principal who controls the flow of information from the agents that came before to the ones that arrive later and strives to coordinate the agents toward a socially optimal balance between exploration and exploitation, without using any monetary transfers. The goal is to design a recommendation policy for the principal that respects agents’ incentives and minimizes a suitable notion of regret. We extend prior work in this direction to allow the agents to interact with one another in a shared environment: at each time step, multiple agents arrive to play a Bayesian game, receive recommendations, choose their actions, receive their payoffs, and then leave the game forever. The agents now face two sources of uncertainty: the actions of the other agents and the parameters of the uncertain game environment. Our main contribution is to show that the principal can achieve constant regret when the utilities are deterministic (the constant depends on the prior distribution but not on the time horizon) and logarithmic regret when the utilities are stochastic. As a key technical tool, we introduce the concept of explorable actions, the actions that some incentive-compatible policy can recommend with nonzero probability. We show how the principal can identify (and explore) all explorable actions and use the revealed information to perform optimally.
In particular, our results significantly improve over the prior work on the special case of a single agent per round, which relies on assumptions to guarantee that all actions are explorable. Interestingly, we do not require the principal’s utility to be aligned with the cumulative utility of the agents; instead, the principal can optimize an arbitrary notion of per-round reward.
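To make the incentive problem concrete, the following is a minimal, hypothetical simulation sketch of hidden exploration in the single-agent-per-round special case, not the paper's general Bayesian-game algorithm. All specifics (the two-action instance, the prior, the utility table, and the exploration probability `eps = 0.05`) are illustrative assumptions: the principal recommends the prior-best action to most agents and occasionally recommends the unexplored action, pooling exploration with exploitation so that a recommended agent cannot tell the two apart.

```python
import random

# Hypothetical two-action, deterministic-utility instance (illustrative, not
# from the paper): the state theta is drawn from a known prior, utilities
# UTIL[theta][action] are fixed, and each arriving agent follows the
# principal's recommendation only if doing so is incentive-compatible.

PRIOR = {"good": 0.3, "bad": 0.7}          # prior over the unknown state
UTIL = {"good": {0: 1.0, 1: 2.0},          # action 1 is better in "good"...
        "bad":  {0: 1.0, 1: 0.0}}          # ...and worse in "bad"

def run(horizon=1000, seed=0, eps=0.05):
    rng = random.Random(seed)
    theta = rng.choices(list(PRIOR), weights=PRIOR.values())[0]
    # Action 0 is the prior-best action, so recommending it is trivially
    # incentive-compatible; action 1 is "explorable" only because the
    # principal can hide its exploration among rounds where action 1 might
    # genuinely be best under the prior.
    known = {0: UTIL[theta][0]}
    total = 0.0
    for _ in range(horizon):
        if 1 not in known and rng.random() < eps:
            # Explore action 1 with small probability eps. For a suitably
            # small eps this recommendation stays incentive-compatible (a
            # modeling assumption in this sketch, not a derived bound).
            known[1] = UTIL[theta][1]
            total += known[1]
            continue
        # Exploit: recommend the best action among those already explored.
        best = max(known, key=known.get)
        total += known[best]
    return theta, total

theta, total = run()
print(theta, round(total, 1))
```

With deterministic utilities, a single exploration of each explorable action suffices, which is the intuition behind the constant-regret result; the stochastic-utility case requires repeated sampling and yields logarithmic regret.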

Keywords: Revenue Management and Market Analytics; information design; Bayesian persuasion; multiarmed bandits; exploration–exploitation trade-off; Bayesian incentive compatibility; Bayes-correlated equilibrium; Bayesian regret (search for similar items in EconPapers)
Date: 2022

Downloads: http://dx.doi.org/10.1287/opre.2021.2205 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:70:y:2022:i:2:p:1105-1127



Handle: RePEc:inm:oropre:v:70:y:2022:i:2:p:1105-1127