Contextual Bandits with Cross-Learning

Santiago Balseiro, Negin Golrezaei, Mohammad Mahdian, Vahab Mirrokni and Jon Schneider
Additional contact information
Santiago Balseiro: Columbia Business School, Columbia University, New York 10027
Negin Golrezaei: Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
Mohammad Mahdian: Google Research, New York, New York 10011
Vahab Mirrokni: Google Research, New York, New York 10011
Jon Schneider: Google Research, New York, New York 10011

Mathematics of Operations Research, 2023, vol. 48, issue 3, 1607-1629

Abstract: In the classic contextual bandits problem, in each round t, a learner observes some context c, chooses some action i to perform, and receives some reward r_{i,t}(c). We consider the variant of this problem in which, in addition to receiving the reward r_{i,t}(c), the learner also learns the values of r_{i,t}(c′) for some other contexts c′ in a set O_i(c); that is, the rewards that would have been achieved by performing that action under different contexts c′ ∈ O_i(c). This variant arises in several strategic settings, such as learning how to bid in nontruthful repeated auctions, which has gained much attention lately as many platforms have switched to running first-price auctions. We call this problem the contextual bandits problem with cross-learning. The best algorithms for the classic contextual bandits problem achieve Õ(√(CKT)) regret against all stationary policies, where C is the number of contexts, K the number of actions, and T the number of rounds. We design and analyze new algorithms for the contextual bandits problem with cross-learning and show that their regret has better dependence on the number of contexts. Under complete cross-learning, in which the rewards for all contexts are learned when choosing an action (that is, the set O_i(c) contains all contexts), we show that our algorithms achieve regret Õ(√(KT)), removing the dependence on C. In all other cases, that is, under partial cross-learning, in which |O_i(c)| < C for some context–action pair (i, c), the regret bounds depend on how the sets O_i(c) affect the degree to which cross-learning between contexts is possible. We simulate our algorithms on real auction data from an ad exchange running first-price auctions and show that they outperform traditional contextual bandit algorithms.
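The complete-cross-learning feedback model described in the abstract can be sketched in code: when the learner pulls an arm, it observes that arm's reward under every context, so per-context estimates share observations and exploration cost stops scaling with C. The sketch below is a simple shared-count UCB heuristic written to illustrate the feedback structure only; the class name and reward setup are invented for the example, and this is not the algorithm designed and analyzed in the paper.

```python
import math

class CrossLearningUCB:
    """Illustrative UCB-style learner with complete cross-learning:
    one pull of an arm reveals its reward under *every* context, so
    every context's estimate for that arm is updated at once.
    (A heuristic sketch, not the paper's algorithm.)"""

    def __init__(self, n_arms, n_contexts):
        self.n_arms = n_arms
        self.n_contexts = n_contexts
        # Per-(context, arm) observation counts and running mean rewards.
        self.count = [[0] * n_arms for _ in range(n_contexts)]
        self.mean = [[0.0] * n_arms for _ in range(n_contexts)]
        self.t = 0  # total number of rounds played

    def select(self, context):
        """Pick an arm for the given context by the UCB1 index."""
        self.t += 1
        for arm in range(self.n_arms):
            if self.count[context][arm] == 0:
                return arm  # try every arm at least once
        return max(
            range(self.n_arms),
            key=lambda a: self.mean[context][a]
            + math.sqrt(2.0 * math.log(self.t) / self.count[context][a]),
        )

    def update(self, arm, rewards_by_context):
        """Complete cross-learning feedback: rewards_by_context[c] is the
        reward the pulled arm would have earned under context c, so the
        estimate for (c, arm) is updated for every context c."""
        for c, r in enumerate(rewards_by_context):
            self.count[c][arm] += 1
            self.mean[c][arm] += (r - self.mean[c][arm]) / self.count[c][arm]
```

Because every context's counts grow on each round regardless of which context was observed, the per-context exploration phases are shared rather than repeated C times, which is the intuition behind removing the dependence on C in the complete-cross-learning regime.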

Keywords: Primary: 68W40; secondary: 91B26; contextual bandits; cross-learning; bidding; first price auctions (search for similar items in EconPapers)
Date: 2023

Downloads: (external link)
http://dx.doi.org/10.1287/moor.2022.1313 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:ormoor:v:48:y:2023:i:3:p:1607-1629


More articles in Mathematics of Operations Research from INFORMS.
Bibliographic data for series maintained by Chris Asher.

Page updated 2025-03-19
Handle: RePEc:inm:ormoor:v:48:y:2023:i:3:p:1607-1629