Learned Collusion

Olivier Compte (Paris School of Economics)

Papers from arXiv.org

Abstract: Q-learning can be described as an all-purpose automaton that provides estimates (Q-values) of the continuation values associated with each available action and follows the naive policy of almost always choosing the action with highest Q-value. We consider a family of automata based on Q-values, whose policy may systematically favor some actions over others, for example through a bias that favors cooperation. We look for stable equilibrium biases, easily learned under converging logit/best-response dynamics over biases, not requiring any tacit agreement. These biases strongly foster collusion or cooperation across a rich array of payoff and monitoring structures, independently of initial Q-values.
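
To make the mechanism described in the abstract concrete, the following minimal Python sketch shows a Q-value-based automaton whose policy adds a fixed bias toward cooperation before selecting the highest-valued action, so that bias = 0 recovers the naive greedy policy. This is not the paper's implementation: the prisoner's-dilemma payoffs, learning rate, discount factor, tremble probability, stateless representation, and bias values are illustrative assumptions, and the logit/best-response dynamics over biases studied in the paper are not modeled here.

# Minimal sketch (illustrative assumptions throughout, not the paper's code):
# a Q-value-based automaton for a repeated prisoner's dilemma whose policy
# adds a fixed bias to the cooperative action's Q-value before acting greedily.

import random

ACTIONS = ["C", "D"]                      # cooperate / defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,   # row player's stage payoffs (assumed)
          ("D", "C"): 4, ("D", "D"): 1}

class BiasedQAutomaton:
    def __init__(self, bias=0.0, alpha=0.1, discount=0.9, epsilon=0.05):
        self.q = {a: 0.0 for a in ACTIONS}  # one Q-value per action (stateless sketch)
        self.bias = bias                    # added to the cooperative action's Q-value
        self.alpha = alpha
        self.discount = discount
        self.epsilon = epsilon              # "almost always" greedy: small trembles

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        # The policy favors C by the bias; bias = 0 is the naive greedy policy.
        adjusted = {a: self.q[a] + (self.bias if a == "C" else 0.0) for a in ACTIONS}
        return max(adjusted, key=adjusted.get)

    def update(self, action, payoff):
        # Standard Q-learning update toward payoff plus discounted continuation value.
        target = payoff + self.discount * max(self.q.values())
        self.q[action] += self.alpha * (target - self.q[action])

def play(bias, periods=5000):
    """Two identical biased automata play the repeated game; returns the cooperation rate."""
    p1, p2 = BiasedQAutomaton(bias), BiasedQAutomaton(bias)
    coop = 0
    for _ in range(periods):
        a1, a2 = p1.choose(), p2.choose()
        p1.update(a1, PAYOFF[(a1, a2)])
        p2.update(a2, PAYOFF[(a2, a1)])
        coop += (a1 == "C") + (a2 == "C")
    return coop / (2 * periods)

if __name__ == "__main__":
    for b in (0.0, 0.5, 2.0):
        print(f"bias={b}: cooperation rate = {play(b):.2f}")

The sketch only illustrates how a bias enters the policy; it does not reproduce the paper's equilibrium analysis of which biases are stable and learnable.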

Date: 2023-04, Revised 2025-05
New Economics Papers: this item is included in nep-des, nep-gth and nep-mic

Downloads: http://arxiv.org/pdf/2304.12647 (latest version, PDF)

Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2304.12647

Handle: RePEc:arx:papers:2304.12647