Deep Learning Across Games

Daniele Condorelli and Massimiliano Furlan

Papers from arXiv.org

Abstract: We train two neural networks adversarially to play static games. At each iteration, a row and column network observe a new random bimatrix game and output individual mixed strategies. The parameters of each network are independently updated via stochastic gradient descent on a loss defined by the individual squared regret experienced in the game. Simulations show that the joint behavior of the trained networks approximates a Nash equilibrium in all games. In $2\times2$ games with multiple equilibria, the networks select the risk dominant equilibrium. These findings, which are robust and generalise out-of-distribution, illustrate how equilibrium emerges from learning across heterogeneous games.
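The loss described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `squared_regret` helper, the payoff matrix, and the example strategies are my own choices. For the row player, regret is the gap between the best pure-response payoff against the opponent's mixed strategy and the expected payoff of the player's own mixed strategy; the training loss is that gap squared.

```python
import numpy as np

def squared_regret(payoff, own_mix, opp_mix):
    """Squared regret of the row player's mixed strategy `own_mix`
    against `opp_mix`, given the row player's payoff matrix `payoff`.
    (Illustrative helper; not from the paper.)"""
    expected = own_mix @ payoff @ opp_mix      # expected payoff of own_mix
    best_reply = np.max(payoff @ opp_mix)      # payoff of the best pure reply
    return (best_reply - expected) ** 2

# Matching pennies: the unique Nash equilibrium mixes 50/50 for both players.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
ne = np.array([0.5, 0.5])
print(squared_regret(A, ne, ne))                              # 0.0 at equilibrium
print(squared_regret(A, np.array([0.0, 1.0]), np.array([1.0, 0.0])))  # 4.0 off equilibrium
```

The column player's loss would be defined symmetrically from its own payoff matrix; in the paper each network's parameters are updated by stochastic gradient descent on its own loss, with a fresh random game drawn at every iteration.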

Date: 2024-09, Revised 2025-05
New Economics Papers: this item is included in nep-big, nep-cmp, nep-gth and nep-net
References: view references in EconPapers; complete reference list from CitEc

Downloads: (external link)
http://arxiv.org/pdf/2409.15197 Latest version (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2409.15197


More papers in Papers from arXiv.org
Bibliographic data for series maintained by arXiv administrators.

Page updated 2025-05-09
Handle: RePEc:arx:papers:2409.15197