Deep Learning to Play Games
Daniele Condorelli and Massimiliano Furlan
Papers from arXiv.org
Abstract:
We train two neural networks adversarially to play normal-form games. At each iteration, a row and column network take a new randomly generated game and output individual mixed strategies. The parameters of each network are independently updated via stochastic gradient descent to minimize expected regret given the opponent's strategy. Our simulations demonstrate that the joint behavior of the networks converges to strategies close to Nash equilibria in almost all games. In all $2 \times 2$ games and in 80% of $3 \times 3$ games with multiple equilibria, the networks select the risk-dominant equilibrium. Our results show how Nash equilibrium emerges from learning across heterogeneous games.
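The training loop described in the abstract can be illustrated with a deliberately stripped-down sketch: a single symmetric $2 \times 2$ stag hunt (hypothetical payoffs chosen for illustration), with each player's "network" replaced by a bare logit vector over actions. Since minimizing expected regret against the opponent's fixed mixed strategy is equivalent to gradient ascent on one's own expected payoff, the two logit vectors are updated independently and simultaneously, mirroring the paper's independent SGD updates. This is an assumption-laden simplification, not the authors' architecture or code.

```python
# Minimal sketch of the adversarial regret-minimization dynamic, reduced
# to one symmetric 2x2 stag hunt; logit vectors stand in for the two
# neural networks (an illustrative simplification, not the paper's setup).
import numpy as np

# Row player's payoff matrix (column player's is A.T by symmetry).
# Action 0 = stag (payoff-dominant), action 1 = hare (risk-dominant).
A = np.array([[4.0, 0.0],
              [3.0, 3.0]])

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

theta_row = np.zeros(2)  # row logits (stand-in for the row network)
theta_col = np.zeros(2)  # column logits (stand-in for the column network)
lr = 0.5

for _ in range(2000):
    x, y = softmax(theta_row), softmax(theta_col)
    # Minimizing expected regret given the opponent's mixed strategy is
    # equivalent to gradient ascent on own expected payoff; for softmax
    # policies the gradient is  x_i * ((A y)_i - x' A y).
    grad_row = x * (A @ y - x @ A @ y)
    # By symmetry, the column player's per-action payoffs against x are A @ x.
    grad_col = y * (A @ x - y @ (A @ x))
    theta_row += lr * grad_row   # independent, simultaneous updates
    theta_col += lr * grad_col

x, y = softmax(theta_row), softmax(theta_col)
```

Starting from uniform play, hare earns 3 against a uniform opponent while stag earns only 2, so the dynamic settles on (hare, hare) — the risk-dominant equilibrium in this game, echoing the selection result reported in the abstract.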
Date: 2024-09
New Economics Papers: this item is included in nep-big, nep-cmp, nep-gth and nep-net
Downloads: http://arxiv.org/pdf/2409.15197 Latest version (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2409.15197