A Model of Adaptive Reinforcement Learning
Julian Romero and Yaroslav Rosokha
Purdue University Economics Working Papers from Purdue University, Department of Economics
Abstract:
We develop a model of learning that extends the classic models of reinforcement learning to a continuous, multidimensional strategy space. The model takes advantage of recent approximation methods to tackle the curse of dimensionality inherent in a traditional discretization approach. Crucially, the model endogenously partitions strategies into sets of similar strategies and allows agents to learn over these sets, which speeds up the learning process. We provide an application of our model to predict which memory-1 mixed strategies will be played in the indefinitely repeated Prisoner's Dilemma game. We show that, despite allowing mixed strategies, strategies close to the pure strategies always defect, grim trigger, and tit-for-tat emerge, a result that qualitatively matches recent strategy-choice experiments with human subjects.
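To make the objects of study concrete, the sketch below simulates memory-1 mixed strategies, vectors giving the probability of cooperating after each possible previous-round outcome, in an indefinitely repeated Prisoner's Dilemma, and then runs a classic Roth-Erev-style reinforcement loop over a small hand-picked candidate set. This is an illustrative sketch only: the stage-game payoffs, the continuation probability, and the candidate set are assumptions, and the discrete loop is the traditional approach the paper moves beyond, not the authors' continuous-space model.

```python
import random

# Assumed stage-game payoffs (T=5, R=3, P=1, S=0) and continuation
# probability; common textbook values, not taken from the paper.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
DELTA = 0.9

def act(s, own, other):
    """Cooperate with the probability the memory-1 strategy s assigns
    to the previous round's outcome ('p0' in the first round)."""
    p = s["p0"] if own is None else s[own + other]
    return "C" if random.random() < p else "D"

def play_match(s1, s2):
    """One indefinitely repeated game: after each round the match
    continues with probability DELTA. Returns average per-round payoffs."""
    u1 = u2 = n = 0
    a1 = a2 = None
    while True:
        b1, b2 = act(s1, a1, a2), act(s2, a2, a1)
        r1, r2 = PAYOFFS[(b1, b2)]
        u1, u2, n = u1 + r1, u2 + r2, n + 1
        a1, a2 = b1, b2
        if random.random() > DELTA:
            return u1 / n, u2 / n

# The pure strategies the abstract names, written as corner points of
# the memory-1 mixed-strategy cube (keys: own action + rival's action).
CANDIDATES = {
    "always-defect": {"p0": 0, "CC": 0, "CD": 0, "DC": 0, "DD": 0},
    "grim-trigger":  {"p0": 1, "CC": 1, "CD": 0, "DC": 0, "DD": 0},
    "tit-for-tat":   {"p0": 1, "CC": 1, "CD": 0, "DC": 1, "DD": 0},
}

# Roth-Erev-style reinforcement over this tiny discrete set: choose a
# strategy in proportion to its accumulated propensity, play a match
# against a random rival, and reinforce with the payoff earned.
propensity = {name: 1.0 for name in CANDIDATES}
for _ in range(2000):
    pick = random.choices(list(propensity), weights=list(propensity.values()))[0]
    rival = random.choice(list(CANDIDATES))
    payoff, _ = play_match(CANDIDATES[pick], CANDIDATES[rival])
    propensity[pick] += payoff

print({name: round(w, 1) for name, w in propensity.items()})
```

The paper's contribution is precisely to replace the hand-picked discrete candidate set above with learning over the continuous strategy cube, using approximation methods and an endogenous partition of strategies into sets of similar strategies.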
Keywords: Reinforcement Learning; Repeated-game Strategies; Repeated Prisoner's Dilemma; Mixed Strategies; Agent-based Models; Markov Strategies
Pages: 17 pages
Date: 2019-03
Downloads: (external link)
https://business.purdue.edu/research/working-papers-series/2024/1343.pdf (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:pur:prukra:1343