Reinforcement Learning Rules in a Repeated Game
Ann Maria Bell
Computational Economics, 2001, vol. 18, issue 1, 89-110
Abstract:
This paper examines the performance of simple reinforcement learning algorithms in a stationary environment and in a repeated game where the environment evolves endogenously based on the actions of other agents. Some types of reinforcement learning rules can be extremely sensitive to small changes in the initial conditions; consequently, events early in a simulation can affect the performance of the rule over a relatively long time horizon. However, when multiple adaptive agents interact, algorithms that performed poorly in a stationary environment often converge rapidly to stable aggregate behaviors despite the slow and erratic behavior of individual learners. Algorithms that are robust in stationary environments can exhibit slow convergence in an evolving environment. Copyright 2001 by Kluwer Academic Publishers
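The paper's specific learning rules are not reproduced in this record. As an illustration only, a minimal cumulative-payoff (Roth-Erev-style) reinforcement rule of the kind the abstract describes might look like the following sketch; the class, the two-player coordination game, and all parameter names are hypothetical, not taken from the paper:

```python
import random


class ReinforcementLearner:
    """Cumulative-payoff (Roth-Erev-style) reinforcement learner.

    Each action has a propensity; choice probabilities are proportional
    to propensities, and the chosen action's propensity grows by the
    payoff received. When initial propensities are small, early payoffs
    dominate the mixture for a long time -- one way such rules become
    sensitive to initial conditions.
    """

    def __init__(self, n_actions=2, initial_propensity=1.0, rng=None):
        self.propensities = [initial_propensity] * n_actions
        self.rng = rng or random.Random()

    def choose(self):
        # Sample an action with probability proportional to its propensity.
        r = self.rng.random() * sum(self.propensities)
        cum = 0.0
        for action, p in enumerate(self.propensities):
            cum += p
            if r <= cum:
                return action
        return len(self.propensities) - 1

    def update(self, action, payoff):
        # Reinforce the chosen action by the payoff it earned.
        self.propensities[action] += payoff


def play(rounds=2000, seed=0):
    """Two learners repeatedly play a coordination game (payoff 1 on a
    match, 0 otherwise); returns the overall coordination rate."""
    rng = random.Random(seed)
    a = ReinforcementLearner(rng=random.Random(rng.random()))
    b = ReinforcementLearner(rng=random.Random(rng.random()))
    matches = 0.0
    for _ in range(rounds):
        x, y = a.choose(), b.choose()
        payoff = 1.0 if x == y else 0.0
        a.update(x, payoff)
        b.update(y, payoff)
        matches += payoff
    return matches / rounds
```

In a run of `play()`, the joint play typically drifts toward one action as mutual reinforcement compounds, even while each individual learner's propensities adjust slowly, loosely mirroring the stable-aggregate-behavior finding summarized above.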
Date: 2001
Citations: View citations in EconPapers (4)
Downloads: (external link)
http://journals.kluweronline.com/issn/0927-7099/contents (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:kap:compec:v:18:y:2001:i:1:p:89-110
Ordering information: This journal article can be ordered from
http://www.springer. ... ry/journal/10614/PS2
Computational Economics is currently edited by Hans Amman
More articles in Computational Economics from Springer and the Society for Computational Economics. Contact information at EDIRC.
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.