Stochastic Learning Dynamics and Speed of Convergence in Population Games
Itai Arieli and H. Peyton Young
No 570, Economics Series Working Papers from University of Oxford, Department of Economics
Abstract:
Consider a finite normal-form game G in which each player position is occupied by a population of N individuals, and the payoff to any given individual is the expected payoff from playing against a group drawn at random from the other positions. Assume that individuals adjust their behavior asynchronously via a stochastic better-reply dynamic. We show that when G is weakly acyclic, convergence occurs with probability one, but the expected waiting time to come close to Nash equilibrium can grow exponentially in N. Unlike previous results in the literature, our results show that Nash convergence can be exponentially slow even in games with very simple payoff structures. We then show that the introduction of aggregate shocks to players' information and/or payoffs can greatly accelerate the learning process. In fact, if G is weakly acyclic and the payoffs are generic, the expected waiting time to come ε-close to Nash equilibrium is bounded by a function that is polynomial in 1/ε, exponential in the number of strategies, and independent of the population size N.
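The asynchronous better-reply dynamic the abstract describes can be sketched in code. The following is a minimal hypothetical illustration, not the authors' model: it assumes a 2x2 two-population pure coordination game, an illustrative population size N = 51 per position (odd, so neither population can split exactly in half), and a revision cap chosen only to bound the loop. Each period one randomly chosen individual switches to a strategy that strictly improves its expected payoff against the opposing population, if such a strategy exists.

```python
import random

# Hedged sketch of an asynchronous stochastic better-reply dynamic in a
# two-population game. The 2x2 pure-coordination payoffs, N, and the step
# cap are illustrative assumptions, not taken from the paper.
PAYOFF = [[1, 0], [0, 1]]   # both positions earn 1 when strategies match
N = 51                      # individuals per player position (odd: no exact ties)

def expected_payoff(strategy, opponents):
    """Average payoff of `strategy` against the opposing population."""
    return sum(PAYOFF[strategy][s] for s in opponents) / len(opponents)

def better_reply_step(pops, rng):
    """One asynchronous revision: a randomly chosen individual switches to
    a strategy that strictly improves its expected payoff, if one exists."""
    pos = rng.randrange(2)          # which population revises
    i = rng.randrange(N)            # which individual revises
    opponents = pops[1 - pos]
    base = expected_payoff(pops[pos][i], opponents)
    better = [s for s in (0, 1) if expected_payoff(s, opponents) > base]
    if better:
        pops[pos][i] = rng.choice(better)

def at_nash(pops):
    """True once both populations are monomorphic on the same strategy."""
    return len(set(pops[0])) == 1 and set(pops[0]) == set(pops[1])

rng = random.Random(0)
pops = [[rng.randrange(2) for _ in range(N)] for _ in range(2)]
steps = 0
while not at_nash(pops) and steps < 200_000:
    better_reply_step(pops, rng)
    steps += 1
print("converged:", at_nash(pops), "after", steps, "revisions")
```

In this simple coordination game convergence is quick; the paper's point is that in some weakly acyclic games the analogous expected waiting time grows exponentially in N, while aggregate shocks yield a bound independent of N.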
Date: 2011-09-01
New Economics Papers: this item is included in nep-evo and nep-gth
Citations: View citations in EconPapers (1)
There are no downloads for this item; see the EconPapers FAQ for hints about obtaining it.
Persistent link: https://EconPapers.repec.org/RePEc:oxf:wpaper:570
More papers in Economics Series Working Papers from University of Oxford, Department of Economics Contact information at EDIRC.
Bibliographic data for series maintained by Anne Pouliquen.