A Myopic Adjustment Process for Mean Field Games with Finite State and Action Space
Berenice Anne Neumann
Papers from arXiv.org
Abstract:
In this paper, we introduce a natural learning rule for mean field games with finite state and action space, the so-called myopic adjustment process. The main motivation for this rule is the complexity of the computations required to determine dynamic mean field equilibria, which makes it questionable whether agents are indeed able to play these equilibria. We prove that the myopic adjustment process converges locally towards stationary equilibria with deterministic equilibrium strategies under rather broad conditions. Moreover, for a two-strategy setting, we also obtain a global convergence result under stronger, yet intuitive, conditions.
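To give a flavour of the kind of dynamics described in the abstract, the sketch below simulates a discrete-time, smoothed variant of a myopic adjustment process in a toy two-strategy population game. This is an illustrative assumption, not the paper's model: the paper studies a continuous-time process for general finite state and action spaces, and the congestion payoffs, logit smoothing, and step size here are invented for the example.

```python
# Toy sketch (assumed setup): two actions A and B, population share m plays A.
# Agents repeatedly adjust toward a myopic (smoothed) best response to the
# current population state; the share converges to the stationary equilibrium.
import math

def payoff_A(m):
    # Congestion-type payoff: A gets less attractive as more agents play it.
    return 1.0 - m

def payoff_B(m):
    # Constant outside option (hypothetical choice for this example).
    return 0.5

def smoothed_best_response(m, tau=0.1):
    # Logit choice: share of agents that would myopically pick A
    # given the current population share m playing A.
    return 1.0 / (1.0 + math.exp(-(payoff_A(m) - payoff_B(m)) / tau))

def myopic_adjustment(m0=0.9, eta=0.1, steps=500):
    # Each step, the population moves a fraction eta toward the myopic
    # best response to the current state -- a discrete-time stand-in for
    # the continuous-time adjustment process studied in the paper.
    m = m0
    for _ in range(steps):
        m += eta * (smoothed_best_response(m) - m)
    return m

m_star = myopic_adjustment()
# In this toy game the unique stationary equilibrium is m = 0.5,
# where payoff_A(m) = payoff_B(m).
```

In the example the adjustment map is a contraction near the equilibrium, so the share settles at m = 0.5 from any starting point; the paper's results give conditions under which analogous local (and, for two strategies, global) convergence holds in the actual mean field game.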
Date: 2020-08
New Economics Papers: this item is included in nep-gth
Downloads: http://arxiv.org/pdf/2008.13420 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2008.13420