Learning with minimal information in continuous games

Sebastian Bervoets, Mario Bravo and Mathieu Faure
Additional contact information
Mario Bravo: USACH - Universidad de Santiago de Chile [Santiago]

Post-Print from HAL

Abstract: While payoff-based learning models are almost exclusively devised for finite-action games, where players can test every action, it is harder to design such learning processes for continuous games. We construct a stochastic learning rule, designed for games with continuous action sets, which requires no sophistication from the players and is simple to implement: players update their actions according to variations in their own payoff between the current and previous action. We then analyze its behavior in several classes of continuous games and show that convergence to a stable Nash equilibrium is guaranteed in all games with strategic complements as well as in concave games, while convergence to a Nash equilibrium occurs in all locally ordinal potential games as soon as Nash equilibria are isolated.
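To make the idea concrete, below is a minimal Python sketch of a payoff-based update of this flavor. It is not the paper's exact stochastic rule: the quadratic two-player game, the step-size schedule, and the sign-based update are assumptions made purely for illustration. Each player observes only the variation in its own payoff between the previous and current action and keeps moving in the same direction when the payoff increased, reversing otherwise.

```python
import numpy as np

# Illustrative sketch of a payoff-based learning rule for a continuous game.
# NOT the authors' exact rule: the game, step sizes, and sign-based update
# below are assumptions for demonstration only.

rng = np.random.default_rng(0)

def payoffs(x):
    # Hypothetical two-player concave game with a unique Nash equilibrium
    # at (0, 0): u_i(x) = -(x_i - 0.5 * x_j)^2.
    return np.array([-(x[0] - 0.5 * x[1]) ** 2,
                     -(x[1] - 0.5 * x[0]) ** 2])

def simulate(T=20000, x0=(2.0, -1.5)):
    x_prev = np.array(x0, dtype=float)
    u_prev = payoffs(x_prev)
    # First move: a small random perturbation, so each player has a
    # previous action to compare against.
    x = x_prev + 0.1 * rng.standard_normal(2)
    for n in range(1, T + 1):
        gamma = (n + 10) ** -0.7  # vanishing step size (assumed schedule)
        u = payoffs(x)
        # Each player uses only the change in its OWN payoff between the
        # previous and current action: continue in the same direction if
        # the payoff went up, reverse otherwise.
        direction = np.sign((u - u_prev) * (x - x_prev))
        x_new = x + gamma * direction + 0.1 * gamma * rng.standard_normal(2)
        x_prev, u_prev, x = x, u, x_new
    return x

if __name__ == "__main__":
    # In this toy concave game, actions should drift toward the Nash
    # equilibrium (0, 0) as the step size vanishes.
    print(simulate())
```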

Keywords: Payoff-based learning; continuous games; stochastic approximation
Date: 2020-11
Citations: 3 (in EconPapers)

Published in: Theoretical Economics, 2020, 15 (4), pp. 1471-1508. ⟨10.3982/TE3435⟩

Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-02534257

DOI: 10.3982/TE3435

Handle: RePEc:hal:journl:hal-02534257