EconPapers

Computational Performance of Deep Reinforcement Learning to find Nash Equilibria

Christoph Graf, Viktor Zobernig, Johannes Schmidt and Claude Klöckl

Papers from arXiv.org

Abstract: We test the performance of deep deterministic policy gradient (DDPG), a deep reinforcement learning algorithm able to handle continuous state and action spaces, at learning Nash equilibria in a setting where firms compete in prices. Such algorithms are typically considered model-free because they require neither transition probability functions (as in, e.g., Markov games) nor predefined functional forms. Despite being model-free, the algorithm relies on a large set of parameters in its various steps, e.g., learning rates, memory buffers, state-space dimensioning, normalizations, and noise decay rates. The purpose of this work is to systematically test the effect of these parameter configurations on convergence to the analytically derived Bertrand equilibrium. We find parameter choices that reach convergence rates of up to 99%. This reliable convergence may make the method a useful tool for studying the strategic behavior of firms even in more complex settings.

Keywords: Bertrand Equilibrium; Competition in Uniform Price Auctions; Deep Deterministic Policy Gradient Algorithm; Parameter Sensitivity Analysis
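
The analytical benchmark referred to in the abstract is the textbook Bertrand result: with a homogeneous good and identical constant marginal costs, the unique Nash equilibrium has every firm pricing at marginal cost. The short Python sketch below is not the authors' code; the cost, demand, and price-step values are illustrative assumptions. It shows iterated best responses in a symmetric duopoly converging to that price:

    # Symmetric Bertrand duopoly: the lower-priced firm serves the whole
    # market, a tie splits demand, and the higher-priced firm sells nothing.
    def profit(p_own: float, p_rival: float, c: float = 1.0, demand: float = 10.0) -> float:
        if p_own < p_rival:
            return (p_own - c) * demand
        if p_own == p_rival:
            return (p_own - c) * demand / 2
        return 0.0

    # Best response: undercut the rival by one price step, but never price below cost.
    def best_response(p_rival: float, c: float = 1.0, step: float = 0.01) -> float:
        return max(c, p_rival - step)

    # Iterated best responses drive both prices down to marginal cost (p = c).
    p1, p2 = 5.0, 4.0
    for _ in range(1000):
        p1, p2 = best_response(p2), best_response(p1)
    print(p1, p2, profit(p1, p2))  # -> 1.0 1.0 0.0 (prices at cost, zero profit)

The paper's question is whether a model-free DDPG learner, under different settings of the hyperparameters listed above (learning rates, replay-buffer size, noise decay, and so on), rediscovers this zero-profit price on its own.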

Date: 2021-04
New Economics Papers: this item is included in nep-big, nep-cmp, nep-cwa and nep-gth
References: View references in EconPapers. View complete reference list from CitEc.

Downloads: (external link)
http://arxiv.org/pdf/2104.12895 Latest version (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2104.12895

More papers in Papers from arXiv.org
Bibliographic data for series maintained by arXiv administrators (help@arxiv.org).
Handle: RePEc:arx:papers:2104.12895