Robustness and Sample Complexity of Model-Based MARL for General-Sum Markov Games
Jayakumar Subramanian (Adobe Inc.), Amit Sinha (McGill University) and Aditya Mahajan (McGill University)
Dynamic Games and Applications, 2023, vol. 13, issue 1, No 4, 56-88
Abstract: Multi-agent reinforcement learning (MARL) is often modeled using the framework of Markov games (also called stochastic games or dynamic games). Most of the existing literature on MARL concentrates on zero-sum Markov games and is not applicable to general-sum Markov games. It is known that the best-response dynamics in general-sum Markov games are not a contraction. Therefore, different equilibria in general-sum Markov games can have different values. Moreover, the Q-function is not sufficient to completely characterize the equilibrium. Given these challenges, model-based learning is an attractive approach for MARL in general-sum Markov games. In this paper, we investigate the fundamental question of sample complexity for model-based MARL algorithms in general-sum Markov games. We show two results. We first use Hoeffding inequality-based bounds to show that $\tilde{\mathcal{O}}((1-\gamma)^{-4}\alpha^{-2})$ samples per state–action pair are sufficient to obtain an $\alpha$-approximate Markov perfect equilibrium with high probability, where $\gamma$ is the discount factor and the $\tilde{\mathcal{O}}(\cdot)$ notation hides logarithmic terms. We then use Bernstein inequality-based bounds to show that $\tilde{\mathcal{O}}((1-\gamma)^{-1}\alpha^{-2})$ samples are sufficient. To obtain these results, we study the robustness of Markov perfect equilibria to model approximations. We show that a Markov perfect equilibrium of an approximate (or perturbed) game is always an approximate Markov perfect equilibrium of the original game and provide explicit bounds on the approximation error. We illustrate the results via a numerical example.
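The abstract's two sample-complexity bounds can be compared numerically. The sketch below is illustrative only: it evaluates the leading terms $(1-\gamma)^{-4}\alpha^{-2}$ and $(1-\gamma)^{-1}\alpha^{-2}$, ignoring the logarithmic factors hidden by the $\tilde{\mathcal{O}}(\cdot)$ notation and any absolute constants; the function names are my own, not from the paper.

```python
def hoeffding_samples(gamma: float, alpha: float) -> float:
    """Hoeffding-based bound: ~ (1 - gamma)^-4 * alpha^-2 samples
    per state-action pair (constants and log terms omitted)."""
    return (1 - gamma) ** -4 * alpha ** -2

def bernstein_samples(gamma: float, alpha: float) -> float:
    """Bernstein-based bound: ~ (1 - gamma)^-1 * alpha^-2 samples
    per state-action pair (constants and log terms omitted)."""
    return (1 - gamma) ** -1 * alpha ** -2

if __name__ == "__main__":
    gamma, alpha = 0.95, 0.1
    h = hoeffding_samples(gamma, alpha)
    b = bernstein_samples(gamma, alpha)
    # The Bernstein-based bound improves on the Hoeffding-based one
    # by a factor of (1 - gamma)^-3 in this leading-order comparison.
    print(f"Hoeffding ~ {h:.0f}, Bernstein ~ {b:.0f}, ratio = {h / b:.0f}")
```

For $\gamma = 0.95$ the effective-horizon factor $(1-\gamma)^{-1} = 20$, so the gap between the two bounds is $20^3 = 8000$, which shows why the Bernstein-based analysis matters for strongly discounted problems.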
Date: 2023
Downloads: http://link.springer.com/10.1007/s13235-023-00490-2 (abstract, text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:dyngam:v:13:y:2023:i:1:d:10.1007_s13235-023-00490-2
Ordering information: This journal article can be ordered from http://www.springer.com/economics/journal/13235
DOI: 10.1007/s13235-023-00490-2
Dynamic Games and Applications is currently edited by Georges Zaccour
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.