Multi-Agent Natural Actor-Critic Reinforcement Learning Algorithms
Prashant Trivedi and
Nandyala Hemachandra
Additional contact information
Prashant Trivedi: Industrial Engineering and Operations Research, IIT Bombay
Nandyala Hemachandra: Industrial Engineering and Operations Research, IIT Bombay
Dynamic Games and Applications, 2023, vol. 13, issue 1, No 3, 25-55
Abstract:
Multi-agent actor-critic algorithms are an important part of the Reinforcement Learning (RL) paradigm. In this work, we propose three fully decentralized multi-agent natural actor-critic (MAN) algorithms. The objective is to collectively find a joint policy that maximizes the average long-term return of the agents. In the absence of a central controller, and to preserve privacy, agents communicate some information to their neighbors via a time-varying communication network. We prove convergence of all three MAN algorithms, which use linear function approximations, to a globally asymptotically stable set of the ODE corresponding to the actor update. We show that the Kullback–Leibler divergence between the policies of successive iterates is proportional to the objective function's gradient. We observe that the minimum singular value of the Fisher information matrix is well within the reciprocal of the policy parameter dimension. Using this, we theoretically show that the optimal value of the deterministic variant of the MAN algorithm at each iterate dominates that of the standard gradient-based multi-agent actor-critic (MAAC) algorithm. To our knowledge, this is the first such result in multi-agent reinforcement learning (MARL). To illustrate the usefulness of the proposed algorithms, we implement them on a bi-lane traffic network to reduce average network congestion. We observe a nearly 25% reduction in average congestion with two of the MAN algorithms; the average congestion under the third MAN algorithm is on par with the MAAC algorithm. We also consider a generic 15-agent MARL setting; the performance of the MAN algorithms is again as good as that of the MAAC algorithm.
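To make the core update concrete, here is a minimal single-agent sketch of one natural-gradient actor step, assuming score vectors and advantage estimates are already available. The function name, the ridge regularizer, and the single-agent simplification are illustrative assumptions; the paper's MAN algorithms additionally run decentralized critic updates with consensus over the time-varying communication network, which this sketch omits.

```python
import numpy as np

def natural_actor_step(theta, scores, advantages, step_size=0.01, ridge=1e-3):
    """One natural-gradient actor step (illustrative, single-agent sketch).

    theta      : policy parameters, shape (d,)
    scores     : score vectors grad log pi_theta(a_t | s_t), shape (T, d)
    advantages : advantage estimates from a linear critic, shape (T,)
    """
    T = len(advantages)
    # Vanilla policy-gradient estimate: (1/T) * sum_t advantages[t] * scores[t]
    grad_J = scores.T @ advantages / T
    # Empirical Fisher information matrix G = E[score score^T]; a small
    # ridge term keeps it invertible (an illustrative assumption).
    G = scores.T @ scores / T + ridge * np.eye(theta.size)
    # Natural gradient premultiplies by G^{-1}: a quasi second-order step.
    return theta + step_size * np.linalg.solve(G, grad_J)
```

With such an update, the KL divergence between successive policies is approximately (step_size^2 / 2) * grad_J^T G^{-1} grad_J, which is the sense in which it tracks the objective function's gradient, as stated in the abstract.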
Keywords: Natural Gradients; Actor-Critic Methods; Networked Agents; Traffic Network Control; Stochastic Approximations; Function Approximations; Fisher Information Matrix; Non-Convex Optimization; Quasi second-order methods; Local optima value comparison; Algorithms for better local minima
Date: 2023
Downloads: http://link.springer.com/10.1007/s13235-022-00449-9 (abstract, text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:dyngam:v:13:y:2023:i:1:d:10.1007_s13235-022-00449-9
Ordering information: This journal article can be ordered from http://www.springer.com/economics/journal/13235
DOI: 10.1007/s13235-022-00449-9
Dynamic Games and Applications is currently edited by Georges Zaccour
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.