PPO-ACT: Proximal policy optimization with adversarial curriculum transfer for spatial public goods games
Zhaoqilin Yang,
Chanchan Li,
Xin Wang and
Youliang Tian
Chaos, Solitons & Fractals, 2025, vol. 199, issue P2
Abstract:
This study investigates cooperation evolution mechanisms in the spatial public goods game. A novel deep reinforcement learning framework, Proximal Policy Optimization with Adversarial Curriculum Transfer (PPO-ACT), is proposed to model agent strategy optimization in dynamic environments. Traditional evolutionary game models often exhibit limitations in modeling long-term decision-making processes. Imitation-based rules (e.g., the Fermi rule) lack strategic foresight, while tabular methods (e.g., Q-learning) fail to capture spatial–temporal correlations. Deep reinforcement learning effectively addresses these limitations by bridging policy gradient methods with evolutionary game theory. Our study pioneers the application of proximal policy optimization's continuous strategy optimization capability to public goods games through a two-stage adversarial curriculum transfer training paradigm. The experimental results show that PPO-ACT performs better in critical enhancement factor regimes. Compared with standard proximal policy optimization, Q-learning, and the Fermi update rule, PPO-ACT achieves earlier cooperation phase transitions and maintains stable cooperative equilibria. The framework also exhibits greater robustness in challenging scenarios such as all-defector initial conditions. Systematic comparisons reveal the unique advantage of policy gradient methods in population-scale cooperation, i.e., achieving spatiotemporal payoff coordination through value function propagation. Our work provides a new computational framework for studying cooperation emergence in complex systems, algorithmically validating the "punishment promotes cooperation" hypothesis while offering methodological insights for multi-agent system strategy design.
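For readers unfamiliar with the baseline dynamics the abstract refers to, the sketch below illustrates a standard spatial public goods game on a periodic lattice with the Fermi imitation rule, one of the update rules PPO-ACT is compared against. This is not the authors' code; the lattice size, neighborhood, enhancement factor r, cost c, and noise K are illustrative assumptions based on the conventional model formulation.

```python
# Minimal sketch (not the paper's implementation): spatial public goods game
# on a periodic L x L lattice with von Neumann neighborhoods (group size G = 5)
# and asynchronous Fermi imitation updates. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

L = 50      # lattice side length (assumed)
r = 4.0     # enhancement factor
c = 1.0     # cooperation cost
K = 0.1     # Fermi noise (selection intensity) parameter
G = 5       # group size: focal site plus its four nearest neighbors

# Strategy grid: 1 = cooperator, 0 = defector.
strategies = rng.integers(0, 2, size=(L, L))

def neighbors(x, y):
    """Von Neumann neighbors with periodic boundary conditions."""
    return [((x - 1) % L, y), ((x + 1) % L, y), (x, (y - 1) % L), (x, (y + 1) % L)]

def group_payoffs(s, x, y):
    """Payoffs in the group centered at (x, y): each member receives
    r * (total contribution) / G, minus cost c if it cooperated."""
    members = [(x, y)] + neighbors(x, y)
    share = r * c * sum(s[m] for m in members) / G
    return {m: share - c * s[m] for m in members}

def total_payoff(s, x, y):
    """A site takes part in the G groups centered on itself and its neighbors."""
    return sum(group_payoffs(s, gx, gy)[(x, y)] for gx, gy in [(x, y)] + neighbors(x, y))

def fermi_step(s):
    """One asynchronous update: a random site adopts a random neighbor's
    strategy with probability 1 / (1 + exp((pi_focal - pi_neighbor) / K))."""
    x, y = rng.integers(0, L, size=2)
    nx, ny = neighbors(x, y)[rng.integers(0, 4)]
    pi_focal, pi_neigh = total_payoff(s, x, y), total_payoff(s, nx, ny)
    if rng.random() < 1.0 / (1.0 + np.exp((pi_focal - pi_neigh) / K)):
        s[x, y] = s[nx, ny]

for _ in range(10_000):  # short illustrative run
    fermi_step(strategies)
print("cooperation fraction:", strategies.mean())
```

The cooperation fraction produced by such imitation dynamics depends strongly on the enhancement factor r, which is the regime in which the abstract reports PPO-ACT's advantage over Fermi and Q-learning baselines.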
Keywords: Public goods game; Deep reinforcement learning; Proximal policy optimization; Adversarial curriculum transfer
Date: 2025
Full text (ScienceDirect subscribers only): http://www.sciencedirect.com/science/article/pii/S0960077925007751
Persistent link: https://EconPapers.repec.org/RePEc:eee:chsofr:v:199:y:2025:i:p2:s0960077925007751
DOI: 10.1016/j.chaos.2025.116762