Exploring cooperative evolution with tunable payoff’s loners using reinforcement learning
Huizhen Zhang, Tianbo An, Pingping Yan, Kaipeng Hu, Jinjin An, Lijuan Shi, Jian Zhao and Jingrui Wang
Chaos, Solitons & Fractals, 2024, vol. 178, issue C
Abstract:
Imitation and replication have emerged as a paradigm in numerous studies exploring the evolution of cooperative behavior. Because they capture the essence of natural selection, these mechanisms are widely used to study the evolution of biological behavior. However, such simple and elegant rules cannot easily express how individuals select and optimize strategies in complex and variable interactive environments. Reinforcement learning is now widely used to model strategy-updating dynamics and agent learning processes in game theory. We therefore introduce the Q-learning algorithm into the voluntary public goods game to explore its impact on cooperative evolution. Simulation results demonstrate that when the synergy factor is large and the multiply factor that adjusts the loner's payoff is small, the number of cooperators gradually stabilizes. As the synergy factor increases, the evolution of the proportion of defectors becomes nonlinear. We further examine the Q-tables and strategy-updating processes of agents in the steady state under a small multiply factor, and find an inconsistency between the average Q-values and the steady-state strategy distribution of the population. We explain this inconsistency by analyzing strategy sequences: a number of agents in the population constantly switch strategies, and their Q-values affect the population-level averages. In addition, evolutionary snapshots of agent strategy sequences show that an agent's strategy selection is more unstable when the proportions of cooperators, defectors, and loners in the population are relatively balanced. Finally, we analyze the effect of the Q-learning parameters on cooperative behavior. This study aims to provide valuable insights into the dynamics of cooperation in complex social interactions.
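The abstract does not specify the model's exact formulation, so the following is only a minimal sketch of the setup it describes: self-regarding Q-learning agents choosing among cooperation, defection, and abstention in a voluntary public goods game with a synergy factor and a tunable loner payoff. All names and parameter values (group_payoffs, QAgent, cost, sigma, beta, alpha, gamma, epsilon) and the single-state Q-learning formulation are illustrative assumptions, not the authors' implementation.

    import numpy as np

    # Illustrative parameters (the paper's exact values are not given in the abstract):
    #   r     - synergy factor multiplying the common pool
    #   sigma - baseline loner payoff; beta is the "multiply factor" tuning it
    ACTIONS = ("C", "D", "L")  # cooperate, defect, abstain as a loner

    def group_payoffs(actions, r=4.0, cost=1.0, sigma=1.0, beta=0.5):
        """Payoffs for one voluntary public goods game in a single group.

        Cooperators pay `cost` into a pool that is multiplied by `r` and
        shared equally among all participants (cooperators and defectors);
        loners take the fixed payoff sigma * beta regardless of the outcome.
        """
        n_c = actions.count("C")
        participants = n_c + actions.count("D")
        payoffs = []
        for a in actions:
            if a == "L" or participants < 2:
                # Loners, and a lone would-be participant (the usual convention
                # when the game cannot take place), get the loner payoff.
                payoffs.append(sigma * beta)
            else:
                share = r * cost * n_c / participants
                payoffs.append(share - cost if a == "C" else share)
        return payoffs

    class QAgent:
        """Self-regarding Q-learner over C, D, and L (single-state assumption)."""

        def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.05, rng=None):
            self.q = np.zeros(len(ACTIONS))
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.rng = rng or np.random.default_rng()

        def act(self):
            if self.rng.random() < self.epsilon:  # explore
                return int(self.rng.integers(len(ACTIONS)))
            return int(np.argmax(self.q))         # exploit

        def learn(self, action, reward):
            # Standard Q-learning update; with one state, the bootstrap
            # term is the maximum Q-value over the three actions.
            target = reward + self.gamma * self.q.max()
            self.q[action] += self.alpha * (target - self.q[action])

    # One synchronous round among five agents (illustrative only)
    agents = [QAgent() for _ in range(5)]
    moves = [agent.act() for agent in agents]
    for agent, move, pay in zip(agents, moves,
                                group_payoffs([ACTIONS[m] for m in moves])):
        agent.learn(move, pay)

Under this sketch, population-level quantities such as the average Q-value per action can be read off the agents' q arrays; statistics of that kind are what the abstract's comparison between average Q-values and the steady-state strategy distribution refers to.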
Keywords: Public goods game; Self-regarding Q-learning; Human cooperation; Loner
Date: 2024
Citations: 1 (in EconPapers)
Downloads: http://www.sciencedirect.com/science/article/pii/S0960077923012602 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:chsofr:v:178:y:2024:i:c:s0960077923012602
DOI: 10.1016/j.chaos.2023.114358
Chaos, Solitons & Fractals is currently edited by Stefano Boccaletti and Stelios Bekiros