Proximal policy optimization approach to stabilize the chaotic food web system
Liang Xu, Ru-Ru Ma, Jie Wu and Pengchun Rao
Chaos, Solitons & Fractals, 2025, vol. 192, issue C
Abstract:
Chaos phenomena can be observed extensively in many real-world scenarios, and suppressing such undesired behaviors is usually challenging. Unlike traditional linear and nonlinear control methods, this study introduces a deep reinforcement learning (DRL)-based scheme to regulate a chaotic food web system (FWS). Specifically, we utilize the proximal policy optimization (PPO) algorithm to train the agent model, which does not require prior knowledge of the chaotic FWS. Experimental results demonstrate that the developed DRL-based control scheme can effectively guide the FWS toward a predetermined stable state. Furthermore, this investigation considers the influence of environmental noise on the chaotic FWS, and we find that incorporating noise during the training process enhances the controller's robustness and the system's adaptability.
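The abstract does not publish the governing equations or the training setup, so the following is a minimal, hypothetical sketch of the kind of environment a PPO agent would be trained on: a three-species chaotic food web (the classic Hastings–Powell food chain is assumed here as a stand-in), a scalar control input, a reward that penalizes distance from a chosen target state, and an optional observation-noise term corresponding to the paper's noise-in-training idea. The target state, control placement, and reward weights are all illustrative assumptions, not the authors' choices.

```python
import numpy as np

class ChaoticFoodWebEnv:
    """Gym-style environment for a controlled three-species food web.

    NOTE: the Hastings-Powell model and all parameter values below are
    assumptions for illustration; the paper's actual FWS may differ.
    """

    def __init__(self, target=(0.8, 0.15, 9.0),
                 noise_std=0.0, dt=0.01, horizon=500, seed=0):
        # classic chaotic parameter set for the Hastings-Powell chain
        self.a1, self.b1, self.a2, self.b2 = 5.0, 3.0, 0.1, 2.0
        self.d1, self.d2 = 0.4, 0.01
        self.target = np.asarray(target, dtype=float)  # hypothetical goal state
        self.noise_std = noise_std   # environmental noise injected in training
        self.dt, self.horizon = dt, horizon
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.t = 0
        # start near a point on the chaotic attractor, with small jitter
        self.state = np.array([0.7, 0.2, 8.0]) \
            + 0.05 * self.rng.standard_normal(3)
        return self.state.copy()

    def _deriv(self, s, u):
        # controlled Hastings-Powell dynamics; u acts on the top predator
        x, y, z = s
        f1 = self.a1 * x / (1.0 + self.b1 * x)  # prey -> consumer response
        f2 = self.a2 * y / (1.0 + self.b2 * y)  # consumer -> predator response
        dx = x * (1.0 - x) - f1 * y
        dy = f1 * y - f2 * z - self.d1 * y
        dz = f2 * z - self.d2 * z + u
        return np.array([dx, dy, dz])

    def step(self, action):
        u = float(np.clip(action, -1.0, 1.0))
        # one RK4 step of the controlled ODE
        s, h = self.state, self.dt
        k1 = self._deriv(s, u)
        k2 = self._deriv(s + 0.5 * h * k1, u)
        k3 = self._deriv(s + 0.5 * h * k2, u)
        k4 = self._deriv(s + h * k3, u)
        self.state = np.clip(s + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4), 0.0, 50.0)
        self.t += 1
        # reward: negative squared distance to target, minus control effort
        err = self.state - self.target
        reward = -float(err @ err) - 0.01 * u * u
        # noisy observation models environmental noise during training
        obs = self.state + self.noise_std * self.rng.standard_normal(3)
        done = self.t >= self.horizon
        return obs, reward, done, {}
```

With this interface, an off-the-shelf PPO implementation (e.g. one built on the Gymnasium API) can be trained against the environment, and setting `noise_std > 0` during training mirrors the abstract's finding that injected noise improves controller robustness.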
Keywords: Chaos; Stability; Deep reinforcement learning; Food web system; Proximal policy optimization algorithm (search for similar items in EconPapers)
Date: 2025
Downloads: http://www.sciencedirect.com/science/article/pii/S0960077925000463 (full text for ScienceDirect subscribers only)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:eee:chsofr:v:192:y:2025:i:c:s0960077925000463
DOI: 10.1016/j.chaos.2025.116033
Chaos, Solitons & Fractals is currently edited by Stefano Boccaletti and Stelios Bekiros
Bibliographic data for series maintained by Thayer, Thomas R.