Disturbance estimator-based reinforcement learning robust stabilization control for a class of chaotic systems
Keyi Li, Hongsheng Sha and Rongwei Guo
Chaos, Solitons & Fractals, 2025, vol. 198, issue C
Abstract:
In this study, a novel optimal control strategy is developed for the stabilization of a class of chaotic systems. The strategy is based on a positive gradient descent training mode and yields a critic-actor reinforcement learning (RL) algorithm, in which the critic network is used to approximate the nonlinear Hamilton–Jacobi–Bellman equation derived from the performance evaluation index function under model uncertainties. The optimal controller is obtained from the actor network, which incorporates a disturbance estimator (DE): an observer composed of specially designed filters that can accurately suppress specified external disturbances. The optimization process does not require persistent excitation (PE) of the input signals. A Lyapunov analysis is then provided to give a comprehensive assessment of system stability and optimal control performance. Finally, the efficacy of the proposed control approach is confirmed through simulation experiments.
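The abstract's disturbance-estimator idea (an observer built from filters that recovers an external disturbance so the controller can cancel it) can be illustrated with a minimal sketch. This is not the paper's algorithm: the scalar plant, the drift `f`, the gain `k`, and the simple feedback `u` are all assumed for illustration, and the RL critic-actor training is omitted. The sketch uses a standard filter-based construction in which the estimate `d_hat = k * (x - z)` obeys `d_hat' = k * (d - d_hat)`, i.e. it low-pass-filters the true disturbance and converges for constant or slowly varying `d`.

```python
# Hypothetical scalar plant  x' = f(x) + u + d  with unknown disturbance d.
# Filter-based disturbance estimator (illustrative, not the paper's design):
#   z' = f(x) + u + d_hat,    d_hat = k * (x - z)
# Subtracting gives  (x - z)' = d - d_hat, so d_hat' = k * (d - d_hat):
# the estimate tracks d with time constant 1/k.

def f(x):
    return -x + x**3 / 10.0   # illustrative nonlinear drift (assumed)

k, dt = 20.0, 1e-3            # estimator gain and Euler integration step
x, z = 0.5, 0.5               # plant state and filter state
d = 0.7                       # true constant disturbance, unknown to the estimator

for _ in range(5000):         # 5 s of simulated time
    d_hat = k * (x - z)       # disturbance estimate from the filter state
    u = -2.0 * x - d_hat      # stabilizing feedback plus disturbance cancellation
    fx = f(x)                 # evaluate drift once, before updating states
    x += dt * (fx + u + d)    # plant sees the true disturbance d
    z += dt * (fx + u + d_hat)  # filter sees its own estimate d_hat

print(round(k * (x - z), 3))  # prints 0.7
```

Because the estimation error dynamics are autonomous and contracting (each Euler step multiplies the error by `1 - k*dt`), the estimate settles to the true disturbance regardless of the plant's transient, which is why the cancellation term `-d_hat` in `u` becomes exact in steady state.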
Keywords: Chaotic systems; Optimal control; Reinforcement learning; Disturbance estimator
Date: 2025
Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S0960077925005600
Full text for ScienceDirect subscribers only
Persistent link: https://EconPapers.repec.org/RePEc:eee:chsofr:v:198:y:2025:i:c:s0960077925005600
DOI: 10.1016/j.chaos.2025.116547
Chaos, Solitons & Fractals is currently edited by Stefano Boccaletti and Stelios Bekiros
Bibliographic data for series maintained by Thayer, Thomas R. ().