Integral Reinforcement Learning-Based Online Adaptive Dynamic Event-Triggered Control Design in Mixed Zero-Sum Games for Unknown Nonlinear Systems
Yuling Liang,
Zhi Shao,
Hanguang Su,
Lei Liu and
Xiao Mao
Additional contact information
Yuling Liang: School of Artificial Intelligence, Shenyang University of Technology, Shenyang 110870, China
Zhi Shao: School of Artificial Intelligence, Shenyang University of Technology, Shenyang 110870, China
Hanguang Su: School of Information Science and Engineering, Northeastern University, Shenyang 110819, China
Lei Liu: School of Science, Liaoning University of Technology, Jinzhou 121000, China
Xiao Mao: School of Artificial Intelligence, Shenyang University of Technology, Shenyang 110870, China
Mathematics, 2024, vol. 12, issue 24, 1-29
Abstract:
Mixed zero-sum games consider zero-sum and non-zero-sum differential game problems simultaneously. In this paper, multiplayer mixed zero-sum games (MZSGs) are studied by means of an integral reinforcement learning (IRL) algorithm under a dynamic event-triggered control (DETC) mechanism for completely unknown nonlinear systems. First, an adaptive dynamic programming (ADP)-based on-policy approach is proposed for solving the MZSG problem for nonlinear systems with multiple players. Second, to avoid using the system's dynamic information, a model-free control strategy is developed that employs actor–critic neural networks (NNs) to address the MZSG problem for unknown systems. On this basis, to avoid wasting communication and computing resources, the dynamic event-triggered mechanism is integrated into the integral reinforcement learning algorithm, in which a dynamic triggering condition is designed to further reduce the number of triggering instants. With the help of the Lyapunov stability theorem, the system states and NN weight values are proven to be uniformly ultimately bounded (UUB). Finally, two examples demonstrate the effectiveness and feasibility of the developed control method. Compared with the static event-triggering mode, the simulation results show that the number of actuator updates under the DETC mechanism is reduced by 55% and 69% in the two examples, respectively.
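The core idea of dynamic event-triggered control — adding an internal dynamic variable that lets the trigger fire less often than a static threshold — can be illustrated with a minimal sketch. This is not the paper's triggering condition: the plant, gains (sigma, lam, theta), and the internal-variable dynamics below follow the generic dynamic-ETC construction and are assumptions chosen for illustration only.

```python
def simulate(dynamic, T=10.0, dt=0.001, sigma=0.3, lam=1.0, theta=5.0):
    """Count trigger events for a scalar plant x' = -x + u with a
    held control u = -2 * x_hat, where x_hat is the last sampled state.

    Static rule:  trigger when the margin  sigma*|x| - |e|  drops to 0.
    Dynamic rule: trigger when  eta + theta * margin <= 0, with the
    internal variable  eta' = -lam*eta + margin  (a common construction,
    hypothetical here), which delays events relative to the static rule.
    """
    x, x_hat, eta = 1.0, 1.0, 0.1     # state, held sample, internal variable
    triggers = 0
    for _ in range(int(T / dt)):
        x += dt * (-x - 2.0 * x_hat)          # Euler step of plant + held control
        err = abs(x_hat - x)                  # sampling error since last event
        gap = sigma * abs(x) - err            # static triggering margin
        fire = (eta + theta * gap <= 0.0) if dynamic else (gap <= 0.0)
        if fire:
            x_hat = x                         # event: resample the state
            triggers += 1
        if dynamic:
            # internal-variable dynamics, clamped at zero
            eta = max(0.0, eta + dt * (-lam * eta + gap))
    return triggers

static_count = simulate(dynamic=False)
dynamic_count = simulate(dynamic=True)
print(static_count, dynamic_count)
```

Because the dynamic condition only fires when the static margin is already negative by at least eta/theta, the dynamic rule produces fewer actuator updates over the same horizon, mirroring the 55% and 69% reductions reported in the abstract (the exact figures depend on the system and parameters).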
Keywords: dynamic event-triggered control; integral reinforcement learning; adaptive dynamic programming; adaptive critic design; mixed zero-sum games
JEL-codes: C
Date: 2024
Downloads:
https://www.mdpi.com/2227-7390/12/24/3916/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/24/3916/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:24:p:3916-:d:1542149
Mathematics is currently edited by Ms. Emma He