Deep Reinforcement Learning-Based Impact Angle-Constrained Adaptive Guidance Law
Zhe Hu,
Wenjun Yi and
Liang Xiao
Additional contact information
Zhe Hu: National Key Laboratory of Transient Physics, Nanjing University of Science and Technology, Nanjing 210094, China
Wenjun Yi: National Key Laboratory of Transient Physics, Nanjing University of Science and Technology, Nanjing 210094, China
Liang Xiao: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
Mathematics, 2025, vol. 13, issue 6, 1-26
Abstract:
This study presents an advanced second-order sliding-mode guidance law with a terminal impact angle constraint, which combines reinforcement learning algorithms with nonsingular terminal sliding-mode control (NTSM) theory. This hybrid approach effectively mitigates the chattering inherent in sliding-mode control while maintaining high control precision. We introduce an adaptive parameter into the super-twisting algorithm and develop an intelligent parameter-adaptation scheme based on the Twin-Delayed Deep Deterministic Policy Gradient (TD3) framework. During the guidance phase, a pre-trained reinforcement learning model directly maps the missile's state variables to the optimal adaptive parameters, thereby significantly enhancing guidance performance. Additionally, a generalized super-twisting extended state observer (GSTESO) is introduced to estimate and compensate the lumped uncertainty in the missile guidance system. This method removes the need for prior information about the target's maneuvers, enabling the proposed guidance law to intercept maneuvering targets with unknown acceleration. The finite-time stability of the closed-loop guidance system is proven using the Lyapunov stability criterion. Simulations demonstrate that the proposed guidance law not only meets a wide range of impact angle constraints but also attains higher interception accuracy, a faster convergence rate, and better overall performance than the traditional NTSM and super-twisting NTSM (ST-NTSM) guidance laws: the interception accuracy is less than 0.1 m, and the impact angle error is less than 0.01°.
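To make the control structure concrete, the super-twisting component of such a guidance law can be sketched as below. This is a minimal illustrative toy, not the paper's implementation: the sliding dynamics are a scalar first-order system, and the gains k1 and k2 (which the paper would supply adaptively via the trained TD3 policy) are fixed, arbitrarily chosen values here.

```python
import math

def super_twisting_step(s, v, k1, k2, dt):
    """One Euler-integration step of the super-twisting algorithm.

    s      : sliding variable
    v      : integral (second-order) state of the algorithm
    k1, k2 : positive gains; in the paper these would be produced
             adaptively by the trained TD3 policy (hypothetical here)
    Returns (u, v_next): control output and updated integral state.
    """
    sgn = (s > 0) - (s < 0)
    u = -k1 * math.sqrt(abs(s)) * sgn + v   # continuous sqrt term
    v_next = v - k2 * sgn * dt              # discontinuity hidden in the integral
    return u, v_next

# Drive a toy sliding dynamic s_dot = u + d toward zero despite a
# bounded, slowly varying matched disturbance d (unknown to the controller).
s, v, dt = 1.0, 0.0, 0.001
for i in range(20000):
    u, v = super_twisting_step(s, v, k1=1.5, k2=1.1, dt=dt)
    d = 0.2 * math.sin(0.001 * i)
    s += (u + d) * dt
```

Because the sign function acts only inside the integrator, the control signal u stays continuous, which is the mechanism by which the super-twisting structure suppresses chattering while still rejecting the bounded disturbance.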
Keywords: deep reinforcement learning; Lyapunov; guidance law; impact angle constraint
JEL-codes: C
Date: 2025
Downloads:
https://www.mdpi.com/2227-7390/13/6/987/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/6/987/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:6:p:987-:d:1614200
Mathematics is currently edited by Ms. Emma He