Synchronization in Fractional-Order Delayed Non-Autonomous Neural Networks
Dingping Wu,
Changyou Wang and
Tao Jiang
Additional contact information
Dingping Wu: Division of Mathematics, Sichuan University Jingjiang College, Meishan 620860, China
Changyou Wang: College of Applied Mathematics, Chengdu University of Information Technology, Chengdu 610225, China
Tao Jiang: School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu 611137, China
Mathematics, 2025, vol. 13, issue 7, 1-14
Abstract:
Neural networks, mimicking the structural and functional aspects of the human brain, have found widespread applications in diverse fields such as pattern recognition, control systems, and information processing. A critical phenomenon in these systems is synchronization, where multiple neurons or neural networks harmonize their dynamic behaviors to a common rhythm, contributing significantly to their efficient operation. However, the inherent complexity and nonlinearity of neural networks pose significant challenges in understanding and controlling this synchronization process. In this paper, we focus on the synchronization of a class of fractional-order, delayed, and non-autonomous neural networks. Fractional-order dynamics, characterized by their ability to capture memory effects and non-local interactions, introduce additional layers of complexity to the synchronization problem. Time delays, which are ubiquitous in real-world systems, further complicate the analysis by introducing temporal asynchrony among the neurons. To address these challenges, we propose a straightforward yet powerful global synchronization framework. Our approach leverages novel state feedback control to derive an analytical formula for the synchronization controller. This controller is designed to adjust the states of the neural networks in such a way that they converge to a common trajectory, achieving synchronization. To establish the asymptotic stability of the error system, which measures the deviation between the states of the neural networks, we construct a Lyapunov function. This function provides a scalar measure of the system’s energy, and by showing that this measure decreases over time, we demonstrate the stability of the synchronized state. Our analysis yields sufficient conditions that guarantee global synchronization in fractional-order neural networks with time delays and Caputo derivatives. These conditions provide a clear roadmap for designing neural networks that exhibit robust and stable synchronization properties. To validate our theoretical findings, we present numerical simulations that demonstrate the effectiveness of our proposed approach. The simulations show that, under the derived conditions, the neural networks successfully synchronize, confirming the practical applicability of our framework.
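As a point of reference for the abstract above, the following is a minimal sketch of the kind of drive-response setup described there: a fractional-order, delayed, non-autonomous Hopfield-type network with a Caputo derivative, a linear state feedback controller acting on the response system, and a Lyapunov candidate for the synchronization error. The specific network form, the gains k_i, and the Lipschitz assumptions below are illustrative and are not taken from the paper.

\begin{aligned}
{}^{C}\!D^{\alpha}_{t}\, x_i(t) &= -c_i(t)\, x_i(t) + \sum_{j=1}^{n} a_{ij}(t)\, f_j\big(x_j(t)\big) + \sum_{j=1}^{n} b_{ij}(t)\, g_j\big(x_j(t-\tau)\big) + I_i(t), && \text{(drive system)}\\
{}^{C}\!D^{\alpha}_{t}\, y_i(t) &= -c_i(t)\, y_i(t) + \sum_{j=1}^{n} a_{ij}(t)\, f_j\big(y_j(t)\big) + \sum_{j=1}^{n} b_{ij}(t)\, g_j\big(y_j(t-\tau)\big) + I_i(t) + u_i(t), && \text{(response system)}\\
e_i(t) &= y_i(t) - x_i(t), \qquad u_i(t) = -k_i\, e_i(t), \qquad V(t) = \sum_{i=1}^{n} \lvert e_i(t)\rvert. && \text{(error, controller, Lyapunov candidate)}
\end{aligned}

Here 0 < \alpha < 1 is the Caputo order, \tau > 0 is the transmission delay, and the activation functions f_j, g_j are assumed Lipschitz. In setups of this type, sufficient conditions for global synchronization typically require the feedback gains k_i, together with the self-decay rates c_i(t), to dominate the instantaneous and delayed couplings uniformly in t; this is the general flavor of the conditions derived in the paper, not their exact statement.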
Keywords: synchronization; delayed; Caputo derivative; neural network; non-autonomous
JEL-codes: C
Date: 2025
Downloads:
https://www.mdpi.com/2227-7390/13/7/1048/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/7/1048/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:7:p:1048-:d:1618971