EconPapers

Prescribed convergence analysis of recurrent neural networks with parameter variations

Gang Bao and Zhigang Zeng

Mathematics and Computers in Simulation (MATCOM), 2021, vol. 182, issue C, 858-870

Abstract: Recurrent neural networks are designed to converge to a desired equilibrium point in their applications. Parameter variations, however, drive the network states toward different points. This paper therefore studies the prescribed convergence problem of recurrent neural networks with parameter variations. First, we derive principles describing how the equilibrium point of a recurrent neural network shifts when its parameters change. Then we design a tracking controller that makes the network converge to the prescribed equilibrium when the parameter variations are known. Next, we present an adaptive controller that steers the network states to the desired equilibrium when the parameter variations are unknown. Finally, two examples are given to validate the presented methods.
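
For orientation, the following sketch illustrates the setting with a generic Hopfield-type recurrent network; the model, the perturbation terms and the control input c(t) below are illustrative assumptions and are not taken from the paper itself:

\dot{x}(t) = -D\,x(t) + A\,f(x(t)) + u, \qquad D\,x^{*} = A\,f(x^{*}) + u \quad \text{(prescribed equilibrium } x^{*}\text{)}

\dot{x}(t) = -D\,x(t) + (A+\Delta A)\,f(x(t)) + (u+\Delta u) + c(t) \quad \text{(perturbed network with control input)}

In this reading, a tracking control law c(t) can be designed when the variations \Delta A and \Delta u are known, while an adaptive law handles the case where they are unknown, so that the controlled network still converges to the prescribed equilibrium x^{*} rather than to the shifted equilibrium induced by the parameter variation.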

Keywords: Convergence; Stability; Recurrent neural network; Equilibrium point
Date: 2021

Downloads: http://www.sciencedirect.com/science/article/pii/S0378475420304651
Full text for ScienceDirect subscribers only

Persistent link: https://EconPapers.repec.org/RePEc:eee:matcom:v:182:y:2021:i:c:p:858-870

DOI: 10.1016/j.matcom.2020.12.010

Mathematics and Computers in Simulation (MATCOM) is currently edited by Robert Beauwens

More articles in Mathematics and Computers in Simulation (MATCOM) from Elsevier
Bibliographic data for series maintained by Catherine Liu.

Handle: RePEc:eee:matcom:v:182:y:2021:i:c:p:858-870