Convergence of an Online Split-Complex Gradient Algorithm for Complex-Valued Neural Networks

Huisheng Zhang, Dongpo Xu and Zhiping Wang

Discrete Dynamics in Nature and Society, 2010, vol. 2010, 1-27

Abstract:

The online gradient method is widely used to train neural networks. In this paper we consider an online split-complex gradient algorithm for complex-valued neural networks, with an adaptive learning rate chosen during training. Under certain conditions, we first establish the monotonicity of the error function and then prove that its gradient tends to zero and that the weight sequence converges to a fixed point. A numerical example supports the theoretical findings.
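The split-complex approach described in the abstract applies a real-valued activation separately to the real and imaginary parts of a neuron's net input, and updates the real and imaginary parts of each weight with their own partial derivatives. The sketch below illustrates this for a single complex neuron under a squared-error loss; the activation (tanh), the sample data, and the diminishing learning-rate schedule are illustrative stand-ins, not the paper's own adaptive rule.

```python
import numpy as np

def split_tanh(z):
    """Split-complex activation: a real tanh applied separately to the
    real and imaginary parts of the complex argument."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def online_step(w, x, d, eta):
    """One online split-complex gradient update for a single complex
    neuron y = split_tanh(w @ x) under the error E = 0.5*|d - y|^2.
    The real and imaginary weight parts get independent partial
    derivatives; the two are packed back into one complex gradient."""
    z = w @ x
    y = split_tanh(z)
    e = d - y
    # Channel-wise error times tanh' on each split channel
    u = e.real * (1.0 - np.tanh(z.real) ** 2)
    v = e.imag * (1.0 - np.tanh(z.imag) ** 2)
    # Split-complex gradient of E, written compactly in complex form:
    # dE/dw_R + i*dE/dw_I = -(u + i*v) * conj(x)
    grad = -(u + 1j * v) * np.conj(x)
    w_new = w - eta * grad
    return w_new, 0.5 * np.abs(e) ** 2

# Toy online run on a single repeated sample (illustrative data only)
x = np.array([0.3 + 0.2j, -0.1 + 0.4j])
d = 0.5 - 0.3j
w = np.zeros(2, dtype=complex)
errs = []
for t in range(200):
    # Stand-in diminishing learning rate; the paper's adaptive rule
    # is not reproduced here.
    eta = 0.5 / (1.0 + 0.01 * t)
    w, err = online_step(w, x, d, eta)
    errs.append(err)
```

Consistent with the convergence result stated in the abstract, the recorded error sequence is monotonically decreasing in this toy run and the gradient shrinks toward zero as the weights approach a fixed point.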

Date: 2010

Downloads: (external link)
http://downloads.hindawi.com/journals/DDNS/2010/829692.pdf (application/pdf)
http://downloads.hindawi.com/journals/DDNS/2010/829692.xml (text/xml)



Persistent link: https://EconPapers.repec.org/RePEc:hin:jnddns:829692

DOI: 10.1155/2010/829692


More articles in Discrete Dynamics in Nature and Society from Hindawi
