Convergence analysis of Chauvin’s PCA learning algorithm with a constant learning rate

Jian Cheng Lv and Zhang Yi

Chaos, Solitons & Fractals, 2007, vol. 32, issue 4, 1562-1571

Abstract: The convergence of Chauvin’s PCA learning algorithm with a constant learning rate is studied in this paper using the DDT (deterministic discrete-time system) method. Unlike the DCT (deterministic continuous-time system) method, the DDT method does not require the learning rate to converge to zero. An invariant set of Chauvin’s algorithm with a constant learning rate is obtained, which guarantees that the algorithm does not diverge. Rigorous mathematical proofs of the local convergence of the algorithm are provided.
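
The paper’s exact update rule for Chauvin’s algorithm is not reproduced in this record. As a rough sketch of the DDT idea only, the Python fragment below applies the same constant-learning-rate, deterministic discrete-time treatment to Oja’s single-unit Hebbian PCA rule, used here as a clearly labelled stand-in: the stochastic input term is replaced by a fixed covariance matrix C and the deterministic recursion w(k+1) = w(k) + eta*(C w(k) - (w(k)^T C w(k)) w(k)) is iterated directly. The function and parameter names (ddt_pca, eta, n_iter) are illustrative assumptions, not taken from the paper.

    import numpy as np

    def ddt_pca(C, w0, eta=0.05, n_iter=2000):
        """Deterministic discrete-time (DDT) iteration of Oja's single-unit
        PCA rule with a constant learning rate eta. This is a stand-in
        illustration of the DDT approach, not Chauvin's exact rule."""
        w = np.asarray(w0, dtype=float)
        for _ in range(n_iter):
            Cw = C @ w
            # w(k+1) = w(k) + eta * (C w(k) - (w(k)^T C w(k)) w(k))
            w = w + eta * (Cw - (w @ Cw) * w)
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic data whose covariance has one dominant direction.
        X = rng.standard_normal((500, 3)) @ np.diag([3.0, 1.0, 0.5])
        C = np.cov(X, rowvar=False)
        w0 = rng.standard_normal(3)
        w0 /= np.linalg.norm(w0)
        w = ddt_pca(C, w0)
        top = np.linalg.eigh(C)[1][:, -1]   # principal eigenvector of C
        # For a suitably small constant eta the iterates stay bounded and
        # align with the principal eigenvector of C (up to sign).
        print(abs(w @ top) / np.linalg.norm(w))

With a constant learning rate the DDT recursion is analysed directly, without requiring eta to decay to zero; boundedness of the iterates plays the role of the invariant set discussed in the abstract.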

Date: 2007

Full text (ScienceDirect subscribers only): http://www.sciencedirect.com/science/article/pii/S0960077905012051


Persistent link: https://EconPapers.repec.org/RePEc:eee:chsofr:v:32:y:2007:i:4:p:1562-1571

DOI: 10.1016/j.chaos.2005.12.007

Chaos, Solitons & Fractals is currently edited by Stefano Boccaletti and Stelios Bekiros

Handle: RePEc:eee:chsofr:v:32:y:2007:i:4:p:1562-1571