Chaos time-series prediction based on an improved recursive Levenberg–Marquardt algorithm
Xiancheng Shi, Yucheng Feng, Jinsong Zeng and Kefu Chen
Chaos, Solitons & Fractals, 2017, vol. 100, issue C, 57-61
Abstract:
An improved recursive Levenberg–Marquardt (RLM) algorithm is proposed to train neural networks more efficiently. The error criterion of the RLM algorithm was modified to reduce the impact of the forgetting factor on the convergence of the algorithm. The remedy used to apply the matrix inversion lemma in the RLM algorithm was extended from one row to multiple rows to improve the success rate of convergence, and the adjustment strategy was then modified based on this extended remedy. Finally, the performance of the algorithm was tested on two chaotic systems. The results show improved convergence.
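As an illustration of the general idea only (not the authors' exact modified error criterion or adjustment strategy), the sketch below shows a generic recursive Levenberg–Marquardt-style parameter update with a forgetting factor, in which the damping term mu*I is injected through extra identity columns so that the matrix inversion lemma can be applied. The function name, argument names, and the choice of how many damped columns to use per step are assumptions made for this example.

import numpy as np

def rlm_step(w, P, jac, err, lam=0.99, mu=1e-3, damp_idx=0, n_damp=1):
    # w        : parameter vector, shape (n,)
    # P        : running inverse of the damped Gauss-Newton Hessian, shape (n, n)
    # jac      : gradient of the network output w.r.t. w for the current sample, shape (n,)
    # err      : scalar prediction error (target - output)
    # lam      : forgetting factor, 0 < lam <= 1
    # mu       : Levenberg-Marquardt damping parameter
    # damp_idx : index of the first identity column used for the damping "remedy"
    # n_damp   : number of identity columns damped in this step
    n = w.size
    # Identity columns through which the mu*I damping is injected; cycling
    # damp_idx over 0..n-1 spreads the damping across all parameters over time.
    idx = [(damp_idx + i) % n for i in range(n_damp)]
    E = np.eye(n)[:, idx]                             # shape (n, n_damp)
    Omega = np.column_stack([jac, np.sqrt(mu) * E])   # augmented regressor block
    # Matrix inversion lemma: P <- (1/lam) * (P - P Omega S^-1 Omega^T P)
    S = lam * np.eye(Omega.shape[1]) + Omega.T @ P @ Omega
    P = (P - P @ Omega @ np.linalg.solve(S, Omega.T @ P)) / lam
    # Gauss-Newton-style parameter update
    w = w + P @ jac * err
    return w, P

Raising n_damp above one (so that several identity columns are damped in a single step) corresponds loosely to the extension of the remedy from one row to multiple rows described in the abstract; the precise scheduling and the modified error criterion of the paper are not reproduced here.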
Keywords: Recursive algorithm; Levenberg–Marquardt; On-line learning; Neural networks
Date: 2017
Downloads: http://www.sciencedirect.com/science/article/pii/S0960077917301686
Full text for ScienceDirect subscribers only
Persistent link: https://EconPapers.repec.org/RePEc:eee:chsofr:v:100:y:2017:i:c:p:57-61
DOI: 10.1016/j.chaos.2017.04.032
Chaos, Solitons & Fractals is currently edited by Stefano Boccaletti and Stelios Bekiros