Application of modular and sparse complex networks in enhancing connectivity patterns of liquid state machines
Farideh Motaghian, Soheila Nazari, Reza Jafari and Juan P. Dominguez-Morales
Chaos, Solitons & Fractals, 2025, vol. 191, issue C
Abstract:
Different neurons in biological brains can self-organize into distinct neural circuits that support a range of cognitive activities. Spiking neural networks (SNNs), which offer greater biological plausibility and processing capacity than traditional neural networks, are one avenue of investigation for brain-like computing. The liquid state machine (LSM) is a neural computational model with a recurrent network structure based on SNNs. This research proposes a novel LSM structure in which the output layer comprises classification pyramid neurons, the intermediate layer is the liquid layer, and the input layer is generated from a retina model. The liquid layer is modeled as a modular complex network whose number of clusters corresponds to the number of hidden patterns in the data, which increases classification accuracy. Because this network is sparse, computational time is reduced and the network learns faster than a fully connected one. Using this concept, the interior of the liquid layer of the LSM is organized into clusters rather than wired with random connections as in other studies. Subsequently, an unsupervised Power-Spike Time Dependent Plasticity (Pow-STDP) learning rule optimizes the synaptic connections between the liquid and output layers. The proposed LSM was evaluated against deep and spiking classification networks on three challenging datasets: MNIST, CIFAR-10, and CIFAR-100, reaching accuracies of 98.1% (6 training epochs), 95.4% (6 training epochs), and 75.52% (20 training epochs), respectively, improving on previous spiking networks. The proposed network not only achieves higher accuracy than earlier spike-based learning techniques but also converges faster during training.
The benefits of the proposed network include unsupervised learning, minimal power consumption when deployed on neuromorphic hardware, higher classification accuracy, and fewer training epochs (faster training).
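To make the two main ideas in the abstract concrete, the sketch below builds a modular, sparse liquid-layer connectivity matrix (dense within clusters, sparse between them) and applies a power-law-weighted STDP update. All parameters here (cluster count, cluster size, connection probabilities, and the exact form of the Pow-STDP rule) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Modular, sparse liquid layer (illustrative parameters) ---
n_clusters = 10           # e.g. one cluster per hidden pattern class
neurons_per_cluster = 50
n = n_clusters * neurons_per_cluster
p_intra, p_inter = 0.30, 0.01   # dense within clusters, sparse between

labels = np.repeat(np.arange(n_clusters), neurons_per_cluster)
same_cluster = labels[:, None] == labels[None, :]
prob = np.where(same_cluster, p_intra, p_inter)
adj = (rng.random((n, n)) < prob).astype(float)
np.fill_diagonal(adj, 0.0)      # no self-connections

sparsity = adj.sum() / (n * (n - 1))
print(f"overall connection density: {sparsity:.3f}")  # far below fully connected

# --- Power-law STDP update (hypothetical form of Pow-STDP) ---
# A standard pair-based STDP kernel with a power-law weight dependence;
# the exact rule used in the paper may differ.
def pow_stdp(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, mu=0.5, w_max=1.0):
    """dt = t_post - t_pre (ms); returns the updated synaptic weight."""
    if dt > 0:   # pre fires before post -> potentiation
        w = w + a_plus * (w_max - w) ** mu * np.exp(-dt / tau)
    else:        # post fires before pre -> depression
        w = w - a_minus * w ** mu * np.exp(dt / tau)
    return float(np.clip(w, 0.0, w_max))

w = 0.5
w = pow_stdp(w, dt=5.0)    # causal pairing strengthens the synapse
print(f"weight after potentiation: {w:.4f}")
```

With these assumed probabilities the overall density stays below 4%, which is the sparsity argument in the abstract: far fewer synapses to simulate and update than in a fully connected liquid of the same size.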
Keywords: Spiking neural network; Community detection; Liquid state machine; Pattern classification; Unsupervised learning
Date: 2025
Full text (ScienceDirect subscribers only): http://www.sciencedirect.com/science/article/pii/S0960077924014929
Persistent link: https://EconPapers.repec.org/RePEc:eee:chsofr:v:191:y:2025:i:c:s0960077924014929
DOI: 10.1016/j.chaos.2024.115940
Chaos, Solitons & Fractals is currently edited by Stefano Boccaletti and Stelios Bekiros