Efficient Training of Deep Spiking Neural Networks Using a Modified Learning Rate Scheduler
Sung-Hyun Cha and
Dong-Sun Kim
Additional contact information
Sung-Hyun Cha: Department of Semiconductor Systems Engineering, Sejong University, Seoul 05006, Republic of Korea
Dong-Sun Kim: Department of Semiconductor Systems Engineering, Sejong University, Seoul 05006, Republic of Korea
Mathematics, 2025, vol. 13, issue 8, 1-16
Abstract:
Deep neural networks (DNNs) have achieved high accuracy across a wide range of applications, but as AI grows rapidly and datasets scale in size and complexity, their computational cost and power consumption have become significant challenges. Spiking neural networks (SNNs), inspired by biological neurons, offer an energy-efficient alternative through spike-based information processing. Training SNNs remains difficult, however, because their activation function is non-differentiable and deep architectures are hard to construct. This study addresses these issues by integrating DNN-style backpropagation into SNNs through a supervised learning approach. A surrogate gradient based on the arctangent function approximates the non-differentiable activation function, enabling stable gradient-based learning. The study also examines the interplay between the spatial domain (layer-wise propagation) and the temporal domain (time steps), applying the chain rule so that gradients propagate correctly across both. In addition, mini-batch training, Adam optimization, and layer normalization are incorporated to improve training efficiency and mitigate vanishing gradients, and a softmax-based probability representation with a cross-entropy loss function optimizes classification performance. Building on these techniques, a deep SNN with a modified learning rate scheduler is designed to converge to the optimal point faster than other models in the early stages of training. The proposed learning method allows deep SNNs to achieve competitive accuracy while maintaining their inherent low-power characteristics, making SNNs more practical for machine learning applications by combining the advantages of deep learning and biologically inspired computing. In summary, this study analyzes and adapts deep learning techniques, such as dropout, layer normalization, mini-batch training, and Adam optimization, to the spiking domain, and proposes a novel learning rate scheduler that enables faster convergence during early training with fewer epochs.
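The arctangent surrogate gradient named in the abstract is a standard technique for training SNNs with backpropagation: the forward pass keeps the hard, non-differentiable spike, while the backward pass substitutes the derivative of a smooth arctangent curve. The sketch below is a minimal PyTorch illustration of that idea, not the paper's implementation; the sharpness parameter alpha = 2.0 and the zero threshold are assumed values for the example.

    import torch

    class ATanSpike(torch.autograd.Function):
        # Heaviside spike in the forward pass, arctangent surrogate in the backward pass.
        alpha = 2.0  # surrogate sharpness (assumed for illustration; not taken from the paper)

        @staticmethod
        def forward(ctx, v):
            ctx.save_for_backward(v)
            return (v >= 0.0).float()  # fire a spike when the membrane potential crosses the threshold

        @staticmethod
        def backward(ctx, grad_output):
            (v,) = ctx.saved_tensors
            # Derivative of (1/pi) * arctan(pi * alpha * v / 2) + 1/2,
            # a smooth stand-in for the step function's gradient.
            sg = ATanSpike.alpha / (2.0 * (1.0 + (torch.pi * ATanSpike.alpha * v / 2.0) ** 2))
            return grad_output * sg

A spiking layer would call ATanSpike.apply(v - v_threshold) on its membrane potential at each time step, so that gradients can flow through the otherwise non-differentiable firing decision in both the spatial (layer-wise) and temporal (time-step) directions.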
Keywords: spiking neural networks; deep learning; learning rate scheduler; gradient descent; neuromorphic
JEL-codes: C
Date: 2025
Downloads:
https://www.mdpi.com/2227-7390/13/8/1361/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/8/1361/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:8:p:1361-:d:1639382