SSTMNet: Spectral-Spatio-Temporal and Multiscale Deep Network for EEG-Based Motor Imagery Classification
Albandari Alotaibi,
Muhammad Hussain and
Hatim Aboalsamh
Additional contact information
Albandari Alotaibi: Department of Computer Science, King Saud University, Riyadh 11421, Saudi Arabia
Muhammad Hussain: Department of Computer Science, King Saud University, Riyadh 11421, Saudi Arabia
Hatim Aboalsamh: Department of Computer Science, King Saud University, Riyadh 11421, Saudi Arabia
Mathematics, 2025, vol. 13, issue 4, 1-33
Abstract:
Motor impairment is a critical health issue that restricts disabled people from living their lives normally and comfortably. Detecting motor imagery (MI) in electroencephalography (EEG) signals can make their lives easier. There has been a lot of work on detecting two or four different MI movements, including bilateral, contralateral, and unilateral upper limb movements. However, there is little research on the more challenging problem of detecting more than four motor imagery tasks and unilateral lower limb movements. As a solution to this problem, a spectral-spatio-temporal multiscale network (SSTMNet) has been introduced to detect six imagery tasks. It first performs a spectral analysis of an EEG trial and attends to the salient brain waves (rhythms) using an attention mechanism. Then, temporal dependencies across the entire EEG trial are captured by a temporal dependency block, resulting in spectral-spatio-temporal features, which are passed to a multiscale block to learn multiscale spectral-spatio-temporal features. Finally, these features are deeply analyzed by a sequential block to extract high-level features, which are used to detect an MI task. In addition, to deal with the small number of trials available for each MI task, the researchers introduce a data augmentation technique based on the Fourier transform, which generates new EEG trials from EEG signals belonging to the same class in the frequency domain; the idea is that coefficients of the same frequencies are fused, ensuring label-preserving trials. SSTMNet is thoroughly evaluated on a public-domain benchmark dataset; it achieves an accuracy of 77.52% and an F1-score of 56.19%. t-SNE plots, confusion matrices, and ROC curves are presented to show the effectiveness of SSTMNet. Furthermore, when it is trained on augmented data generated by the proposed data augmentation method, it achieves better performance, which validates the effectiveness of the proposed technique. The results indicate that its performance is comparable with state-of-the-art methods. An analysis of the features learned by the model reveals that the block architectural design aids the model in distinguishing between the multiple imagery tasks.
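The augmentation is described only at the level of the abstract. The sketch below illustrates the general idea in NumPy: two same-class trials are mapped to the frequency domain, their coefficients are fused frequency by frequency, and the mixture is mapped back to the time domain. The fusion rule (a convex combination with a jittered weight), the trial dimensions, and the function name are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fourier_mix_augment(trial_a, trial_b, alpha=0.5, rng=None):
    """Generate a synthetic EEG trial from two trials of the same MI class.

    Both trials have shape (channels, samples). They are transformed to the
    frequency domain, their coefficients are fused per frequency bin (here an
    assumed convex combination), and the result is transformed back to the
    time domain. Since both inputs share one class label, the new trial keeps it.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Real FFT along the time axis: one complex coefficient per frequency bin.
    spec_a = np.fft.rfft(trial_a, axis=-1)
    spec_b = np.fft.rfft(trial_b, axis=-1)
    # Fuse coefficients of the same frequencies (assumed: jittered convex mix).
    w = np.clip(alpha + rng.normal(scale=0.1), 0.0, 1.0)
    spec_mix = w * spec_a + (1.0 - w) * spec_b
    # Back to the time domain, keeping the original trial length.
    return np.fft.irfft(spec_mix, n=trial_a.shape[-1], axis=-1)

# Usage: two hypothetical 22-channel, 1000-sample trials from the same class.
x1 = np.random.randn(22, 1000)
x2 = np.random.randn(22, 1000)
x_new = fourier_mix_augment(x1, x2)   # shape (22, 1000), same class label
```

Because both inputs carry the same class label, the generated trial can be added to the training set under that label, which is the label-preserving property referred to above.

The block sequence named in the abstract (spectral attention over rhythms, a temporal dependency block, a multiscale block, and a sequential classification block) can likewise be pictured as a high-level skeleton. The PyTorch sketch below is an assumed arrangement based only on the block names and the GRU mentioned in the keywords; every layer size and every operation inside each block is hypothetical, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SSTMNetSketch(nn.Module):
    """Assumed block sequence: spectral attention -> temporal dependency (GRU)
    -> multiscale convolutions -> sequential classifier head."""

    def __init__(self, n_channels=22, n_bands=5, n_classes=6, hidden=64):
        super().__init__()
        # Spectral attention: one learned weight per rhythm/band (assumed form).
        self.band_attention = nn.Sequential(nn.Linear(n_bands, n_bands), nn.Softmax(dim=-1))
        # Temporal dependencies across the trial via a GRU (GRU appears in the keywords).
        self.gru = nn.GRU(input_size=n_channels, hidden_size=hidden, batch_first=True)
        # Multiscale block: parallel convolutions with different kernel sizes (assumed).
        self.multiscale = nn.ModuleList(
            [nn.Conv1d(hidden, hidden, k, padding=k // 2) for k in (3, 7, 15)]
        )
        # Sequential block: pooling and a linear classifier over the six MI tasks.
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(3 * hidden, n_classes)
        )

    def forward(self, x_bands):
        # x_bands: (batch, bands, channels, time), i.e., a trial split into rhythms.
        w = self.band_attention(x_bands.mean(dim=(2, 3)))        # (batch, bands)
        x = (x_bands * w[:, :, None, None]).sum(dim=1)           # (batch, channels, time)
        h, _ = self.gru(x.transpose(1, 2))                       # (batch, time, hidden)
        h = h.transpose(1, 2)                                    # (batch, hidden, time)
        m = torch.cat([conv(h) for conv in self.multiscale], 1)  # (batch, 3*hidden, time)
        return self.head(m)                                      # (batch, n_classes)
```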
Keywords: brain–computer interface (BCI); EEG brain signal; motor imagery (MI); attention; multiscale; deep learning; convolutional neural network; Gated Recurrent Unit (GRU)
JEL-codes: C
Date: 2025
Downloads: (external link)
https://www.mdpi.com/2227-7390/13/4/585/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/4/585/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:4:p:585-:d:1587967