EconPapers    

Multi-Channel EEG Emotion Recognition Based on Parallel Transformer and 3D-Convolutional Neural Network

Jie Sun, Xuan Wang, Kun Zhao, Siyuan Hao and Tianyu Wang
Additional contact information
Jie Sun: School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266033, China
Xuan Wang: School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266033, China
Kun Zhao: School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266033, China
Siyuan Hao: School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266033, China
Tianyu Wang: School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266033, China

Mathematics, 2022, vol. 10, issue 17, 1-15

Abstract: Owing to its covert and real-time properties, electroencephalography (EEG) has long been a preferred medium for emotion recognition research. Current EEG-based emotion recognition methods exploit temporal, spatial, or spatiotemporal characteristics of the EEG signal. Methods that rely on only spatial or only temporal features achieve low accuracy because they ignore the other dimension of the data. Methods that use spatiotemporal features do consider both dimensions, but they extract temporal and spatial information directly from the raw EEG; without reconstructing the data format, these properties cannot be extracted efficiently. To address these issues, this paper proposes a multi-channel EEG emotion recognition model based on a parallel transformer and a three-dimensional convolutional neural network (3D-CNN). First, parallel-channel EEG data and position-reconstructed EEG sequence data are created separately. The temporal and spatial characteristics of the EEG are then extracted with the transformer and 3D-CNN branches, respectively. Finally, the features of the two parallel modules are fused to form the final features for emotion recognition. On the DEAP, DREAMER, and SEED databases, the method achieved higher emotion recognition accuracy than competing methods, demonstrating the effectiveness of the proposed approach.
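The position-reconstruction step the abstract describes can be sketched as follows: each EEG channel is placed at its electrode's position on a 2D scalp grid for every time step, turning a (channels x samples) segment into a (samples x height x width) volume that a 3D-CNN can consume. This is a minimal illustrative sketch, not the paper's implementation; the 3x3 grid, the four-channel mapping, and the 128 Hz segment length are hypothetical placeholders (DEAP data are commonly downsampled to 128 Hz, but the paper's actual electrode layout and grid size are not reproduced here).

```python
import numpy as np

# Hypothetical channel -> (row, col) placement on a 3x3 scalp grid.
# A real model would map all electrodes (e.g. DEAP's 32) onto a larger grid.
GRID = {
    0: (0, 1),  # e.g. a frontal electrode
    1: (1, 0),  # e.g. a left temporal electrode
    2: (1, 2),  # e.g. a right temporal electrode
    3: (2, 1),  # e.g. an occipital electrode
}

def to_spatial_volume(eeg, grid=GRID, shape=(3, 3)):
    """Reshape (n_channels, n_samples) EEG into a (n_samples, H, W) volume.

    Grid cells with no electrode stay zero, preserving the relative
    spatial positions of the channels for the 3D-CNN branch.
    """
    n_channels, n_samples = eeg.shape
    vol = np.zeros((n_samples, *shape), dtype=eeg.dtype)
    for ch, (r, c) in grid.items():
        vol[:, r, c] = eeg[ch]  # copy the channel's time series into its cell
    return vol

eeg = np.random.randn(4, 128)  # 4 channels, 1 s at an assumed 128 Hz
vol = to_spatial_volume(eeg)
print(vol.shape)  # (128, 3, 3)
```

The transformer branch, by contrast, would consume the untransformed parallel-channel sequence directly; the two branches' feature vectors are then concatenated for classification.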

Keywords: EEG; transformer; 3D-CNN; feature fusion
JEL-codes: C
Date: 2022

Downloads: (external link)
https://www.mdpi.com/2227-7390/10/17/3131/pdf (application/pdf)
https://www.mdpi.com/2227-7390/10/17/3131/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:10:y:2022:i:17:p:3131-:d:903572


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-03-19
Handle: RePEc:gam:jmathe:v:10:y:2022:i:17:p:3131-:d:903572