Dual channel semantic enhancement-based convolutional neural networks model for text classification

Kangqi Zhang, Xiaoyang Liu, Na Zhao, Shan Liu and Chaorong Li
Additional contact information
Kangqi Zhang: School of Computer Science and Engineering, Chongqing University of Technology, Chongqing 400054, P. R. China
Xiaoyang Liu: School of Computer Science and Engineering, Chongqing University of Technology, Chongqing 400054, P. R. China
Na Zhao: National Pilot School of Software, Yunnan University, Kunming 650500, P. R. China
Shan Liu: Department of Intelligent Science, School of Data Science and Media Intelligence, Communication University of China, Beijing 100024, P. R. China
Chaorong Li: School of Computer Science and Technology (School of Artificial Intelligence), Yibin University, Yibin 644000, P. R. China

International Journal of Modern Physics C (IJMPC), 2025, vol. 36, issue 10, 1-27

Abstract: Text classification is an essential research topic in natural language processing. Existing models often fail to capture long-range semantic information and generalize poorly. To overcome these shortcomings, this paper introduces a dual-channel convolutional module with weighted attention and a semantic enhancement module for textual features, and proposes the Dual-channel Semantic enhancement-based Convolutional Neural Network text classification model (DcSeCNN). First, Atlas convolution is combined with TextCNN convolution to form a dual-channel convolution that captures both local and global semantic information of sentence-level text. Second, to better exploit the textual information flow, a weighted average attention module enhances and weights the features of the two channels. Finally, the original text vectors are semantically enhanced with the original discourse and fused into the dual-channel enhanced feature maps, with the model tuned via the learning-rate decay coefficient λ and the dropout parameter. Extensive comparative experiments against seven baseline models on six datasets (MR, R8, R52, TREC, IMDBR and THUCNews) show that the scheme improves semantic information extraction, classification performance and model generalization.
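The article itself provides no implementation, so the sketch below is only a hypothetical illustration of the dual-channel idea summarized in the abstract: one channel of standard TextCNN-style convolutions for local n-gram features, a second channel of dilated convolutions used here as a stand-in for the "Atlas convolution" channel that the abstract credits with capturing global, sentence-level semantics, a learned weighted average that fuses the two channels, and a dropout-regularized classifier. All names and hyperparameters (DualChannelTextCNN, num_filters, dilation, and so on) are assumptions for illustration, not the authors' code; the semantic-enhancement step and the learning-rate decay coefficient λ are omitted.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DualChannelTextCNN(nn.Module):
        """Illustrative dual-channel text classifier (not the published DcSeCNN code).

        Channel A: standard TextCNN-style 1D convolutions (local n-gram features).
        Channel B: dilated 1D convolutions with a wider receptive field, used here
        as a proxy for the global-semantics channel described in the abstract.
        The two channel features are softmax-weighted and fused before classification.
        """

        def __init__(self, vocab_size, embed_dim=128, num_classes=2,
                     num_filters=100, kernel_sizes=(3, 4, 5), dilation=2,
                     dropout=0.5):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            # Channel A: plain convolutions over the embedded sequence.
            self.local_convs = nn.ModuleList(
                nn.Conv1d(embed_dim, num_filters, k, padding=k // 2)
                for k in kernel_sizes
            )
            # Channel B: dilated convolutions for longer-range context.
            self.global_convs = nn.ModuleList(
                nn.Conv1d(embed_dim, num_filters, k, dilation=dilation,
                          padding=(k // 2) * dilation)
                for k in kernel_sizes
            )
            feat_dim = num_filters * len(kernel_sizes)
            # One scalar attention score per channel, softmax-normalized in forward().
            self.channel_attn = nn.Linear(feat_dim, 1)
            self.dropout = nn.Dropout(dropout)
            self.classifier = nn.Linear(feat_dim, num_classes)

        def _channel(self, x, convs):
            # x: (batch, embed_dim, seq_len) -> (batch, num_filters * len(kernel_sizes))
            return torch.cat(
                [F.relu(conv(x)).max(dim=2).values for conv in convs], dim=1)

        def forward(self, token_ids):
            x = self.embedding(token_ids).transpose(1, 2)       # (B, E, L)
            local_feat = self._channel(x, self.local_convs)     # (B, F)
            global_feat = self._channel(x, self.global_convs)   # (B, F)
            # Weighted-average fusion of the two channel features.
            scores = torch.stack([self.channel_attn(local_feat),
                                  self.channel_attn(global_feat)], dim=1)  # (B, 2, 1)
            weights = torch.softmax(scores, dim=1)
            fused = weights[:, 0] * local_feat + weights[:, 1] * global_feat
            return self.classifier(self.dropout(fused))

    if __name__ == "__main__":
        model = DualChannelTextCNN(vocab_size=5000, num_classes=2)
        dummy = torch.randint(1, 5000, (4, 64))   # 4 sequences of 64 token ids
        print(model(dummy).shape)                 # torch.Size([4, 2])

Dilated convolution is chosen here because it enlarges the receptive field without adding parameters, which matches the abstract's stated goal of capturing long-range semantics; whether this actually corresponds to the paper's "Atlas convolution" would need to be confirmed against the full text.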

Keywords: Text classification; TextCNN; two-channel convolution; semantic enhancement; weighted average attention
Date: 2025

Downloads: (external link)
http://www.worldscientific.com/doi/abs/10.1142/S0129183124420129
Access to full text is restricted to subscribers

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Persistent link: https://EconPapers.repec.org/RePEc:wsi:ijmpcx:v:36:y:2025:i:10:n:s0129183124420129

DOI: 10.1142/S0129183124420129

International Journal of Modern Physics C (IJMPC) is currently edited by H. J. Herrmann

More articles in International Journal of Modern Physics C (IJMPC) from World Scientific Publishing Co. Pte. Ltd.
Bibliographic data for series maintained by Tai Tone Lim.

 
Page updated 2025-06-07
Handle: RePEc:wsi:ijmpcx:v:36:y:2025:i:10:n:s0129183124420129