A Joint Learning Model to Extract Entities and Relations for Chinese Literature Based on Self-Attention

Li-Xin Liang, Lin Lin, E Lin, Wu-Shao Wen and Guo-Yan Huang
Additional contact information
Li-Xin Liang: College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China
Lin Lin: College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China
E Lin: School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China
Wu-Shao Wen: School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China
Guo-Yan Huang: School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China

Mathematics, 2022, vol. 10, issue 13, 1-20

Abstract: Extracting structured information from massive, heterogeneous text is an active research topic in natural language processing. It involves two key technologies: named entity recognition (NER) and relation extraction (RE). However, previous NER models have paid little attention to how mutual attention between words in a text influences the prediction of entity labels, and there has been little research on extracting sentence information more fully for relation classification. In addition, previous work treats NER and RE as a pipeline of two separate tasks, which neglects the connection between them, and has focused mainly on English corpora. In this paper, building on the self-attention mechanism, the bidirectional long short-term memory (BiLSTM) neural network, and the conditional random field (CRF) model, we propose a Chinese NER method based on BiLSTM-Self-Attention-CRF and an RE method based on BiLSTM-Multilevel-Attention in the field of Chinese literature. In particular, because the two tasks share word-vector and context-feature representations in the neural network model, we propose a joint learning method in which NER and RE are built on the same underlying module, whose parameters are updated jointly during the training of the two tasks. For performance evaluation, we use the largest Chinese dataset annotated for both tasks. Experimental results show that the proposed independently trained NER and RE models outperform all previous methods, and that our jointly trained NER-RE model outperforms the independently trained NER and RE models.
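
To make the described architecture concrete, below is a minimal PyTorch sketch of the NER branch (shared BiLSTM encoder, then self-attention, then CRF) and of how joint learning would update the shared module. This is an illustrative assumption, not the authors' implementation: all class names, layer sizes, and the third-party pytorch-crf dependency are hypothetical choices, and the RE head with multilevel attention is omitted.

import torch
import torch.nn as nn
from torchcrf import CRF  # third-party package: pip install pytorch-crf


class SharedEncoder(nn.Module):
    """Word embedding + BiLSTM, shared by the NER and RE tasks."""

    def __init__(self, vocab_size, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)

    def forward(self, token_ids):
        out, _ = self.bilstm(self.embed(token_ids))
        return out  # (batch, seq_len, 2 * hidden_dim)


class NERHead(nn.Module):
    """Self-attention over shared features, then per-token emissions + CRF."""

    def __init__(self, feat_dim, num_tags, num_heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(feat_dim, num_heads,
                                               batch_first=True)
        self.emit = nn.Linear(feat_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, feats, tags=None, mask=None):
        # Each position attends to every other word in the sentence.
        attn_out, _ = self.self_attn(feats, feats, feats)
        emissions = self.emit(attn_out)
        if tags is not None:  # training: negative log-likelihood of gold tags
            return -self.crf(emissions, tags, mask=mask)
        return self.crf.decode(emissions, mask=mask)  # inference: best paths


# Joint training would alternate NER and RE losses; gradients from both
# flow into the shared encoder. Only the NER step is shown here.
encoder = SharedEncoder(vocab_size=5000)
ner = NERHead(feat_dim=256, num_tags=9)  # 256 = 2 * hidden_dim
tokens = torch.randint(1, 5000, (2, 12))  # toy batch of 2 sentences
tags = torch.randint(0, 9, (2, 12))
loss = ner(encoder(tokens), tags=tags)
loss.backward()  # updates the shared embeddings and BiLSTM as well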

Keywords: Chinese named entity recognition; relation extraction; joint learning; long short-term memory neural network; self-attention mechanism
JEL-codes: C
Date: 2022

Downloads: (external link)
https://www.mdpi.com/2227-7390/10/13/2216/pdf (application/pdf)
https://www.mdpi.com/2227-7390/10/13/2216/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:10:y:2022:i:13:p:2216-:d:847156

Mathematics is currently edited by Ms. Emma He


 
Handle: RePEc:gam:jmathe:v:10:y:2022:i:13:p:2216-:d:847156