EconPapers    

TEA: Topic Information based Extractive-Abstractive Fusion Model for Long Text Summary

Dunlu Peng and Bo Yu
Additional contact information
Dunlu Peng: University of Shanghai for Science and Technology
Bo Yu: University of Shanghai for Science and Technology

Information Systems Frontiers, 2025, vol. 27, issue 1, No 20, 403 pages

Abstract: Sequence-to-sequence (seq2seq) models are widely used for abstractive text summarization. The decoder of the traditional model uses an attention mechanism to generate the summary, taking the hidden state of each word as the complete semantic representation of the original text. However, the hidden state of a word captures only the semantic information of the words immediately before and after it, so the semantics of the original text are not fully captured. As a result, the generated summary can omit important information from the original text, which hurts its accuracy and readability. To address this issue, this paper proposes TEA, a topic-information-based extractive-abstractive fusion model for summary generation. The model consists of two modules: a BERT-based extractive module and a seq2seq-based abstractive module. The extractive module performs sequence labeling at the sentence level, while the abstractive module uses a pointer-generator network to generate the summary. During generation, combined with an attention mechanism based on topic information, the TextRank algorithm is employed to select N keywords, and the similarity between the keywords and the original text is computed through the attention function and used as the weight of the topic encoding in the attention mechanism. Experimental results on a Chinese dataset show that, compared with state-of-the-art text summarization models, the proposed model improves the consistency between the generated summary and the original text and summarizes its content better: ROUGE-1, ROUGE-2, and ROUGE-L increase by 2.07%, 3.94%, and 3.53%, respectively.
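The topic-weighted attention step described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name `topic_weighted_attention`, the dot-product scoring, and the mean-over-keywords similarity are all assumptions made for the sketch; the keyword vectors stand in for embeddings of the N TextRank-selected keywords.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def topic_weighted_attention(decoder_state, encoder_states, topic_vecs):
    """Hypothetical sketch of topic-informed attention.

    decoder_state:  (d,)   current decoder hidden state
    encoder_states: (n, d) encoder hidden states, one per source word
    topic_vecs:     (k, d) embeddings of the N TextRank keywords (assumed given)
    """
    # Standard dot-product attention scores between decoder state and source words.
    base_scores = encoder_states @ decoder_state
    # Topic score per source word: mean similarity to the keyword embeddings,
    # standing in for the keyword-to-text similarity the abstract describes.
    topic_scores = (encoder_states @ topic_vecs.T).mean(axis=1)
    # Topic similarity re-weights the attention distribution.
    weights = softmax(base_scores + topic_scores)
    # Context vector: attention-weighted sum of encoder states.
    context = weights @ encoder_states
    return context, weights
```

In this simplification the topic scores are simply added to the base attention logits before the softmax; the paper's actual weighting of the topic encoding may differ.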

Keywords: Abstractive model; Text summarization; Neural networks; Topic information
Date: 2025

Downloads: (external link)
http://link.springer.com/10.1007/s10796-023-10442-1 Abstract (text/html)
Access to the full text of the articles in this series is restricted.



Persistent link: https://EconPapers.repec.org/RePEc:spr:infosf:v:27:y:2025:i:1:d:10.1007_s10796-023-10442-1

Ordering information: This journal article can be ordered from
http://www.springer.com/journal/10796

DOI: 10.1007/s10796-023-10442-1

Information Systems Frontiers is currently edited by Ram Ramesh and Raghav Rao

More articles in Information Systems Frontiers from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-04-02
Handle: RePEc:spr:infosf:v:27:y:2025:i:1:d:10.1007_s10796-023-10442-1