Journal article classification using abstracts: a comparison of classical and transformer-based machine learning methods
Cristina Arhiliuc, Raf Guns, Walter Daelemans and Tim C. E. Engels (all: University of Antwerp)
Scientometrics, 2025, vol. 130, issue 1, No 12, 313-342
Abstract:
In this article we analyze the performance of existing models for classifying journal articles into disciplines from a predefined classification scheme (i.e., supervised learning), based on their abstracts. The first part analyzes scenarios with ample labeled data, comparing the performance of the Support Vector Machine (SVM) algorithm combined with TF-IDF and with SPECTER embeddings (Cohan et al. SPECTER: Document-level representation learning using citation-informed transformers, https://doi.org/10.48550/arXiv.2004.07180 , 2020) and Bidirectional Encoder Representations from Transformers (BERT) models. The second part employs the Generative Pre-trained Transformer model 3.5 Turbo (GPT-3.5-turbo) in zero- and few-shot learning settings. Through the use of GPT-3.5-turbo we examine how different characterizations of disciplines (such as names, descriptions, and examples) affect the model’s ability to classify articles. The data set comprises journal articles published in 2022 and indexed in the Web of Science, with subject categories aligned to a modified version of the OECD Fields of Research and Development (FoRD) classification scheme. We find that BERT models surpass the SVM + TF-IDF baseline and SVM + SPECTER in all areas. For all disciplinary areas except Humanities, we observe minimal variation among models fine-tuned on larger datasets, and greater variability with smaller training datasets. The GPT-3.5-turbo results show significant fluctuations across disciplines, influenced by the clarity of their definitions and their distinctiveness as research topics compared to other fields. Although the two approaches are not directly comparable, we conclude that the classification models show promising results in their specific scenarios, with variations across disciplines.
Keywords: Paper-level classification; BERT classification; Classification OECD FoRD; WoS articles classification
Date: 2025
Downloads:
http://link.springer.com/10.1007/s11192-024-05217-7 Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:scient:v:130:y:2025:i:1:d:10.1007_s11192-024-05217-7
Ordering information: This journal article can be ordered from
http://www.springer.com/economics/journal/11192
DOI: 10.1007/s11192-024-05217-7
Scientometrics is currently edited by Wolfgang Glänzel
More articles in Scientometrics from Springer, Akadémiai Kiadó
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.