BERTweetRO: Pre-Trained Language Models for Romanian Social Media Content

Neagu Dan Claudiu
Additional contact information
Neagu Dan Claudiu: Babes-Bolyai University, Romania

Studia Universitatis Babeș-Bolyai Oeconomica, 2025, vol. 70, issue 1, 83-111

Abstract: The introduction of Transformers such as BERT or RoBERTa has revolutionized NLP due to their ability to better “understand” the meaning of texts. These models are created (pre-trained) in a self-supervised manner on large-scale data to predict words in a sentence, but can be adjusted (fine-tuned) for other specific NLP applications. Initially, these models were created using literary texts, but very quickly the need to process social media content emerged. Social media texts have some problematic characteristics (they are short, informal, filled with typos, etc.), which means that a traditional BERT model will struggle with this type of input. For this reason, dedicated models need to be pre-trained on microblogging content, and many such models have been developed for popular languages like English or Spanish. For under-represented languages like Romanian, this is more difficult to achieve due to the lack of open-source resources. In this paper we present our efforts in pre-training 8 BERTweetRO models from scratch, based on the RoBERTa architecture, with the help of a corpus of Romanian tweets. To evaluate our models, we fine-tune them on 2 downstream tasks, Sentiment Analysis (with 3 classes) and Topic Classification (with 26 classes), and compare them against Multilingual BERT as well as a number of other popular classical and deep learning models. We include a commercial solution in this comparison and show that some BERTweetRO variants, and almost all models trained on the translated data, achieve better accuracy than the commercial solution. Our best-performing BERTweetRO variants place second after Multilingual BERT in most of our experiments, which is a good result considering that our Romanian corpus used for pre-training is relatively small, containing around 51,000 texts.
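
The abstract describes a standard two-stage workflow: self-supervised masked-language-model pre-training on raw tweets, followed by supervised fine-tuning on a labeled downstream task such as 3-class sentiment analysis. The sketch below illustrates that workflow with the Hugging Face transformers and datasets libraries; it is not the paper's actual pipeline, and the file names, tokenizer directory, model size and hyperparameters are illustrative assumptions.

```python
# A minimal two-stage sketch, not the paper's actual pipeline: (1) pre-train a
# small RoBERTa-style masked language model on raw Romanian tweets, then
# (2) fine-tune it for 3-class sentiment analysis. File names, model size and
# hyperparameters below are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    DataCollatorWithPadding,
    RobertaConfig,
    RobertaForMaskedLM,
    RobertaForSequenceClassification,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Stage 1: self-supervised pre-training (masked language modelling) on tweets.
# "tokenizer_dir" is assumed to hold a byte-level BPE tokenizer already trained
# on the tweet corpus (e.g. with the `tokenizers` library).
tokenizer = RobertaTokenizerFast.from_pretrained("tokenizer_dir")

config = RobertaConfig(
    vocab_size=len(tokenizer),
    num_hidden_layers=6,          # a smallish encoder, given the ~51k-text corpus
    num_attention_heads=12,
    hidden_size=768,
    max_position_embeddings=514,
    type_vocab_size=1,
)
mlm_model = RobertaForMaskedLM(config)

tweets = load_dataset("text", data_files={"train": "ro_tweets.txt"})  # one tweet per line (assumed)
tokenized = tweets.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)
mlm_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="bertweetro_mlm", num_train_epochs=40, per_device_train_batch_size=32),
    train_dataset=tokenized["train"],
    data_collator=mlm_collator,
).train()
mlm_model.save_pretrained("bertweetro_mlm")
tokenizer.save_pretrained("bertweetro_mlm")

# Stage 2: supervised fine-tuning for sentiment analysis (3 labels).
# The CSV files are assumed to have "text" and "label" columns.
clf = RobertaForSequenceClassification.from_pretrained("bertweetro_mlm", num_labels=3)

sentiment = load_dataset("csv", data_files={"train": "sentiment_train.csv", "test": "sentiment_test.csv"})
encoded = sentiment.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)

Trainer(
    model=clf,
    args=TrainingArguments(output_dir="bertweetro_sentiment", num_train_epochs=3),
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
    data_collator=DataCollatorWithPadding(tokenizer),
).train()
```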

Keywords: machine learning; natural language processing; language models; transformers; text classification; under-resourced languages
JEL-codes: C45 C55 C88 O33
Date: 2025

Downloads: (external link)
https://doi.org/10.2478/subboec-2025-0005 (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:vrs:subboe:v:70:y:2025:i:1:p:83-111:n:1005

DOI: 10.2478/subboec-2025-0005

Studia Universitatis Babeș-Bolyai Oeconomica is currently edited by Dumitru Matis

More articles in Studia Universitatis Babeș-Bolyai Oeconomica from Sciendo
Bibliographic data for series maintained by Peter Golla.

 
Handle: RePEc:vrs:subboe:v:70:y:2025:i:1:p:83-111:n:1005