
Towards Reliable Baselines for Document-Level Sentiment Analysis in the Czech and Slovak Languages

Ján Mojžiš, Peter Krammer, Marcel Kvassay, Lenka Skovajsová and Ladislav Hluchý
Additional contact information
Ján Mojžiš: Institute of Informatics, Slovak Academy of Sciences, 84507 Bratislava, Slovakia
Peter Krammer: Institute of Informatics, Slovak Academy of Sciences, 84507 Bratislava, Slovakia
Marcel Kvassay: Institute of Informatics, Slovak Academy of Sciences, 84507 Bratislava, Slovakia
Lenka Skovajsová: Institute of Informatics, Slovak Academy of Sciences, 84507 Bratislava, Slovakia
Ladislav Hluchý: Institute of Informatics, Slovak Academy of Sciences, 84507 Bratislava, Slovakia

Future Internet, 2022, vol. 14, issue 10, 1-23

Abstract: This article helps establish reliable baselines for document-level sentiment analysis in highly inflected languages like Czech and Slovak. We revisit an earlier study representing the first comprehensive formulation of such baselines in Czech and show that some of its reported results need to be significantly revised. More specifically, we show that its online product review dataset contained more than 18% of non-trivial duplicates, which incorrectly inflated its macro F1-measure results by more than 19 percentage points. We also establish that part-of-speech-related features have no damaging effect on machine learning algorithms (contrary to the claim made in the study) and rehabilitate the Chi-squared metric for feature selection as being on par with the best-performing metrics such as Information Gain. We demonstrate that in feature selection experiments with the Information Gain and Chi-squared metrics, the top 10% of ranked unigram and bigram features suffice for the best results on the online product and movie review datasets, while the top 5% of ranked unigram and bigram features are optimal for the Facebook dataset. Finally, we reiterate an important but often ignored warning by George Forman and Martin Scholz that different possible ways of averaging the F1-measure in cross-validation studies of highly unbalanced datasets can lead to results differing by more than 10 percentage points. This can invalidate comparisons of F1-measure results across studies if incompatible ways of averaging F1 are used.
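The closing warning about F1 averaging can be made concrete with a small sketch. The fold counts below are hypothetical, not taken from the paper; they merely show that averaging per-fold F1 scores and computing a single F1 from pooled counts can diverge by more than ten percentage points even in a two-fold toy case with unbalanced folds.

```python
# Toy illustration of why the F1 averaging scheme matters on unbalanced data.
# The fold counts are hypothetical, not from the study.

def f1(tp, fp, fn):
    """F1-measure from true-positive, false-positive and false-negative counts."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Two cross-validation folds with very different class balance.
folds = [(1, 0, 9),    # fold 1: tp=1,  fp=0, fn=9
         (10, 0, 0)]   # fold 2: tp=10, fp=0, fn=0

# Scheme A: compute F1 per fold, then average the scores.
f1_averaged = sum(f1(*f) for f in folds) / len(folds)

# Scheme B: pool the counts over all folds, then compute a single F1.
tp, fp, fn = (sum(c) for c in zip(*folds))
f1_pooled = f1(tp, fp, fn)

print(f"averaged F1: {f1_averaged:.3f}")  # 0.591
print(f"pooled F1:   {f1_pooled:.3f}")    # 0.710
```

The two schemes disagree by about 12 percentage points here, so reported F1 values are only comparable across studies when the same averaging scheme is used.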

Keywords: document-level sentiment analysis; natural language processing; machine learning; highly inflected languages; Czech language; Slovak language; baseline correction; duplicate records
JEL-codes: O3
Date: 2022

Downloads:
https://www.mdpi.com/1999-5903/14/10/300/pdf (application/pdf)
https://www.mdpi.com/1999-5903/14/10/300/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:14:y:2022:i:10:p:300-:d:946575


Future Internet is currently edited by Ms. Grace You

More articles in Future Internet from MDPI

Handle: RePEc:gam:jftint:v:14:y:2022:i:10:p:300-:d:946575