LLMs as annotators: the effect of party cues on labelling decisions by large language models
Sebastián Vallejo Vera and Hunter Driggers
Additional contact information
Sebastián Vallejo Vera: University of Western Ontario
Hunter Driggers: University of Western Ontario
Humanities and Social Sciences Communications, 2025, vol. 12, issue 1, 1-11
Abstract:
Human coders can be biased. We test whether Large Language Models (LLMs) replicate those biases when used as text annotators. Replicating an experiment conducted by Ennser-Jedenastik and Meyer (2018), we find evidence that LLMs use political information, and specifically party cues, to evaluate political statements. Not only do LLMs use relevant information to contextualize whether a statement is positive, negative, or neutral based on the party cue, but they also reflect the biases of the human-generated data on which they have been trained. We also find that, unlike humans, who are biased only when faced with statements from extreme parties, some LLMs exhibit significant bias even when prompted with statements from center-left and center-right parties. The implications of our findings are discussed in the conclusion.
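As a rough illustration of the annotation task the abstract describes, the sketch below asks a chat model to label one political statement as positive, negative, or neutral, with and without a party cue. This is a minimal sketch only: the OpenAI Python client, the model name, the prompt wording, and the example statement are all assumptions made here for illustration, not the authors' actual models, prompts, or protocol.

    # Minimal sketch of a party-cue annotation probe, loosely following the
    # design described in the abstract. Client, model, prompt wording, and
    # the example statement are illustrative assumptions, not the paper's setup.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    STATEMENT = "We must cut taxes to stimulate the economy."  # hypothetical item

    def label_statement(statement: str, party_cue: str | None = None) -> str:
        """Label a statement as positive, negative, or neutral,
        optionally prefixing a party cue to the prompt."""
        cue = (
            f"The following statement was made by a politician from the {party_cue}. "
            if party_cue else ""
        )
        prompt = (
            f"{cue}Classify the tone of this political statement as "
            f"positive, negative, or neutral. Answer with one word.\n\n"
            f"Statement: {statement}"
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # deterministic labels, so cue effects are comparable
        )
        return response.choices[0].message.content.strip().lower()

    # Compare labels with and without the cue; a systematic label shift
    # across many items would indicate cue-driven bias.
    print(label_statement(STATEMENT))
    print(label_statement(STATEMENT, party_cue="far-right party"))
    print(label_statement(STATEMENT, party_cue="center-left party"))

In the paper's design, the interesting quantity is not any single label but the distribution of label shifts across many statements when only the attributed party changes.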
Date: 2025
Downloads: http://link.springer.com/10.1057/s41599-025-05834-4 (abstract, text/html)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:pal:palcom:v:12:y:2025:i:1:d:10.1057_s41599-025-05834-4
Ordering information: This journal article can be ordered from
https://www.nature.com/palcomms/about
DOI: 10.1057/s41599-025-05834-4