Machine Bias. How Do Generative Language Models Answer Opinion Polls?
Julien Boelaert, Samuel Coavoux, Étienne Ollion, Ivaylo Petev and Patrick Präg
Sociological Methods & Research, 2025, vol. 54, issue 3, 1156-1196
Abstract:
Generative artificial intelligence (AI) is increasingly presented as a potential substitute for humans, including as research subjects. However, there is no scientific consensus on how closely these in silico clones can emulate survey respondents. While some defend the use of these “synthetic users,” others point toward social biases in the responses provided by large language models (LLMs). In this article, we demonstrate that these critics are right to be wary of using generative AI to emulate respondents, but probably not for the right reasons. Our results show (i) that to date, models cannot replace research subjects for opinion or attitudinal research; (ii) that they display a strong bias and a low variance on each topic; and (iii) that this bias randomly varies from one topic to the next. We label this pattern “machine bias,” a concept we define, and whose consequences for LLM-based research we further explore.
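The pattern the abstract calls “machine bias” (a strong offset from the human benchmark, low variance within a topic, and an offset that shifts unpredictably across topics) can be illustrated with a minimal sketch. This is not the authors' procedure: the `ask_llm` stub, the example question, and the benchmark `human_mean` of 3.2 are hypothetical placeholders standing in for an actual model query and real survey data.

```python
import statistics

# Hypothetical placeholder for an actual LLM call. In a real study this
# would prompt a model with the survey question and parse a Likert-scale
# answer (1-5); here it returns a fixed value to mimic the low-variance
# pattern the article describes.
def ask_llm(question: str) -> int:
    return 4

def machine_bias(question: str, human_mean: float, n_draws: int = 100):
    """Return (bias, variance) of repeated LLM answers vs. a human benchmark."""
    answers = [ask_llm(question) for _ in range(n_draws)]
    llm_mean = statistics.fmean(answers)
    llm_var = statistics.pvariance(answers)
    # Bias: distance of the model's average answer from the human mean.
    # Variance: spread of the model's own repeated answers on this topic.
    return llm_mean - human_mean, llm_var

# Assumed example item and benchmark, for illustration only.
bias, var = machine_bias("How much do you trust science? (1-5)", human_mean=3.2)
print(f"bias={bias:+.2f}, variance={var:.2f}")  # prints bias=+0.80, variance=0.00
```

Under these stand-in numbers, the sketch reproduces the signature pattern: a sizable bias paired with near-zero variance, with nothing in the per-topic output that would predict the bias on the next topic.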
Keywords: LLMs; bias; generative artificial intelligence; computational social sciences; machine learning; survey research
Date: 2025
Downloads: https://journals.sagepub.com/doi/10.1177/00491241251330582 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:sae:somere:v:54:y:2025:i:3:p:1156-1196
DOI: 10.1177/00491241251330582