Ethnic and gender bias in Large Language Models across contexts
Daniel Capistrano,
Mathew Creighton and
Mariña Fernández-Reino
Additional contact information
Daniel Capistrano: University College Dublin
Mathew Creighton: University College Dublin
No 9zusq_v1, SocArXiv from Center for Open Science
Abstract:
In this study, we assessed if Large Language Models provided biased answers when prompted to assist with the evaluation of requests made by individuals with different ethnic backgrounds and gender. We emulated an experimental procedure traditionally used in correspondence studies to test discrimination in social settings. The preference given as recommendation from the language models were compared across groups revealing a significant bias against names associated with ethnic minorities, particularly in the housing domain. However, the magnitude of this ethnic bias as well as differences by gender depended on the context mentioned in the prompt to the model. Finally, directing the model to take into consideration regulatory provisions on Artificial Intelligence or potential gender and ethnic discrimination does not seem to mitigate the observed bias between groups.
Date: 2025-07-06
New Economics Papers: this item is included in nep-ain, nep-big and nep-exp
References: Add references at CitEc
Citations:
Downloads: (external link)
https://osf.io/download/686bfe7576231609669fee54/
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Export reference: BibTeX
RIS (EndNote, ProCite, RefMan)
HTML/Text
Persistent link: https://EconPapers.repec.org/RePEc:osf:socarx:9zusq_v1
DOI: 10.31219/osf.io/9zusq_v1
Access Statistics for this paper
More papers in SocArXiv from Center for Open Science
Bibliographic data for series maintained by OSF ().