How user language affects conflict fatality estimates in ChatGPT
Christoph Valentin Steinert and Daniel Kazenwadel
Additional contact information
Christoph Valentin Steinert: Department of Political Science, University of Zurich, Switzerland
Daniel Kazenwadel: Department of Physics, University of Konstanz, Germany
Journal of Peace Research, 2025, vol. 62, issue 4, 1128-1143
Abstract:
OpenAI’s ChatGPT language model has gained popularity as a powerful tool for problem-solving and information retrieval. However, concerns have arisen that it reproduces biases present in its language-specific training data. In this study, we address this issue in the context of the Israeli–Palestinian and Turkish–Kurdish conflicts. Using GPT-3.5, we employed an automated query procedure to ask about casualties in specific airstrikes, in both Hebrew and Arabic for the former conflict and in both Turkish and Kurdish for the latter. Our analysis reveals that GPT-3.5 provides 34 ± 11% lower fatality estimates when queried in the language of the attacker than in the language of the targeted group. Evasive answers denying the existence of such attacks further increase the discrepancy. A simplified analysis of the current GPT-4 model shows the same trends. To explain the origin of the bias, we conducted a systematic media content analysis of Arabic news sources. The analysis suggests that the large language model fails to link specific attacks to the corresponding fatality numbers reported in the Arabic news. Because it relies on co-occurring words, the model may instead return death tolls from different attacks with greater news impact, or cumulative death counts that are prevalent in the training data. Given that large language models may shape information dissemination in the future, the language bias identified in our study has the potential to amplify existing biases along linguistic dyads and contribute to information bubbles.
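The headline statistic (34 ± 11% lower estimates in the attacker's language) can be read as an average relative discrepancy over paired queries about the same attacks, with an uncertainty on that average. The sketch below illustrates the arithmetic only; the fatality figures in it are invented for illustration and are not data from the study, and the study's actual query pipeline and estimator may differ.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical fatality estimates for five airstrikes, as if returned when
# querying in the attacker's language vs. the targeted group's language.
# These numbers are made up for illustration; they are not from the paper.
pairs = [(10, 18), (5, 7), (12, 15), (3, 6), (8, 11)]

# Per-attack relative discrepancy: how much lower the attacker-language
# estimate is, as a fraction of the target-language estimate.
disc = [1 - attacker / target for attacker, target in pairs]

avg = mean(disc)                      # average relative under-estimation
sem = stdev(disc) / sqrt(len(disc))   # standard error of that average

print(f"attacker-language estimates are {avg:.0%} \u00b1 {sem:.0%} lower")
```

With paired queries like these, the per-attack ratio controls for how newsworthy each attack was, isolating the language effect.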
Keywords: armed conflict; artificial intelligence; ChatGPT; conflict fatalities; indiscriminate violence; large language models
Date: 2025
Downloads: https://journals.sagepub.com/doi/10.1177/00223433241279381 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:sae:joupea:v:62:y:2025:i:4:p:1128-1143
DOI: 10.1177/00223433241279381
More articles in Journal of Peace Research from Peace Research Institute Oslo
Bibliographic data for series maintained by SAGE Publications.