EconPapers    

In generative AI we trust: can chatbots effectively verify political information?

Elizaveta Kuznetsova, Mykola Makhortykh, Victoria Vziatysheva, Martha Stolze, Ani Baghumyan and Aleksandra Urman
Additional contact information
Elizaveta Kuznetsova: Weizenbaum Institute for the Networked Society
Mykola Makhortykh: University of Bern
Victoria Vziatysheva: University of Bern
Martha Stolze: Weizenbaum Institute for the Networked Society
Ani Baghumyan: University of Bern
Aleksandra Urman: University of Zurich

Journal of Computational Social Science, 2025, vol. 8, issue 1, No 15, 31 pages

Abstract: This article presents a comparative analysis of the potential of two large language model (LLM)-based chatbots—ChatGPT and Bing Chat (recently rebranded as Microsoft Copilot)—to detect the veracity of political information. We use AI auditing methodology to investigate how the chatbots evaluate true, false, and borderline statements on five topics: COVID-19, Russian aggression against Ukraine, the Holocaust, climate change, and LGBTQ+-related debates. We compare how the chatbots respond in high- and low-resource languages by using prompts in English, Russian, and Ukrainian. Furthermore, we explore the chatbots’ ability to evaluate statements according to the political communication concepts of disinformation, misinformation, and conspiracy theory, using definition-oriented prompts. We also systematically test how such evaluations are influenced by source attribution. The results show the high potential of ChatGPT for the baseline veracity evaluation task: without pre-training, it evaluated 72% of the cases in accordance with the baseline on average across languages, compared with 67% for Bing Chat. We observe significant disparities in how the chatbots evaluate prompts in high- and low-resource languages and in how they adapt their evaluations to political communication concepts, with ChatGPT providing more nuanced outputs than Bing Chat. These findings highlight the potential of LLM-based chatbots for tackling different forms of false information in online environments, but also point to substantial variation in how this potential is realized due to specific factors (e.g., the language of the prompt or the topic).

Keywords: AI audit; LLMs; Disinformation; Misinformation; Conspiracy theory
Date: 2025

Downloads: (external link)
http://link.springer.com/10.1007/s42001-024-00338-8 Abstract (text/html)
Access to the full text of the articles in this series is restricted.

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:spr:jcsosc:v:8:y:2025:i:1:d:10.1007_s42001-024-00338-8

Ordering information: This journal article can be ordered from
http://www.springer. ... iences/journal/42001

DOI: 10.1007/s42001-024-00338-8


Journal of Computational Social Science is currently edited by Takashi Kamihigashi

More articles in Journal of Computational Social Science from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

Page updated 2025-03-20
Handle: RePEc:spr:jcsosc:v:8:y:2025:i:1:d:10.1007_s42001-024-00338-8