Artificial intelligence—friend or foe in fake news campaigns
Węcel Krzysztof,
Sawiński Marcin,
Stróżyna Milena,
Lewoniewski Włodzimierz,
Księżniak Ewelina,
Stolarski Piotr and
Abramowicz Witold
Additional contact information
All authors: Department of Information Systems, Poznań University of Economics and Business, al. Niepodległości 10, 61-875 Poznań, Poland
Economics and Business Review, 2023, vol. 9, issue 2, 41-70
Abstract:
This paper analyses the impact of large language models (LLMs) on the fake news phenomenon. On the one hand, their capable text generation can be misused for the mass production of fake news. On the other, LLMs trained on huge volumes of text have already accumulated information on many facts, so one may assume they could be used for fact-checking. Experiments were designed and conducted to verify how closely LLM responses align with actual fact-checking verdicts. The research methodology consists of the preparation of an experimental dataset and a protocol for interacting with ChatGPT, currently the most sophisticated LLM. A research corpus was composed explicitly for this work, consisting of several thousand claims randomly selected from claim reviews published by fact-checkers. The findings include: it is difficult to align the responses of ChatGPT with the explanations provided by fact-checkers, and prompts have a significant impact on the bias of responses. In its current state, ChatGPT can be used as a support in fact-checking but cannot verify claims directly.
Keywords: artificial intelligence; large language models; fake news; fact-checking
JEL-codes: C45 C52 D83 L15 L86
Date: 2023
Downloads: https://doi.org/10.18559/ebr.2023.2.736 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:vrs:ecobur:v:9:y:2023:i:2:p:41-70:n:7
DOI: 10.18559/ebr.2023.2.736
Economics and Business Review is currently edited by Tadeusz Kowalski
More articles in Economics and Business Review from Sciendo
Bibliographic data for series maintained by Peter Golla.