Prebunking Elections Rumors: Artificial Intelligence Assisted Interventions Increase Confidence in American Elections
Mitchell Linegar,
Betsy Sinclair,
Sander van der Linden and
R. Michael Alvarez
Papers from arXiv.org
Abstract:
Large Language Models (LLMs) can assist in the prebunking of election misinformation. Using results from a preregistered two-wave experimental study of 4,293 U.S. registered voters conducted in August 2024, we show that LLM-assisted prebunking significantly reduced belief in specific election myths, with these effects persisting for at least one week. Confidence in election integrity also increased post-treatment. Notably, the effect was consistent across partisan lines, even when controlling for demographic and attitudinal factors such as conspiratorial thinking. LLM-assisted prebunking is a promising tool for rapidly responding to changing election misinformation narratives.
Date: 2024-10
New Economics Papers: this item is included in nep-ain, nep-dcm, nep-exp and nep-pol
Downloads: http://arxiv.org/pdf/2410.19202 Latest version (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2410.19202