Research can help to tackle AI-generated disinformation
Stefan Feuerriegel,
Renée DiResta,
Josh A. Goldstein,
Srijan Kumar,
Philipp Lorenz-Spreen,
Michael Tomz and
Nicolas Pröllochs
Author affiliations:
Stefan Feuerriegel: LMU Munich
Renée DiResta: Stanford University
Josh A. Goldstein: Georgetown University
Srijan Kumar: College of Computing at Georgia Institute of Technology
Philipp Lorenz-Spreen: Max Planck Institute for Human Development
Michael Tomz: Stanford University
Nicolas Pröllochs: JLU Giessen
Nature Human Behaviour, 2023, vol. 7, issue 11, 1818-1821
Abstract:
Generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard to detect by humans and may undermine public trust. Some approaches used for assessing the reliability of online information may no longer work in the AI age. We offer suggestions for how research can help to tackle the threats of AI-generated disinformation.
Date: 2023
Full text (abstract, text/html): https://www.nature.com/articles/s41562-023-01726-2
Access to the full text of articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:nat:nathum:v:7:y:2023:i:11:d:10.1038_s41562-023-01726-2
Ordering information: This journal article can be ordered from https://www.nature.com/nathumbehav/
DOI: 10.1038/s41562-023-01726-2
Nature Human Behaviour is currently edited by Stavroula Kousta
More articles in Nature Human Behaviour from Nature
Bibliographic data for this series is maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.