Do people trust humans more than ChatGPT?
Joy Buchanan and William Hickman
Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), 2024, vol. 112, issue C
Abstract:
We explore whether people trust the accuracy of statements produced by large language models (LLMs) versus those written by humans. While LLMs have showcased impressive capabilities in generating text, concerns have been raised regarding the potential for misinformation, bias, or false responses. In this experiment, participants rate the accuracy of statements under different information conditions. Participants who are not explicitly informed of authorship tend to trust statements they believe are human-written more than those attributed to ChatGPT. However, when informed about authorship, participants show equal skepticism towards both human and AI writers. Informed participants are, overall, more likely to choose costly fact-checking. These outcomes suggest that trust in AI-generated content is context-dependent.
Keywords: Artificial intelligence; Machine learning; Trust; Belief; Experiments
JEL-codes: C91 D8 O33
Date: 2024
Downloads: http://www.sciencedirect.com/science/article/pii/S2214804324000776 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:soceco:v:112:y:2024:i:c:s2214804324000776
DOI: 10.1016/j.socec.2024.102239
Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics) is currently edited by Pablo Brañas Garza
Bibliographic data for series maintained by Catherine Liu.