Replication Report: Corrupted by Algorithms? How AI-Generated and Human-Written Advice Shape (Dis)Honesty
Lachlan Deer, Adithya Krishna and Lyla Zhang
No 212, I4R Discussion Paper Series from The Institute for Replication (I4R)
Abstract:
Leib et al. (2024) examine how artificial intelligence (AI)-generated advice affects dishonesty compared to equivalent human-written advice in a laboratory experiment. In their preferred empirical specification, the authors report that dishonesty-promoting advice increases dishonest behavior by approximately 15% relative to a baseline without advice, while honesty-promoting advice has no significant effect. Additionally, they find that algorithmic transparency - disclosing whether advice comes from AI or humans - does not affect behavior. We computationally reproduce the main results of the paper using the same procedures and original data. Our results confirm the sign, magnitude, and statistical significance of the authors' reported estimates across each of their main findings. Additional robustness checks show that the significance of the results remains stable under alternative specifications and methodological choices.
Keywords: artificial intelligence; dishonesty; laboratory experiment; computational reproducibility
JEL-codes: C91; D01; D91
Date: 2025
Downloads: https://www.econstor.eu/bitstream/10419/313185/1/I4R-DP212.pdf (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:zbw:i4rdps:212
Bibliographic data for this series is maintained by ZBW - Leibniz Information Centre for Economics.