Experimental evidence that delegating to intelligent machines can increase dishonest behaviour
Nils Köbis,
Zoe Rahwan,
Clara Bersch,
Tamer Ajaj,
Jean-François Bonnefon and
Iyad Rahwan
Additional contact information
Zoe Rahwan: Max Planck Institute for Human Development
Paper No. dnjgz, OSF Preprints from the Center for Open Science
Abstract:
While artificial intelligence (AI) enables significant productivity gains from delegating tasks to machines, it can also facilitate the delegation of unethical behaviour. Here, we demonstrate this risk by having human principals instruct machine agents to perform a task with an incentive to cheat. Principals’ requests for cheating behaviour increased when the interface implicitly afforded unethical conduct: Machine agents programmed via supervised learning or goal specification evoked more cheating than those programmed with explicit rules. Cheating propensity was unaffected by whether delegation was mandatory or voluntary. Given the recent rise of large language model-based chatbots, we also explored delegation via natural language. Here, cheating requests did not vary between human and machine agents, but compliance diverged: When principals intended agents to cheat to the fullest extent, the majority of human agents did not comply, despite incentives to do so. In contrast, GPT-4, a state-of-the-art machine agent, nearly fully complied. Our results highlight ethical risks in delegating tasks to intelligent machines, and suggest design principles and policy responses to mitigate such risks.
Date: 2024-10-04
New Economics Papers: this item is included in nep-ain, nep-big, nep-cmp and nep-exp
Downloads: https://osf.io/download/66ff1479296d9f7e9a5cd2a3/
Persistent link: https://EconPapers.repec.org/RePEc:osf:osfxxx:dnjgz
DOI: 10.31219/osf.io/dnjgz