Strategic Delegation of Moral Decisions to AI
Stephan Tontrup and Christopher Jon Sprigman
EconStor Preprints from ZBW - Leibniz Information Centre for Economics
Abstract:
Our study examines how individuals perceive the moral agency of artificial intelligence (AI) and, specifically, whether individuals believe that by involving AI as their agent they can offload to the AI some of their responsibility for a morally sensitive decision. Existing literature shows that people often delegate self-interested decisions to human agents to mitigate their moral responsibility for unethical outcomes. This research explores whether individuals will similarly delegate such decisions to AI to reduce moral costs. Our study shows that many individuals perceive the AI as capable of assuming moral responsibility. These individuals delegate to the AI, and delegating leads them to act more assertively in their self-interest while experiencing lower moral costs. Participants (hereinafter, "Allocators") took part in a dictator game, allocating a $10 endowment between themselves and a Recipient. In the experimental treatment, Allocators could involve ChatGPT in their allocation decision at the cost of added time to complete the experiment. When engaged, the AI executed the transfer by informing the Recipient of a necessary payment code. Around 35% of Allocators chose to involve the AI, despite the opportunity costs of a much-prolonged process. To isolate the effect of the AI's perceived responsibility, a control condition replaced the AI with a non-agentive computer program while maintaining identical decision protocols; this design controlled for factors such as social distance and any substantive influence by the AI. Allocators who involved the AI transferred significantly less money to the Recipient, suggesting that delegating the transfer to the AI reduced the moral costs associated with self-interested decisions. Consistent with this interpretation, prosocial individuals, who face higher moral costs from violating a norm and thus, absent delegation, would transfer more than proself individuals, were significantly more likely to involve the AI. A responsibility measure indicates that Allocators who attributed more responsibility for the transfer to the AI were also more likely to involve it. The study suggests that AI systems provide human actors with an easily accessible, low-cost, and hard-to-monitor means of offloading personal moral responsibility, highlighting the need for AI regulation to consider not only the inherent risks of AI output but also how AI's perceived moral agency can influence human behavior and ethical accountability in human-AI interaction.
Keywords: AI; Delegation; Moral Outsourcing; Prosociality
JEL-codes: C91 D63 D91 O33
Date: 2025
Downloads: https://www.econstor.eu/bitstream/10419/335206/1/AI-study.pdf (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:zbw:esprep:335206
DOI: 10.2139/ssrn.5696827