
The Moral Psychology of Artificial Intelligence

Jean-François Bonnefon, Iyad Rahwan and Azim Shariff

Post-Print from HAL

Abstract: Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human–machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Date: 2024-01
Citations: 3 (in EconPapers)

Published in Annual Review of Psychology, in press, 75, ⟨10.1146/annurev-psych-030123-113559⟩




Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-04220044

DOI: 10.1146/annurev-psych-030123-113559



Handle: RePEc:hal:journl:hal-04220044