Moral Judgments in the Age of Artificial Intelligence

Yulia W. Sullivan and Samuel Fosso Wamba
Additional contact information
Yulia W. Sullivan: Baylor University
Samuel Fosso Wamba: TBS Education

Journal of Business Ethics, 2022, vol. 178, issue 4, No 4, 917-943

Abstract: The current research aims to answer the following question: “who will be held responsible for harm involving an artificial intelligence (AI) system?” Drawing upon the literature on moral judgments, we assert that when people perceive an AI system’s action as causing harm to others, they will assign blame to different entity groups involved in an AI’s life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the theory of mind perception, we hypothesized that two dimensions of mind mediated the relationship between perceived intentional harm and blame judgments toward AI: perceived agency (attributing intention, reasoning, goal pursuit, and communication to AI) and perceived experience (attributing emotional states, such as feeling pain and pleasure, personality, and consciousness to AI). We also predicted that people are likely to attribute higher mind characteristics to AI when harm is perceived to be directed at humans than when it is perceived to be directed at non-humans. We tested our research model in three experiments. In all experiments, we found that perceived intentional harm led to blame judgments toward AI. In two experiments, we found that perceived experience, not agency, mediated the relationship between perceived intentional harm and blame judgments. We also found that companies and developers were held responsible for moral violations involving AI, with developers receiving the most blame among the entities involved. Our third experiment reconciles these findings by showing that perceived intentional harm directed at a non-human entity did not lead to increased attributions of mind to AI. These findings have implications for theory and practice concerning unethical outcomes and behavior associated with AI use.

Keywords: Artificial intelligence; Moral judgments; Mind perception; Perceived agency; Perceived experience; Perceived intentional harm
Date: 2022
Citations: View citations in EconPapers (8)

Downloads: (external link)
http://link.springer.com/10.1007/s10551-022-05053-w Abstract (text/html)
Access to full text is restricted to subscribers.

Related works:
This item may be available elsewhere in EconPapers.


Persistent link: https://EconPapers.repec.org/RePEc:kap:jbuset:v:178:y:2022:i:4:d:10.1007_s10551-022-05053-w

Ordering information: This journal article can be ordered from
http://www.springer. ... cs/journal/10551/PS2

DOI: 10.1007/s10551-022-05053-w


Journal of Business Ethics is currently edited by Michelle Greenwood and R. Edward Freeman

More articles in Journal of Business Ethics from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:kap:jbuset:v:178:y:2022:i:4:d:10.1007_s10551-022-05053-w