Treating differently or equally: A study exploring attitudes towards AI moral advisors

Yiming Liu and Tianhong Wang

Technology in Society, 2025, vol. 82, issue C

Abstract: Artificial intelligence (AI) technology has evolved from serving primarily as a decision-maker to increasingly acting as an advisor. However, contemporary attitudes toward AI in moral decision-making remain unclear. In Study 1, we examined whether attitudes toward human and AI moral advisors differ when both are presented in a one-time decision-making scenario. To achieve higher ecological validity, Studies 2a and 2b varied the decision-making method and the scenario, respectively. The results were consistent: people trust the advice of AI moral advisors and human moral advisors equally, and after a decision they assign responsibility equally to both.

Keywords: Artificial intelligence; Moral advisor; Ethical decision-making; Trust; Assignment of responsibilities
Date: 2025

Downloads: http://www.sciencedirect.com/science/article/pii/S0160791X25000521 (full text for ScienceDirect subscribers only)

Persistent link: https://EconPapers.repec.org/RePEc:eee:teinso:v:82:y:2025:i:c:s0160791x25000521

DOI: 10.1016/j.techsoc.2025.102862

Technology in Society is currently edited by Charla Griffy-Brown

More articles in Technology in Society from Elsevier

 