Humans Feel Too Special for Machines to Score Their Morals
Jean-François Bonnefon and Zoe Purcell
No 22-1387, TSE Working Papers from Toulouse School of Economics (TSE)
Abstract:
Artificial Intelligence (AI) can be harnessed to create sophisticated social and moral scoring systems, enabling people and organizations to form judgements of others at scale. However, it also poses significant ethical challenges and is, consequently, the subject of wide debate. As these technologies are developed and governing bodies face regulatory decisions, it is crucial that we understand why people embrace or resist AI moral scoring. Across four experiments, we show that the acceptability of moral scoring by AI is related to expectations about the quality of those scores, but that expectations about quality are compromised by people's tendency to see themselves as morally peculiar. We demonstrate that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and for this reason resist the introduction of moral scoring by AI.
JEL-codes: D91
Date: 2022-11-25
New Economics Papers: this item is included in nep-big and nep-hpe
Full text (PDF): https://www.tse-fr.eu/sites/default/files/TSE/docu ... 2022/wp_tse_1387.pdf
Related works:
Working Paper: Humans Feel Too Special for Machines to Score Their Morals (2022) 
Persistent link: https://EconPapers.repec.org/RePEc:tse:wpaper:127527