Humans Feel Too Special for Machines to Score Their Morals
Zoe Purcell and Jean-François Bonnefon
No 22-146, IAST Working Papers from Institute for Advanced Study in Toulouse (IAST)
Abstract:
Artificial Intelligence (AI) can be harnessed to create sophisticated social and moral scoring systems, enabling people and organizations to form judgements of others at scale. However, it also poses significant ethical challenges and is, consequently, the subject of wide debate. As these technologies are developed and governing bodies face regulatory decisions, it is crucial that we understand people's attraction to, or resistance against, AI moral scoring. Across four experiments, we show that the acceptability of moral scoring by AI is related to expectations about the quality of those scores, but that expectations about quality are compromised by people's tendency to see themselves as morally peculiar. We demonstrate that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and for this reason resist the introduction of moral scoring by AI.
Keywords: Artificial Intelligence; social credit scoring; ethics; consumer psychology
Date: 2022-11
New Economics Papers: this item is included in nep-big and nep-cmp
Downloads: (external link)
http://iast.fr/pub/127526
https://www.iast.fr/sites/default/files/IAST/wp/wp_iast_146.pdf Full Text (application/pdf)
Related works:
Working Paper: Humans Feel Too Special for Machines to Score Their Morals (2022) 
Persistent link: https://EconPapers.repec.org/RePEc:tse:iastwp:127526
More papers in IAST Working Papers from Institute for Advanced Study in Toulouse (IAST).