Human Dignity as a Tool for Differentiating Between Maleficent and Beneficent Uses of Artificial Intelligence
Brett G. Scharffs and Alexandra Brown
The Review of Faith & International Affairs, 2025, vol. 23, issue 3, 5-15
Abstract:
Artificial intelligence will be used for both good and ill, and we, as designers and users, and as governments regulating AI, should use the concept of human dignity as the key conceptual mechanism for differentiating between socially positive and socially negative applications of AI. When evaluating any AI system or use, we should continually ask: Is this technology dignity-enhancing, dignity-degrading, or dignity-neutral? This human dignity test provides a moral compass and evaluative framework to help us navigate the rapidly changing world where AI will constantly alter the moral terrain that we face.
Date: 2025
Downloads: http://hdl.handle.net/10.1080/15570274.2025.2531652 (text/html)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:taf:rfiaxx:v:23:y:2025:i:3:p:5-15
Ordering information: This journal article can be ordered from http://www.tandfonline.com/pricing/journal/rfia20
DOI: 10.1080/15570274.2025.2531652
The Review of Faith & International Affairs is currently edited by Dennis R. Hoover