Cognitive morality and artificial intelligence (AI): a proposed classification of AI systems using Kohlberg's theory of cognitive ethics

Shailendra Kumar and Sanghamitra Choudhury

Technological Sustainability, 2023, vol. 2, issue 3, 259-273

Abstract:
Purpose - The widespread usage of artificial intelligence (AI) is prompting a number of ethical issues, including concerns about fairness, surveillance, transparency, neutrality and human rights. The purpose of this manuscript is to explore the possibility of developing cognitive morality in AI systems.
Design/methodology/approach - This is exploratory research. The manuscript investigates the likelihood of cognitive moral development in AI systems as well as potential pathways for such development. Concurrently, it proposes a novel idea for the characterization and development of ethically conscious, artificially intelligent robotic machines.
Findings - This manuscript explores the possibility of categorizing AI machines according to the level of cognitive morality they embody, drawing on Lawrence Kohlberg's work on cognitive moral development in humans. It further suggests that, by providing appropriate inputs to AI machines in accordance with the proposed concept, humans may assist in the development of an ideal AI creature that would be morally more responsible and act as a moral agent, capable of meeting the demands of morality.
Research limitations/implications - This manuscript has some limitations because it focuses exclusively on Kohlberg's perspective, and that theory is not flawless. Carol Gilligan, one of Kohlberg's former doctoral students, argued that his framework was unfair and sexist because it did not take into account the views and experiences of women. Moreover, as Kohlberg himself notes, following the law does not guarantee moral behaviour, because laws and social norms are imperfect. This study paves the way for future research on how the ideas of thinkers such as Joao Freire and Carl Rogers might be applied to AI systems.
Originality/value - This is original research that draws inspiration from the cognitive moral development theory of the American psychologist Lawrence Kohlberg. The authors present a fresh way of classifying AI systems, which should make it easier to endow robots with cognitive morality.

Keywords: Artificial intelligence; Cognitive moral development; Lawrence Kohlberg; Learning; Social robotics; Training; Moral dilemma (search for similar items in EconPapers)
Date: 2023

Downloads: (external link)
https://www.emerald.com/insight/content/doi/10.110 ... d&utm_campaign=repec (text/html)
https://www.emerald.com/insight/content/doi/10.110 ... d&utm_campaign=repec (application/pdf)
Access to full text is restricted to subscribers

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:eme:techsp:techs-12-2022-0047

DOI: 10.1108/TECHS-12-2022-0047


Technological Sustainability is currently edited by Dr Shahla Seifi

More articles in Technological Sustainability from Emerald Group Publishing Limited
Bibliographic data for series maintained by Emerald Support.

 
Page updated 2025-03-19
Handle: RePEc:eme:techsp:techs-12-2022-0047