The Functional Morality of Robots
Linda Johansson (Royal Institute of Technology, Sweden)
International Journal of Technoethics (IJT), 2010, vol. 1, issue 4, 65-73
Abstract:
It is often argued that a robot cannot be held morally responsible for its actions. The author suggests that the same criteria used for humans should be applied to robots when ascribing moral responsibility. When deciding whether humans are moral agents, one looks at their behaviour and listens to the reasons they give for their judgments in order to determine whether they understood the situation properly. The author suggests that this should be done for robots as well. On this view, if a robot passes a moral version of the Turing Test, a Moral Turing Test (MTT), we should hold the robot morally responsible for its actions. This is supported by the impossibility of deciding who actually has (semantic rather than merely syntactic) understanding of a moral situation, and by two examples: the transfer of a human mind into a computer, and aliens who are in fact robots.
Date: 2010
Downloads: http://services.igi-global.com/resolvedoi/resolve. ... .4018/jte.2010100105 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:igg:jt0000:v:1:y:2010:i:4:p:65-73
International Journal of Technoethics (IJT) is currently edited by Steven Umbrello