Some critical and ethical perspectives on the empirical turn of AI interpretability
Jean-Marie John-Mathews
Additional contact information
Jean-Marie John-Mathews: Département Management, Marketing et Stratégie (MMS), Institut Mines-Télécom Business School (IMT-BS), Institut Mines-Télécom [Paris]; LITEM - Laboratoire en Innovation, Technologies, Economie et Management (EA 7363), Université d'Évry-Val-d'Essonne, Université Paris-Saclay, Institut Mines-Télécom Business School
Post-Print from HAL
Abstract:
We consider two fundamental and related issues currently facing the development of Artificial Intelligence (AI): the lack of ethics, and the interpretability of AI decisions. Can interpretable AI decisions help to address the issue of ethics in AI? Using a randomized study, we experimentally show that the empirical and liberal turn in the production of explanations tends to select AI explanations with low denunciatory power. Under certain conditions, interpretability tools are therefore not means to, but paradoxically obstacles to, the production of ethical AI, since they can give the illusion of being sensitive to ethical incidents. We also show that the denunciatory power of AI explanations depends strongly on the context in which the explanation takes place, such as the gender or education of the person for whom the explanation is intended. AI ethics tools are therefore sometimes too flexible, and self-regulation through the liberal production of explanations does not seem to be enough to resolve ethical issues. Following a pragmatist STS program, we highlight the role of non-human actors (such as computational paradigms and testing environments) in the formation of structural power relations, such as sexism. We then propose two scenarios for the future development of ethical AI: more external regulation, or further liberalization of AI explanations. These two opposite paths will play a major role in the future development of ethical AI.
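For readers unfamiliar with the kind of "production of explanations" the abstract refers to, the sketch below shows one common way post-hoc explanations are generated in practice: a local linear surrogate fitted around a single decision, in the spirit of LIME. This is an illustrative assumption only; the model, features, data, and the local_explanation helper are hypothetical and do not reproduce the paper's experimental protocol.

```python
# Illustrative sketch only: a local surrogate explanation in the spirit of LIME.
# The model, features, and data are hypothetical, not the paper's protocol.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical tabular data: 4 features, binary decision (e.g., a credit decision).
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def local_explanation(model, x, n_samples=500, scale=0.3):
    """Fit a weighted linear surrogate around instance x; return per-feature attributions."""
    # Perturb the instance and query the black-box model on the perturbations.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    p = model.predict_proba(Z)[:, 1]
    # Weight perturbations by proximity to x (simple RBF kernel).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_  # one attribution per feature

x0 = X[0]
print("feature attributions:", local_explanation(black_box, x0))
```

The paper's argument is not about the implementation of any particular tool, but about which of the many explanations such tools can produce end up being selected and shown when their production is left to self-regulation.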
Keywords: Artificial intelligence; Ethics; Interpretability; Experimentation; Self-regulation; Sustainable Development Goals (search for similar items in EconPapers)
Date: 2022-01
New Economics Papers: this item is included in nep-ain and nep-cmp
Note: View the original document on HAL open archive server: https://hal.science/hal-03395823v1
References: View references in EconPapers; view complete reference list from CitEc
Citations: View citations in EconPapers (6)
Published in Technological Forecasting and Social Change, 2022, 174, pp.121209. ⟨10.1016/j.techfore.2021.121209⟩
Downloads: (external link)
https://hal.science/hal-03395823v1/document (application/pdf)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-03395823
DOI: 10.1016/j.techfore.2021.121209
More papers in Post-Print from HAL
Bibliographic data for series maintained by CCSD.