Re-enacting machine learning practices to enquire into the moral issues they pose
Jean-Marie John-Mathews (jeanmarie.johnmathews@sciencespo.fr),
Robin de Mourat (robin.demourat@gmail.com),
Donato Ricci (donato.ricci@sciencespo.fr) and
Maxime Crépel (maxime.crepel@sciencespo.fr)
Additional contact information
Jean-Marie John-Mathews: Université Paris-Saclay, LITEM (Laboratoire en Innovation, Technologies, Économie et Management, EA 7363), Université d'Évry-Val-d'Essonne, and Institut Mines-Télécom Business School (IMT-BS), Institut Mines-Télécom, Paris
Robin de Mourat: médialab, Sciences Po
Donato Ricci: médialab, Sciences Po
Maxime Crépel: médialab, Sciences Po
Post-Print from HAL
Abstract:
As the number of ethical incidents associated with Machine Learning (ML) algorithms increases worldwide, many actors are seeking to produce technical and legal tools to regulate the professional practices associated with these technologies. However, these tools, generally grounded either in lofty principles or in technical approaches, often fail to address the complexity of the moral issues that ML-based systems trigger. They mostly rest on a ‘principled’ conception of morality in which technical practices can be seen as no more than mere means put at the service of more valuable moral ends. We argue that it is necessary to localise ethical debates within the complex entanglement of technical, legal and organisational entities from which ML moral issues stem. To expand the repertoire of approaches through which these issues might be addressed, we designed and tested an interview protocol based on the re-enactment of data scientists' daily ML practices. We asked them to recall and describe the crafting and choosing of algorithms. Our protocol then added two reflexivity-fostering elements to the situation: technical tools for assessing algorithms' morality, based on incorporated ‘ethicality’ indicators; and a series of staged objections to these technical solutions to ML moral issues, voiced by factitious actors inspired by the data scientists' daily environment. We used this protocol to observe how ML data scientists uncover associations with multiple entities in order to address moral issues from within the course of their technical practices. We thus reframe ML morality as an inquiry into the uncertain options that practitioners face in the heat of technical activity. We propose to institute moral enquiries both as a descriptive method for delineating alternative depictions of ML algorithms when they are affected by moral issues, and as a transformative method for propagating situated critical technical practices within ML-building professional environments.
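The paper does not specify which ‘ethicality’ indicators its assessment tools incorporate; as a purely hypothetical illustration of the kind of quantitative indicator such tooling typically builds on, the sketch below computes a demographic parity difference, a common group-fairness metric. The function name and example data are assumptions, not taken from the paper.

```python
# Hypothetical sketch (not from the paper): an 'ethicality' indicator of the
# group-fairness family, measuring the gap in positive-prediction rates
# between two groups defined by a protected attribute.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: array of 0/1 predictions produced by a model.
    group:  array of 0/1 group-membership labels (e.g. a protected attribute).
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Example: a model that grants loans to 60% of group 0 but only 40% of group 1
preds = [1, 1, 1, 0, 0, 1, 0, 0, 0, 1]
groups = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # -> 0.2
```

A value of 0 would indicate equal positive-prediction rates across groups; larger values flag a disparity that such tools would surface to the data scientist as a prompt for reflection rather than as a verdict.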
Date: 2024-02
Published in: Convergence, 2024, 30(1), pp. 66–93. ⟨10.1177/13548565231174584⟩
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-04446604
DOI: 10.1177/13548565231174584