The quality of mercy is not trained: the imagined vs. the practiced in healthcare process-specialized AI development
Anand Bhardwaj and Samer Faraj
Papers from arXiv.org
Abstract:
In high-stakes organizational contexts like healthcare, artificial intelligence (AI) systems are increasingly being designed to augment complex coordination tasks. This paper investigates how the ethical stakes of such systems are shaped by their epistemic framings: what aspects of work they represent, and what they exclude. Drawing on an embedded study of AI development for operating room (OR) scheduling at a Canadian hospital, we compare scheduling-as-imagined in the AI design process (rule-bound, predictable, and surgeon-centric) with scheduling-as-practiced (a fluid, patient-facing coordination process involving ethical discretion). We show how early representational decisions narrowed what the AI could support, resulting in epistemic foreclosure: the premature exclusion of key ethical dimensions from system design. Our findings surface the moral consequences of abstraction and call for a more situated approach to designing healthcare process-specialized AI systems.
Date: 2025-10
New Economics Papers: this item is included in nep-hea and nep-hme
Downloads: http://arxiv.org/pdf/2510.21843 Latest version (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2510.21843