Translating Intersectionality to Fair Machine Learning in Health Sciences
Elle Lett and William La Cava
No. gu7yh, SocArXiv from Center for Open Science
Abstract:
Machine learning (ML)-derived tools are rapidly being deployed as an additional input in the clinical decision-making process to optimize health interventions. However, ML models also risk propagating societal discrimination and exacerbating existing health inequities. The field of ML fairness has focused on developing approaches to mitigate bias in ML models. To date, the focus has been on the model fitting process, simplifying the processes of structural discrimination to definitions of model bias based on performance metrics. Here, we reframe the ML task through the lens of intersectionality, a Black feminist theoretical framework that contextualizes individuals in interacting systems of power and oppression, linking inquiry into measuring fairness to the pursuit of health justice. In doing so, we present intersectional ML fairness as a paradigm shift that moves from an emphasis on model metrics to an approach for ML that is centered around achieving more equitable health outcomes.
Date: 2023-02-27
New Economics Papers: this item is included in nep-big, nep-cmp and nep-hme
Downloads: https://osf.io/download/63fd1bd940cecd079876f20c/
Persistent link: https://EconPapers.repec.org/RePEc:osf:socarx:gu7yh
DOI: 10.31219/osf.io/gu7yh