
Enhancing fairness in disease prediction by optimizing multiple domain adversarial networks

Bin Li, Xiaoqian Jiang, Kai Zhang, Arif O Harmanci, Bradley Malin, Hongchang Gao, Xinghua Shi and the Alzheimer’s Disease Neuroimaging Initiative

PLOS Digital Health, 2025, vol. 4, issue 5, 1-17

Abstract: Predictive models in biomedicine need to ensure equitable and reliable outcomes for the populations to which they are applied. Biases in AI models for medical prediction can lead to unfair treatment and widening disparities, underscoring the need for effective techniques to address these issues. Current approaches, however, struggle to simultaneously mitigate biases induced by multiple sensitive features in biomedical data. To enhance fairness, we introduce a framework based on a Multiple Domain Adversarial Neural Network (MDANN), which incorporates multiple adversarial components. In an MDANN, an adversarial module learns fair representations by back-propagating negative gradients across multiple sensitive features (i.e., patient characteristics that should not influence a prediction, as they may intentionally or unintentionally lead to disparities in clinical decisions). The MDANN applies loss functions based on the Area Under the Receiver Operating Characteristic Curve (AUC) to address class imbalance, promoting equitable classification performance for minority groups (e.g., subsets of the population that are underrepresented or disadvantaged). Moreover, we use pre-trained convolutional autoencoders (CAEs) to extract deep representations of the data, aiming to enhance both prediction accuracy and fairness. Combining these mechanisms, we mitigate multiple biases and disparities to provide reliable and equitable disease prediction. We empirically demonstrate that, compared with other adversarial networks, MDANN achieves better accuracy and fairness in predicting disease progression from brain imaging data while mitigating multiple demographic biases in Alzheimer's Disease and Autism populations.

Author summary: In healthcare, the promise of personalized medicine through predictive modeling has been shadowed by the challenge of biases, which can skew outcomes and exacerbate disparities. In this study, we develop unbiased predictive machine learning models that serve diverse populations fairly. We propose the Multiple Domain Adversarial Neural Network (MDANN), a framework that addresses the complex issue of fairness in biomedical disease prediction by simultaneously considering multiple sensitive attributes. By integrating adversarial components to learn unbiased patterns and employing a minimax loss function based on the Area Under the Receiver Operating Characteristic Curve (AUC), we enhance classification performance for underrepresented groups. Our experiments with brain imaging data for Alzheimer's Disease and Autism showcase MDANN's capability to improve accuracy and fairness in predictions, marking a significant advancement toward equitable healthcare outcomes. Through this work, we demonstrate the potential of MDANN in biomedical applications and underscore the importance of fairness in the development of AI models for healthcare, paving the way for future research in AI ethics and fairness.
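To make the described architecture concrete, the sketch below illustrates the core mechanism from the abstract: a shared encoder (standing in for the pre-trained CAE), a disease-prediction head, one adversarial head per sensitive feature trained through gradient reversal, and a pairwise surrogate for the AUC loss. This is a minimal illustration in PyTorch; the module names, layer sizes, and the particular AUC surrogate are assumptions for exposition, not the authors' published implementation.

# Minimal sketch of the multiple-domain adversarial idea (illustrative only).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negative gradients flow back into the encoder, removing
        # sensitive-feature information from the shared representation.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class MDANNSketch(nn.Module):
    """Shared encoder + disease head + one adversary per sensitive feature."""
    def __init__(self, in_dim, hid_dim, n_domains_per_feature):
        super().__init__()
        # Stand-in for the pre-trained convolutional autoencoder's encoder.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.disease_head = nn.Linear(hid_dim, 1)
        # One adversarial head per sensitive feature (e.g., sex, race, age group).
        self.adversaries = nn.ModuleList(
            [nn.Linear(hid_dim, k) for k in n_domains_per_feature]
        )

    def forward(self, x, lambd=1.0):
        z = self.encoder(x)
        y_logit = self.disease_head(z).squeeze(-1)
        adv_logits = [adv(grad_reverse(z, lambd)) for adv in self.adversaries]
        return y_logit, adv_logits

def pairwise_auc_loss(logits, labels):
    """Squared-hinge pairwise surrogate for 1 - AUC (one of several choices)."""
    pos, neg = logits[labels == 1], logits[labels == 0]
    if pos.numel() == 0 or neg.numel() == 0:
        return logits.new_zeros(())
    diffs = pos.unsqueeze(1) - neg.unsqueeze(0)  # all positive/negative pairs
    return torch.clamp(1.0 - diffs, min=0).pow(2).mean()

if __name__ == "__main__":
    # Toy forward/backward pass: 8 samples, 64 features, two hypothetical
    # sensitive attributes with 2 and 3 groups (all shapes illustrative).
    model = MDANNSketch(in_dim=64, hid_dim=32, n_domains_per_feature=[2, 3])
    x, y = torch.randn(8, 64), torch.randint(0, 2, (8,))
    y_logit, adv_logits = model(x, lambd=0.5)
    loss = pairwise_auc_loss(y_logit, y) + sum(
        nn.functional.cross_entropy(a, torch.randint(0, a.shape[1], (8,)))
        for a in adv_logits
    )
    loss.backward()

Because the reversal layer is identity in the forward direction, the adversaries train normally to predict each sensitive attribute, while the shared encoder receives the negated gradient and is pushed toward representations from which those attributes cannot be recovered.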

Date: 2025

Downloads: (external link)
https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000830 (text/html)
https://journals.plos.org/digitalhealth/article/fi ... 00830&type=printable (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:plo:pdig00:0000830

DOI: 10.1371/journal.pdig.0000830


More articles in PLOS Digital Health from Public Library of Science
Bibliographic data for series maintained by digitalhealth.

 
Handle: RePEc:plo:pdig00:0000830