EconPapers

Trustworthy AI for medical decisions: Adversarially robust and fair machine learning prediction for Parkinson’s disease

Junaid Muhammad, Mitra Ghergherehchi, Shiraz Ali, Ho Seung Song and Nasir Rahim

PLOS ONE, 2026, vol. 21, issue 2, 1-31

Abstract: Parkinson’s disease (PD) is a neurodegenerative disorder characterized by motor and non-motor symptoms, including tremor, rigidity, and postural instability. Machine learning (ML) models have shown promise for PD diagnosis; however, many existing approaches do not explicitly address fairness and robustness, so they can produce biased outcomes across demographic groups and remain vulnerable to adversarial attacks. In this study, we used the Parkinson’s Progression Markers Initiative (PPMI) cohort, which includes clinical and demographic information from 1,084 participants spanning diverse age, sex, and racial groups. Our study addresses the key challenge of developing robust and equitable ML models for predicting PD progression. We evaluated the performance of two fairness-optimized classifiers, Random Forest (RF) and Decision Tree (DT). To evaluate model vulnerability, we applied adversarial techniques, specifically label leakage and data poisoning attacks, which simulate intentional or erroneous data alterations that can amplify biases and degrade accuracy. These adversarial manipulations substantially degraded model performance: DT accuracy declined by more than 10% between sensitive groups, and RF accuracy decreased by 20%. Moreover, under attack, fairness metrics worsened, including Statistical Parity Difference (SPD), which measures the difference in positive-prediction rates across demographic groups, and Equal Opportunity Difference (EOD), which measures the difference in true positive rates between groups. This pattern suggests that adversarial perturbations increased bias and widened performance disparities across demographic groups. Our results demonstrated that adversarial attacks increased the incidence of false positives and false negatives, thereby lowering the accuracy and fairness of PD diagnostic predictions. These findings underscore the urgent need for robust, fairness-aware defenses in medical AI to mitigate racial, age, and gender disparities and ensure reliable clinical decision-making.
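The abstract does not give formulas for the fairness metrics it names, but SPD and EOD have standard definitions, sketched below with NumPy on toy data. The sign convention (group 0 minus group 1) and the toy arrays are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """SPD: P(y_hat=1 | group=0) - P(y_hat=1 | group=1),
    i.e. the gap in positive-prediction rates between groups."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """EOD: difference in true positive rates between groups,
    computed only over samples whose true label is positive."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

# Hypothetical predictions, labels, and a binary sensitive attribute.
y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(statistical_parity_difference(y_pred, group))        # 0.0
print(equal_opportunity_difference(y_true, y_pred, group)) # about -0.333
```

A value of 0 for either metric indicates parity between groups; under an adversarial perturbation such as label flipping, the magnitudes of SPD and EOD typically grow, which is the degradation the abstract describes.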

Date: 2026

Downloads: (external link)
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0342062 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 42062&type=printable (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0342062

DOI: 10.1371/journal.pone.0342062


More articles in PLOS ONE from Public Library of Science
Bibliographic data for series maintained by plosone.

 
Page updated 2026-03-08
Handle: RePEc:plo:pone00:0342062