Generative AI mitigates representation bias and improves model fairness through synthetic health data

Raffaele Marchesi, Nicolo Micheletti, Nicholas I-Hsien Kuo, Sebastiano Barbieri, Giuseppe Jurman and Venet Osmani

PLOS Computational Biology, 2025, vol. 21, issue 5, 1-23

Abstract: Representation bias in health data can lead to unfair decisions and compromise the generalisability of research findings. As a consequence, underrepresented subpopulations, such as those from specific ethnic backgrounds or genders, do not benefit equally from clinical discoveries. Several approaches have been developed to mitigate representation bias, ranging from simple resampling methods, such as SMOTE, to recent approaches based on generative adversarial networks (GANs). However, generating high-dimensional time-series synthetic health data remains a significant challenge. In response, we devised a novel architecture (CA-GAN) that synthesises authentic, high-dimensional time-series data. CA-GAN outperforms state-of-the-art methods in both qualitative and quantitative evaluations while avoiding mode collapse, a serious GAN failure. We perform the evaluation using 7535 patients with hypotension and sepsis from two diverse, real-world clinical datasets. We show that synthetic data generated by our CA-GAN improve model fairness for Black patients as well as female patients when evaluated separately for each subpopulation. Furthermore, CA-GAN generates authentic data of the minority class while faithfully maintaining the original distribution of the data, resulting in improved performance on a downstream predictive task.

Author summary: Doctors and other healthcare professionals are increasingly using Artificial Intelligence (AI) to make better decisions about patients' diagnoses, suggest optimal treatments, and estimate patients' future health risks. These AI systems learn from existing health data, which may not accurately reflect the health of everyone, particularly people from certain racial or ethnic groups, genders, or those with lower incomes. This can mean the AI does not work as well for these groups and could even widen existing health disparities. To address this, we have developed purpose-built AI software that can create synthetic patient data. Synthetic data created by our software mimic real patient data without actually copying them, protecting patients' privacy. Using our synthetic data results in a dataset that is more representative of all groups and ensures that AI algorithms learn to be fairer for all patients.
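The abstract contrasts CA-GAN with simple resampling baselines such as SMOTE. As a rough illustration of that baseline only (not of the authors' CA-GAN architecture), the sketch below implements SMOTE's core idea — creating synthetic minority-class samples by interpolating between a real sample and one of its nearest minority-class neighbours — in plain NumPy. The function name and parameters are invented for illustration; a production workflow would typically use an established library implementation instead.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE-style oversampling: generate n_new synthetic
    minority-class samples by interpolating between randomly chosen
    samples and one of their k nearest minority-class neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # a sample is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]     # k nearest neighbours of each sample
    base = rng.integers(0, n, size=n_new)                 # random seed samples
    neigh = nn[base, rng.integers(0, k, size=n_new)]      # one neighbour each
    gap = rng.random((n_new, 1))          # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[neigh] - X_min[base])

# Toy minority class: 5 points in 2-D; synthesise 10 more.
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
X_syn = smote_oversample(X_min, n_new=10, rng=0)
print(X_syn.shape)  # (10, 2)
```

Because each synthetic point lies on a segment between two real minority samples, this method cannot generate the kind of high-dimensional time-series structure the paper targets — which is the gap the GAN-based approaches, including CA-GAN, aim to fill.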

Date: 2025

Downloads: (external link)
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1013080 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 13080&type=printable (application/pdf)


Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1013080

DOI: 10.1371/journal.pcbi.1013080

More articles in PLOS Computational Biology from Public Library of Science
Bibliographic data for this series maintained by ploscompbiol.

 
Page updated 2025-05-24
Handle: RePEc:plo:pcbi00:1013080