A concept-based interpretable model for the diagnosis of choroid neoplasias using multimodal data
Yifan Wu,
Yang Liu,
Yue Yang,
Michael S. Yao,
Wenli Yang,
Xuehui Shi,
Lihong Yang,
Dongjun Li,
Yueming Liu,
Shiyi Yin,
Chunyan Lei,
Meixia Zhang,
James C. Gee,
Xuan Yang,
Wenbin Wei and
Shi Gu
Additional contact information
Yifan Wu: University of Pennsylvania
Yang Liu: University of Electronic Science and Technology of China
Yue Yang: University of Pennsylvania
Michael S. Yao: University of Pennsylvania
Wenli Yang: Capital Medical University
Xuehui Shi: Capital Medical University
Lihong Yang: Capital Medical University
Dongjun Li: Capital Medical University
Yueming Liu: Capital Medical University
Shiyi Yin: Capital Medical University
Chunyan Lei: Sichuan University
Meixia Zhang: Sichuan University
James C. Gee: University of Pennsylvania
Xuan Yang: Capital Medical University
Wenbin Wei: Capital Medical University
Shi Gu: University of Electronic Science and Technology of China
Nature Communications, 2025, vol. 16, issue 1, 1-14
Abstract:
Diagnosing rare diseases remains a critical challenge in clinical practice, often requiring specialist expertise. Despite the promising potential of machine learning, the scarcity of data on rare diseases and the need for interpretable, reliable artificial intelligence (AI) models complicate development. This study introduces a multimodal concept-based interpretable model, tailored to distinguish uveal melanoma (0.4-0.6 per million among Asians) from hemangioma and metastatic carcinoma, that follows clinical diagnostic practice. We collected the largest dataset to date of choroid neoplasm imaging with radiological reports in Asian patients, encompassing over 750 patients from 2013 to 2019. Our model integrates domain expert insights from radiological reports and differentiates between three types of choroidal tumors, achieving an F1 score of 0.91. This performance not only matches that of senior ophthalmologists but also improves the diagnostic accuracy of less experienced clinicians by 42%. These results underscore the potential of interpretable AI to enhance rare disease diagnosis and pave the way for future advances in medical AI.
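The abstract describes the model only at a high level. As a rough, hedged illustration of the concept-based approach it names, the sketch below shows one plausible shape of a concept-bottleneck-style classifier in PyTorch: an image encoder predicts scores for clinician-defined concepts, and a single linear layer maps those scores to the three tumor classes, keeping the final decision an inspectable weighted sum of human-readable concepts. Every name, dimension, backbone choice, and example concept here is an assumption made for illustration, not the authors' published implementation.

# Minimal concept-bottleneck sketch (illustrative only; not the paper's code).
# Assumptions: images arrive as 3x224x224 tensors, `n_concepts` clinician-
# defined concepts are scored in [0, 1], and three tumor classes are predicted
# (melanoma, hemangioma, metastatic carcinoma).
import torch
import torch.nn as nn

class ConceptBottleneckNet(nn.Module):
    def __init__(self, n_concepts: int = 20, n_classes: int = 3):
        super().__init__()
        # Image encoder: a small CNN stands in for whatever backbone is used.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Concept head: each output is the predicted presence of one
        # human-readable concept (e.g. "orange pigment", "subretinal fluid").
        self.concept_head = nn.Linear(32, n_concepts)
        # Task head: a single linear layer keeps the class decision a
        # transparent weighted combination of concept scores.
        self.task_head = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concepts = torch.sigmoid(self.concept_head(self.encoder(x)))
        logits = self.task_head(concepts)
        return concepts, logits

model = ConceptBottleneckNet()
image = torch.randn(1, 3, 224, 224)
concepts, logits = model(image)
# Training would supervise `concepts` against concept labels mined from the
# radiological reports and `logits` against the final tumor diagnosis.

In such a design, the multimodal aspect the abstract mentions would enter through the concept labels themselves, which are extracted from the radiological reports rather than annotated from scratch.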
Date: 2025
Downloads: https://www.nature.com/articles/s41467-025-58801-7 Abstract (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-58801-7
Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/
DOI: 10.1038/s41467-025-58801-7
Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.