EconPapers    
 

General lightweight framework for vision foundation model supporting multi-task and multi-center medical image analysis

Senliang Lu, Yehang Chen, Yuan Chen, Peijun Li, Junqi Sun, Changye Zheng, Yujian Zou, Bo Liang, Mingwei Li, Qinggeng Jin, Enming Cui, Wansheng Long () and Bao Feng ()
Additional contact information
Senliang Lu: Guilin University of Aerospace Technology
Yehang Chen: Guilin University of Aerospace Technology
Yuan Chen: Jiangmen Central Hospital
Peijun Li: Jiangmen Central Hospital
Junqi Sun: Yuebei People’s Hospital
Changye Zheng: Affiliated Dongguan Hospital, Southern Medical University
Yujian Zou: Affiliated Dongguan Hospital, Southern Medical University
Bo Liang: Maoming People’s Hospital
Mingwei Li: Kaiping Central Hospital
Qinggeng Jin: Guangxi University
Enming Cui: Jiangmen Central Hospital
Wansheng Long: Jiangmen Central Hospital
Bao Feng: Guilin University of Aerospace Technology

Nature Communications, 2025, vol. 16, issue 1, 1-16

Abstract: Foundation models, trained on extensive and diverse datasets, have shown strong performance across numerous downstream tasks. Nevertheless, their application in the medical domain is significantly hindered by issues such as data volume, heterogeneity, and privacy concerns. We therefore propose the Vision Foundation Model General Lightweight (VFMGL) framework, which facilitates the decentralized construction of expert clinical models for various medical tasks. The VFMGL framework transfers general knowledge from large-parameter vision foundation models to construct lightweight, robust expert clinical models tailored to specific medical tasks. Through extensive experiments and analyses across a range of medical tasks and scenarios, we demonstrate that VFMGL achieves superior performance in both medical image classification and segmentation, effectively managing the challenges posed by data heterogeneity. These results underscore the potential of VFMGL to advance the efficacy and reliability of AI-driven medical diagnostics.
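The record does not specify how VFMGL transfers knowledge from the large foundation model to the lightweight expert model; the standard mechanism for this kind of teacher-to-student transfer is knowledge distillation, and the minimal sketch below illustrates that general idea only. The temperature `T`, the function names, and the loss form are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence from the student's softened outputs to the
    teacher's, scaled by T^2 as in classic knowledge distillation.
    Zero when the student exactly matches the teacher."""
    p = softmax(teacher_logits, T)  # teacher (large foundation model)
    q = softmax(student_logits, T)  # student (lightweight expert model)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

Minimizing this loss (typically mixed with an ordinary supervised loss on labeled data) pulls the small model's predictive distribution toward the large model's, which is one common way a lightweight clinical model can inherit general knowledge without retraining the foundation model itself.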

Date: 2025

Downloads: (external link)
https://www.nature.com/articles/s41467-025-57427-z Abstract (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-57427-z

Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/

DOI: 10.1038/s41467-025-57427-z


Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla () and Springer Nature Abstracting and Indexing ().

 
Page updated 2025-03-19
Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-57427-z