EconPapers

Annotation-efficient deep learning for automatic medical image segmentation

Shanshan Wang, Cheng Li, Rongpin Wang, Zaiyi Liu, Meiyun Wang, Hongna Tan, Yaping Wu, Xinfeng Liu, Hui Sun, Rui Yang, Xin Liu, Jie Chen, Huihui Zhou, Ismail Ayed and Hairong Zheng
Additional contact information
Shanshan Wang: Chinese Academy of Sciences
Cheng Li: Chinese Academy of Sciences
Rongpin Wang: Guizhou Provincial People’s Hospital
Zaiyi Liu: Guangdong General Hospital, Guangdong Academy of Medical Sciences
Meiyun Wang: Henan Provincial People’s Hospital & the People’s Hospital of Zhengzhou University
Hongna Tan: Henan Provincial People’s Hospital & the People’s Hospital of Zhengzhou University
Yaping Wu: Henan Provincial People’s Hospital & the People’s Hospital of Zhengzhou University
Xinfeng Liu: Guizhou Provincial People’s Hospital
Hui Sun: Chinese Academy of Sciences
Rui Yang: Renmin Hospital of Wuhan University
Xin Liu: Chinese Academy of Sciences
Jie Chen: Peng Cheng Laboratory
Huihui Zhou: Chinese Academy of Sciences
Ismail Ayed: ETS Montreal
Hairong Zheng: Chinese Academy of Sciences

Nature Communications, 2021, vol. 12, issue 1, 1-13

Abstract: Automatic medical image segmentation plays a critical role in scientific research and medical care. Existing high-performance deep learning methods typically rely on large training datasets with high-quality manual annotations, which are difficult to obtain in many clinical applications. Here, we introduce Annotation-effIcient Deep lEarning (AIDE), an open-source framework to handle imperfect training datasets. Methodological analyses and empirical evaluations are conducted, and we demonstrate that AIDE surpasses conventional fully-supervised models by presenting better performance on open datasets possessing scarce or noisy annotations. We further test AIDE in a real-life case study for breast tumor segmentation. Three datasets containing 11,852 breast images from three medical centers are employed, and AIDE, utilizing 10% training annotations, consistently produces segmentation maps comparable to those generated by fully-supervised counterparts or provided by independent radiologists. The 10-fold enhanced efficiency in utilizing expert labels has the potential to promote a wide range of biomedical applications.
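The abstract's claim that AIDE's segmentation maps are "comparable to those generated by fully-supervised counterparts or provided by independent radiologists" is typically quantified with an overlap metric such as the Dice similarity coefficient. The following is a generic, minimal sketch of that metric in plain Python, for illustration only; it is not taken from the paper's released AIDE code.

```python
# Dice similarity coefficient between two flat binary segmentation masks,
# a standard metric for comparing a model's output against expert annotations.
# Generic illustration; not code from the AIDE framework.

def dice_coefficient(pred, truth):
    """Dice = 2 * |A intersect B| / (|A| + |B|) for equal-length binary masks."""
    assert len(pred) == len(truth), "masks must have the same number of pixels"
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Example: model prediction vs. reference annotation on a flattened 3x3 image
pred  = [0, 1, 1, 0, 1, 0, 0, 0, 0]
truth = [0, 1, 1, 0, 0, 0, 0, 1, 0]
print(round(dice_coefficient(pred, truth), 3))  # 2*2/(3+3) ≈ 0.667
```

A Dice score of 1.0 indicates perfect overlap with the reference segmentation; scores near those of fully supervised models or inter-radiologist agreement support the kind of comparison the abstract describes.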

Date: 2021
Citations: 2 (view in EconPapers)

Downloads: (external link)
https://www.nature.com/articles/s41467-021-26216-9 Abstract (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:12:y:2021:i:1:d:10.1038_s41467-021-26216-9

Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/

DOI: 10.1038/s41467-021-26216-9

Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

Page updated 2025-03-19
Handle: RePEc:nat:natcom:v:12:y:2021:i:1:d:10.1038_s41467-021-26216-9