Deep learning approach for screening neonatal cerebral lesions on ultrasound in China
Zhouqin Lin,
Haoming Zhang,
Xingxing Duan,
Yan Bai,
Jian Wang,
Qianhong Liang,
Jingran Zhou,
Fusui Xie,
Zhen Shentu,
Ruobing Huang,
Yayan Chen,
Hongkui Yu,
Zongjie Weng,
Dong Ni,
Lei Liu and
Luyao Zhou
Additional contact information
Zhouqin Lin: Shenzhen Children’s Hospital
Haoming Zhang: Shenzhen University
Xingxing Duan: Changsha Hospital for Maternal and Child Health Care
Yan Bai: Sichuan Provincial Women’s and Children’s Hospital/The Affiliated Women’s and Children’s Hospital of Chengdu Medical College
Jian Wang: Shenzhen University
Qianhong Liang: Panyu Maternal and Child Care Service Centre of Guangzhou
Jingran Zhou: Shenzhen Children’s Hospital
Fusui Xie: Shenzhen Children’s Hospital
Zhen Shentu: Shenzhen Pediatrics Institute of Shantou University Medical College
Ruobing Huang: Shenzhen University
Yayan Chen: Ultrasound Department of Longhua District Maternal and Child Healthcare Hospital
Hongkui Yu: Shenzhen Baoan Women’s and Children’s Hospital
Zongjie Weng: Fujian Medical University
Dong Ni: Shenzhen University
Lei Liu: Shenzhen Children’s Hospital
Luyao Zhou: Shenzhen Children’s Hospital
Nature Communications, 2025, vol. 16, issue 1, 1-15
Abstract: Timely and accurate diagnosis of severe neonatal cerebral lesions is critical for preventing long-term neurological damage and addressing life-threatening conditions. Cranial ultrasound is the primary screening tool, but the process is time-consuming and reliant on the operator's proficiency. In this study, a deep-learning-powered neonatal cerebral lesion screening system, capable of automatically extracting standard views from cranial ultrasound videos and identifying cases with severe cerebral lesions, is developed based on 8,757 neonatal cranial ultrasound images. The system achieves areas under the curve of 0.982 and 0.944, with sensitivities of 0.875 and 0.962, on internal and external video datasets, respectively. Furthermore, the system outperforms junior radiologists and performs on par with mid-level radiologists, while improving examination efficiency by 55.11%. In conclusion, the developed system can automatically extract standard views from cranial ultrasound videos and make correct diagnoses efficiently, and may be useful for deployment in multiple application scenarios.
Date: 2025
Downloads: https://www.nature.com/articles/s41467-025-63096-9 Abstract (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-63096-9
Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/
DOI: 10.1038/s41467-025-63096-9
Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.