Deep learning quantifies pathologists’ visual patterns for whole slide image diagnosis
Tianhang Nan,
Song Zheng,
Siyuan Qiao,
Hao Quan,
Xin Gao,
Jun Niu,
Bin Zheng,
Chunfang Guo,
Yue Zhang,
Xiaoqin Wang,
Liping Zhao,
Ze Wu,
Yaoxing Guo,
Xingyu Li,
Mingchen Zou,
Shuangdi Ning,
Yue Zhao,
Wei Qian,
Hongduo Chen,
Ruiqun Qi,
Xinghua Gao and
Xiaoyu Cui
Additional contact information
Tianhang Nan: Northeastern University
Song Zheng: The First Hospital of China Medical University
Siyuan Qiao: Fudan University
Hao Quan: Northeastern University
Xin Gao: King Abdullah University of Science and Technology (KAUST)
Jun Niu: General Hospital of Northern Theater Command
Bin Zheng: Northeastern University
Chunfang Guo: Shenyang Seventh People’s Hospital
Yue Zhang: Shengjing hospital of China Medical University
Xiaoqin Wang: King Abdullah University of Science and Technology
Liping Zhao: Zhongyi Northeast International Hospital
Ze Wu: King Abdullah University of Science and Technology (KAUST)
Yaoxing Guo: The First Hospital of China Medical University
Xingyu Li: Northeastern University
Mingchen Zou: Northeastern University
Shuangdi Ning: Northeastern University
Yue Zhao: Northeastern University
Wei Qian: Northeastern University
Hongduo Chen: The First Hospital of China Medical University
Ruiqun Qi: The First Hospital of China Medical University
Xinghua Gao: The First Hospital of China Medical University
Xiaoyu Cui: Northeastern University
Nature Communications, 2025, vol. 16, issue 1, 1-14
Abstract:
Pixelwise manual annotation, grounded in the expertise of pathologists, has provided substantial support for training deep learning models for whole slide image (WSI)-assisted diagnosis. However, collecting pixelwise annotations demands massive amounts of pathologists' time, imposing a heavy burden on medical manpower and hindering the construction of larger datasets and more precise diagnostic models. To capture pathologists' expertise with minimal workload while still achieving precise diagnosis, we collect pathologists' image-review patterns with eye-tracking devices. Based on these visual patterns, we design a deep learning system, the Pathology Expertise Acquisition Network (PEAN), which decodes pathologists' expertise and then diagnoses WSIs. Eye trackers reduce the time required to annotate WSIs to 4% of that needed for manual annotation. We evaluate PEAN on 5881 WSIs spanning 5 categories of skin lesions, achieving a high area under the curve of 0.992 and an accuracy of 96.3% on diagnostic prediction. This study fills the gap left by existing models' inability to learn from pathologists' diagnostic processes. Its efficient data annotation and precise diagnosis support both large-scale data collection and clinical care.
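The abstract describes turning pathologists' eye-tracking records into a training signal for a WSI diagnostic model. The sketch below is a minimal illustration of one way fixation data could be aggregated into patch-level weak labels; it is not the authors' PEAN implementation, and the function names, patch size, and quantile threshold are assumptions chosen only for illustration.

# Minimal illustrative sketch (not the authors' PEAN method): aggregating
# eye-tracking fixations into patch-level weak labels for a WSI classifier.
# All names and parameter values here are hypothetical assumptions.
import numpy as np

def fixations_to_patch_heatmap(fixations, slide_size, patch_size=256):
    """Accumulate fixation dwell time onto a grid of WSI patches.

    fixations : iterable of (x, y, duration_ms) in slide pixel coordinates
    slide_size: (width, height) of the slide in pixels
    patch_size: edge length of square patches in pixels
    """
    n_cols = int(np.ceil(slide_size[0] / patch_size))
    n_rows = int(np.ceil(slide_size[1] / patch_size))
    heatmap = np.zeros((n_rows, n_cols), dtype=np.float64)
    for x, y, duration_ms in fixations:
        col = min(int(x // patch_size), n_cols - 1)
        row = min(int(y // patch_size), n_rows - 1)
        heatmap[row, col] += duration_ms
    # Normalize so values are comparable across slides and viewing sessions.
    total = heatmap.sum()
    return heatmap / total if total > 0 else heatmap

def heatmap_to_weak_labels(heatmap, quantile=0.9):
    """Flag the most-viewed patches as diagnostically relevant (weak labels)."""
    viewed = heatmap[heatmap > 0]
    if viewed.size == 0:
        return np.zeros_like(heatmap, dtype=bool)
    threshold = np.quantile(viewed, quantile)
    return heatmap >= threshold

# Usage: three fixations recorded on a hypothetical 4096 x 4096 slide.
fixations = [(300, 400, 220.0), (310, 420, 480.0), (3000, 1200, 150.0)]
heatmap = fixations_to_patch_heatmap(fixations, slide_size=(4096, 4096))
labels = heatmap_to_weak_labels(heatmap)
print(labels.sum(), "patch(es) flagged as heavily viewed")

Such patch-level weak labels could stand in for pixelwise annotations when training a downstream WSI model, which is consistent with the abstract's reported reduction of annotation time to 4% of manual annotation.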
Date: 2025
Downloads: https://www.nature.com/articles/s41467-025-60307-1 (abstract, text/html)
Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-60307-1
Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/
DOI: 10.1038/s41467-025-60307-1
Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie