Large-vocabulary forensic pathological analyses via prototypical cross-modal contrastive learning

Chen Shen, Chunfeng Lian, Wanqing Zhang, Fan Wang, Jianhua Zhang, Shuanliang Fan, Xin Wei, Gongji Wang, Kehan Li, Hongshu Mu, Hao Wu, Xinggong Liang, Jianhua Ma and Zhenyuan Wang
Additional contact information
Chen Shen: Xi’an Jiaotong University
Chunfeng Lian: Xi’an Jiaotong University
Wanqing Zhang: Xi’an Jiaotong University
Fan Wang: Xi’an Jiaotong University
Jianhua Zhang: Academy of Forensic Science
Shuanliang Fan: Xi’an Jiaotong University
Xin Wei: Xi’an Jiaotong University
Gongji Wang: Xi’an Jiaotong University
Kehan Li: Xi’an Jiaotong University
Hongshu Mu: Xianyang Public Security Bureau
Hao Wu: Xi’an Jiaotong University
Xinggong Liang: Xi’an Jiaotong University
Jianhua Ma: Pazhou Lab (Huangpu)
Zhenyuan Wang: Xi’an Jiaotong University

Nature Communications, 2025, vol. 16, issue 1, 1-20

Abstract: Forensic pathology plays a vital role in determining the cause and manner of death through macroscopic and microscopic post-mortem examinations. However, the field faces challenges such as variability in outcomes, labor-intensive processes, and a shortage of skilled professionals. This paper introduces SongCi, a visual-language model tailored for forensic pathology. Leveraging advanced prototypical cross-modal self-supervised contrastive learning, SongCi improves the accuracy, efficiency, and generalizability of forensic analyses. Pre-trained and validated on a large multi-center dataset comprising over 16 million high-resolution image patches, 2,228 vision-language pairs from post-mortem whole slide images, gross key findings, and 471 unique diagnostic outcomes, SongCi demonstrates superior performance over existing multi-modal models and computational pathology foundation models in forensic tasks. It matches experienced forensic pathologists’ capabilities, significantly outperforms less experienced practitioners, and offers robust multi-modal explainability.
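
To make the technique named in the abstract concrete, below is a minimal sketch of prototypical cross-modal contrastive learning in PyTorch. It is not the authors' SongCi implementation; the class name, prototype count, and temperature are hypothetical placeholders, and the loss shown (a CLIP-style InfoNCE over shared prototype assignments) is one common way to realize the idea.

# Minimal sketch of prototypical cross-modal contrastive learning in
# PyTorch. Hypothetical illustration only -- NOT the SongCi code; names
# and hyperparameters (n_prototypes, temperature) are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtoCrossModalLoss(nn.Module):
    def __init__(self, dim=512, n_prototypes=64, temperature=0.07):
        super().__init__()
        # Prototypes shared by both modalities; learned with the encoders.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, dim))
        self.temperature = temperature

    def assign(self, feats):
        # Soft-assign L2-normalized features to the prototypes.
        feats = F.normalize(feats, dim=-1)
        protos = F.normalize(self.prototypes, dim=-1)
        return F.softmax(feats @ protos.t() / self.temperature, dim=-1)

    def forward(self, img_emb, txt_emb):
        # img_emb, txt_emb: (batch, dim) outputs of the two encoders.
        p_img = self.assign(img_emb)          # (batch, n_prototypes)
        p_txt = self.assign(txt_emb)
        # InfoNCE over assignment similarity: a matched image/text pair
        # should share a prototype distribution; other pairs in the
        # batch act as negatives.
        logits = p_img @ p_txt.t() / self.temperature
        targets = torch.arange(img_emb.size(0), device=img_emb.device)
        return 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets))

# Stand-in usage with random embeddings:
loss_fn = ProtoCrossModalLoss()
img = torch.randn(8, 512)  # e.g. pooled WSI-patch embeddings
txt = torch.randn(8, 512)  # e.g. gross-finding / diagnosis text embeddings
print(loss_fn(img, txt).item())

Routing both modalities through a shared prototype layer, rather than contrasting raw embeddings directly, encourages patch and text features to organize around a common vocabulary of concepts; this is the intuition behind the prototypical variant, though the paper's exact formulation may differ.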

Date: 2025

Downloads: (external link)
https://www.nature.com/articles/s41467-025-62060-x Abstract (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-62060-x

Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/

DOI: 10.1038/s41467-025-62060-x

Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-62060-x