EconPapers

SHREC: A framework for advancing next-generation computational phenotyping with large language models

Sarah Pungitore, Shashank Yadav, Molly Douglas, Jarrod Mosier and Vignesh Subbian

PLOS Digital Health, 2026, vol. 5, issue 2, 1-13

Abstract: Computational phenotyping is a central informatics activity, and the resulting cohorts support a wide variety of applications. However, it is time-intensive because of manual data review and limited automation. Since LLMs have demonstrated promising capabilities for text classification, comprehension, and generation, we posit that they will perform well at repetitive manual review tasks traditionally performed by human experts. To support next-generation computational phenotyping, we developed SHREC, a framework for integrating LLMs into end-to-end phenotyping pipelines. We applied and tested three lightweight LLMs (Gemma 2 27B, Mistral Small 24B, and Phi-4 14B) to classify concepts and phenotype patients using phenotypes for acute respiratory failure (ARF) respiratory support therapies. All models performed well on concept classification, with the best (Mistral) achieving an AUROC of 0.896. For phenotyping, models demonstrated near-perfect specificity for all phenotypes, with the top-performing model (Mistral) achieving an average AUROC of 0.853 for single-therapy phenotypes. In conclusion, lightweight LLMs can assist researchers with resource-intensive phenotyping tasks. Advantages of LLMs include their ability to adapt to new tasks through prompt engineering alone and their ability to incorporate raw EHR data. Future steps include determining optimal strategies for integrating biomedical data and understanding reasoning errors.

Author summary: In our research, we explored how large language models like ChatGPT could help make the process of identifying patient groups from electronic health records faster and less labor-intensive. Traditionally, defining these patient groups requires careful manual review of large amounts of clinical data, which can be time-consuming and costly.
We developed a framework called SHREC that integrates language models into these workflows, allowing the models to classify relevant clinical information and help create patient groups automatically. We tested several models on respiratory support therapies and found that even relatively small models were highly effective at accurately identifying concepts and patients. Our work shows that language models can complement human expertise, reducing the effort needed for routine tasks while still maintaining high accuracy. By demonstrating how these tools can fit into the larger research process, we hope to encourage further development of methods that make clinical data analysis faster, more efficient, and more accessible to researchers.
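The AUROC figures cited above (0.896 for concept classification, 0.853 for single-therapy phenotyping) measure the probability that a classifier scores a randomly chosen positive case higher than a randomly chosen negative one. As a minimal, stdlib-only sketch of how such a value is computed, the pairwise-comparison definition can be implemented directly; the labels and scores below are invented for illustration and are not data from the study:

```python
def auroc(labels, scores):
    """AUROC via the pairwise definition: the probability that a random
    positive example receives a higher score than a random negative one
    (ties count as half a win). labels are 0/1; scores are model outputs."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0       # positive correctly ranked above negative
            elif p == n:
                wins += 0.5       # tie contributes half credit
    return wins / (len(pos) * len(neg))

# Hypothetical scores: two positive and two negative cases.
labels = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.5, 0.2]
print(auroc(labels, scores))  # 0.75 (3 of 4 positive/negative pairs ranked correctly)
```

For large datasets this O(n²) loop is usually replaced by a rank-based computation (as in `sklearn.metrics.roc_auc_score`), but the pairwise form makes the metric's meaning explicit.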

Date: 2026

Downloads: (external link)
https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0001217 (text/html)
https://journals.plos.org/digitalhealth/article/fi ... 01217&type=printable (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:plo:pdig00:0001217

DOI: 10.1371/journal.pdig.0001217


More articles in PLOS Digital Health from Public Library of Science

Handle: RePEc:plo:pdig00:0001217