EconPapers    
Benchmarking large language models for biomedical natural language processing applications and recommendations

Qingyu Chen, Yan Hu, Xueqing Peng, Qianqian Xie, Qiao Jin, Aidan Gilson, Maxwell B. Singer, Xuguang Ai, Po-Ting Lai, Zhizheng Wang, Vipina K. Keloth, Kalpana Raja, Jimin Huang, Huan He, Fongci Lin, Jingcheng Du, Rui Zhang, W. Jim Zheng, Ron A. Adelman, Zhiyong Lu and Hua Xu
Additional contact information
Qingyu Chen: Yale University
Yan Hu: University of Texas Health Science Center at Houston
Xueqing Peng: Yale University
Qianqian Xie: Yale University
Qiao Jin: National Institutes of Health
Aidan Gilson: Yale University
Maxwell B. Singer: Yale University
Xuguang Ai: Yale University
Po-Ting Lai: National Institutes of Health
Zhizheng Wang: National Institutes of Health
Vipina K. Keloth: Yale University
Kalpana Raja: Yale University
Jimin Huang: Yale University
Huan He: Yale University
Fongci Lin: Yale University
Jingcheng Du: University of Texas Health Science Center at Houston
Rui Zhang: University of Minnesota
W. Jim Zheng: University of Texas Health Science Center at Houston
Ron A. Adelman: Yale University
Zhiyong Lu: National Institutes of Health
Hua Xu: Yale University

Nature Communications, 2025, vol. 16, issue 1, 1-16

Abstract: The rapid growth of biomedical literature poses challenges for manual knowledge curation and synthesis. Biomedical Natural Language Processing (BioNLP) automates this process. While Large Language Models (LLMs) have shown promise in general domains, their effectiveness on BioNLP tasks remains unclear due to limited benchmarks and practical guidelines. We perform a systematic evaluation of four LLMs—GPT and LLaMA representatives—on 12 BioNLP benchmarks across six applications. We compare their zero-shot, few-shot, and fine-tuning performance with the traditional fine-tuning of BERT or BART models, examine inconsistencies, missing information, and hallucinations, and perform a cost analysis. Here, we show that traditional fine-tuning outperforms zero- or few-shot LLMs on most tasks. However, closed-source LLMs like GPT-4 excel in reasoning-related tasks such as medical question answering, while open-source LLMs still require fine-tuning to close performance gaps. We also find issues such as missing information and hallucinations in LLM outputs. These results offer practical insights for applying LLMs in BioNLP.
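The zero-shot versus few-shot comparison described in the abstract can be illustrated with a small sketch. This is not code from the paper: the prompt wording, the example disease-extraction task, and the helper names (`few_shot_prompt`, `entity_f1`) are invented here for illustration. The sketch shows how a zero-shot prompt differs from one with labeled demonstrations, and how extracted entities are typically scored with entity-level exact-match F1.

```python
# Hypothetical sketch: zero-shot vs. few-shot prompt construction for a
# biomedical NER task, plus an entity-level exact-match F1 score.
# All task wording and examples are illustrative assumptions.

ZERO_SHOT_TEMPLATE = (
    "Extract all disease mentions from the sentence below.\n"
    "Sentence: {sentence}\n"
    "Diseases:"
)

def few_shot_prompt(sentence: str, demos: list[tuple[str, list[str]]]) -> str:
    """Prepend labeled demonstrations (sentence, gold entities) before the query."""
    parts = []
    for demo_sentence, demo_entities in demos:
        parts.append(
            f"Sentence: {demo_sentence}\nDiseases: {', '.join(demo_entities)}"
        )
    # The query sentence goes last, leaving "Diseases:" open for the model.
    parts.append(f"Sentence: {sentence}\nDiseases:")
    return "Extract all disease mentions from each sentence.\n\n" + "\n\n".join(parts)

def entity_f1(predicted: list[str], gold: list[str]) -> float:
    """Case-insensitive exact-match entity-level F1."""
    pred = {p.lower() for p in predicted}
    ref = {g.lower() for g in gold}
    if not pred or not ref:
        return 0.0
    tp = len(pred & ref)
    precision = tp / len(pred)
    recall = tp / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Under this kind of setup, the abstract's finding amounts to: prompts like the above, with zero or a handful of demonstrations, typically underperform a BERT/BART model fine-tuned on the full training set, except on reasoning-heavy tasks such as medical question answering.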

Date: 2025

Downloads: (external link)
https://www.nature.com/articles/s41467-025-56989-2 Abstract (text/html)


Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-56989-2

Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/

DOI: 10.1038/s41467-025-56989-2

Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-05-10
Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-56989-2