Large language models in peer review: challenges and opportunities

Zhuanlan Sun
Additional contact information
Zhuanlan Sun: Nanjing University of Posts and Telecommunications, High-Quality Development Evaluation Institute

Scientometrics, 2025, vol. 130, issue 10, No 10, 5503-5546

Abstract: The increasing volume of academic publications has placed considerable strain on traditional peer review systems, leading to delays, inconsistencies, and systemic inequities. The emergence and rapid development of large language models (LLMs) provide opportunities to address these challenges. In this review, we evaluate the potential roles, benefits, and limitations of LLMs in peer review processes. First, we synthesize from the literature the applications of LLMs to various peer review tasks: serving as checklist assistants and aiding reviewer selection during the pre-peer-review stage, and generating automated feedback that highlights key evaluation aspects, providing human-like recommendations and scoring, detecting biases, and functioning as LLM agents during the peer review stage itself. Several approaches, including prompt engineering strategies, model evaluation protocols, and integrated architectures for editorial systems, have been proposed for implementing LLMs in scholarly peer review. Next, we discuss the challenges and limitations of LLM applications, including inadequate validation of scientific content, limited domain-specific knowledge, difficulties in data analysis and result interpretation, and ethical concerns. Finally, we outline future research directions, such as behavioral evaluation of LLMs, enhancement of their reasoning capabilities, development of benchmark datasets and prompts, fostering of effective LLM–human collaboration, and exploration of multiagent LLM systems to promote reliable and trustworthy deployment. We conclude that while research on applying LLMs to peer review tasks continues to advance, LLMs are currently more effective as supportive tools that aid human evaluators than as replacements for them. Existing limitations and ethical considerations highlight the need for a deeper evaluation of the long-term impact of integrating LLMs into peer review workflows.
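
As an illustration of the prompt engineering strategies the abstract mentions, the Python sketch below composes one plausible structured-review prompt. It is not taken from the article; the rubric criteria, recommendation scale, and confidence range are assumptions chosen for this example, and the assembled prompt could be sent to any chat-completion API.

    # Illustrative sketch (not the article's method): one way to engineer a
    # structured prompt for an LLM acting as a reviewer's assistant.
    # The rubric, recommendation scale, and confidence range are assumptions.

    RUBRIC = ["novelty", "methodology", "clarity", "limitations"]

    def build_review_prompt(title: str, abstract: str) -> str:
        """Compose a prompt asking an LLM for rubric-based review feedback."""
        criteria = "\n".join(
            f"- {c}: one short, evidence-based paragraph" for c in RUBRIC
        )
        return (
            "You are assisting a human peer reviewer; do not act as the final judge.\n"
            "Assess the manuscript below against each criterion:\n"
            f"{criteria}\n"
            "Finish with a recommendation (accept / minor revision / "
            "major revision / reject) and a confidence score from 1 to 5.\n\n"
            f"Title: {title}\n"
            f"Abstract: {abstract}\n"
        )

    if __name__ == "__main__":
        # The prompt is only printed here, so the example runs without
        # a network call or API key.
        print(build_review_prompt(
            "Large language models in peer review: challenges and opportunities",
            "The increasing volume of academic publications has placed "
            "considerable strain on traditional peer review systems...",
        ))

Making the rubric explicit in the prompt mirrors the supportive roles the review describes, such as the checklist assistant, where the model structures feedback for a human evaluator rather than replacing one.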

Keywords: Large language models; Peer review; Academic publishing
Date: 2025

Downloads: (external link)
http://link.springer.com/10.1007/s11192-025-05440-w Abstract (text/html)
Access to the full text of the articles in this series is restricted.

Persistent link: https://EconPapers.repec.org/RePEc:spr:scient:v:130:y:2025:i:10:d:10.1007_s11192-025-05440-w

Ordering information: This journal article can be ordered from
http://www.springer.com/economics/journal/11192

DOI: 10.1007/s11192-025-05440-w

Scientometrics is currently edited by Wolfgang Glänzel

More articles in Scientometrics from Springer, Akadémiai Kiadó
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

Handle: RePEc:spr:scient:v:130:y:2025:i:10:d:10.1007_s11192-025-05440-w