Evaluating the predictive capacity of ChatGPT for academic peer review outcomes across multiple platforms
Mike Thelwall (University of Sheffield, Information School) and Abdallah Yaghi (University of Sheffield, Information School)
Scientometrics, 2025, vol. 130, issue 10, pp. 5285-5307
Abstract: Academic peer review is at the heart of scientific quality control, yet the process is slow and time-consuming. Technology that can predict peer review outcomes may help with this, for example by fast-tracking desk rejection decisions. While previous studies have demonstrated that Large Language Models (LLMs) can predict peer review outcomes to some extent, this paper introduces two new contexts and employs a more robust method: averaging multiple ChatGPT scores. Averaging 30 ChatGPT predictions, based on reviewer guidelines and using only the submitted titles and abstracts, failed to predict peer review outcomes for F1000Research (Spearman's rho = 0.00). However, it produced mostly weak positive correlations with the quality dimensions of SciPost Physics (rho = 0.25 for validity, rho = 0.25 for originality, rho = 0.20 for significance, and rho = 0.08 for clarity) and a moderate positive correlation for papers from the International Conference on Learning Representations (ICLR) (rho = 0.38). Including article full texts increased the correlation for ICLR (rho = 0.46) and slightly improved it for F1000Research (rho = 0.09), with variable effects on the four quality dimension correlations for SciPost Physics LaTeX files. The use of simple chain-of-thought system prompts slightly increased the correlation for F1000Research (rho = 0.10), marginally reduced it for ICLR (rho = 0.37), and further decreased it for SciPost Physics (rho = 0.16 for validity, rho = 0.18 for originality, rho = 0.18 for significance, and rho = 0.05 for clarity). Overall, the results suggest that in some contexts ChatGPT can produce weak pre-publication quality predictions. However, their effectiveness and the optimal strategies for employing them vary considerably between platforms, journals, and conferences. Finally, the most suitable inputs for ChatGPT appear to differ depending on the platform.
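The scoring pipeline the abstract summarises is straightforward to sketch. The snippet below is a minimal illustration, not the authors' code: it assumes the OpenAI Python SDK and SciPy, and the model name, prompt wording, 1-10 scale, and number-parsing step are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of the averaged-score approach described in the abstract.
# Assumptions (not from the paper): OpenAI Python SDK, gpt-4o-mini as a
# stand-in for ChatGPT, a 1-10 scale, and naive regex score parsing.
import re
from statistics import mean

from openai import OpenAI
from scipy.stats import spearmanr

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an academic reviewer. Following the venue's reviewer "
    "guidelines, rate the submission's quality on a 1-10 scale. "
    "Reply with the score only."
)

def score_once(title: str, abstract: str) -> float:
    """Request a single quality score for one submission."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Title: {title}\n\nAbstract: {abstract}"},
        ],
    )
    # Take the first number in the reply; real runs need stricter parsing.
    return float(re.search(r"\d+(?:\.\d+)?", resp.choices[0].message.content).group())

def averaged_score(title: str, abstract: str, n: int = 30) -> float:
    """Average n independent scores, mirroring the paper's 30-run averaging."""
    return mean(score_once(title, abstract) for _ in range(n))

# Correlate averaged predictions with actual peer review outcomes:
# predictions = [averaged_score(t, a) for t, a in submissions]
# rho, p_value = spearmanr(predictions, review_outcomes)
```

Averaging across repeated queries reduces the run-to-run noise in individual model scores, which is why the paper reports means of 30 predictions; the Spearman correlation then measures how well the averaged rankings track the actual review outcomes.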
Keywords: ChatGPT; Academic peer review; Journal review; Research evaluation
Date: 2025
Downloads: http://link.springer.com/10.1007/s11192-025-05287-1 (abstract, text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:scient:v:130:y:2025:i:10:d:10.1007_s11192-025-05287-1
Ordering information: http://www.springer.com/economics/journal/11192
DOI: 10.1007/s11192-025-05287-1
Scientometrics is currently edited by Wolfgang Glänzel