A systematic review of machine learning-based prognostic models for acute pancreatitis: Towards improving methods and reporting quality
Brian Critelli,
Amier Hassan,
Ila Lahooti,
Lydia Noh,
Jun Sung Park,
Kathleen Tong,
Ali Lahooti,
Nathan Matzko,
Jan Niklas Adams,
Lukas Liss,
Justin Quion,
David Restrepo,
Melica Nikahd,
Stacey Culp,
Adam Lacy-Hulbert,
Cate Speake,
James Buxbaum,
Jason Bischof,
Cemal Yazici,
Anna Evans-Phillips,
Sophie Terp,
Alexandra Weissman,
Darwin Conwell,
Philip Hart,
Mitchell Ramsey,
Somashekar Krishna,
Samuel Han,
Erica Park,
Raj Shah,
Venkata Akshintala,
John A Windsor,
Nikhil K Mull,
Georgios Papachristou,
Leo Anthony Celi and
Peter Lee
PLOS Medicine, 2025, vol. 22, issue 2, 1-19
Abstract:
Background: An accurate prognostic tool is essential to aid clinical decision-making (e.g., patient triage) and to advance personalized medicine. However, such a prognostic tool is lacking for acute pancreatitis (AP). Machine learning (ML) techniques are increasingly being used to develop high-performing prognostic models in AP, yet their methodological and reporting quality has received little attention. High-quality reporting and study methodology are critical for model validity, reproducibility, and clinical implementation. In collaboration with content experts in ML methodology, we performed a systematic review critically appraising the quality of methodology and reporting of recently published ML prognostic models in AP.
Methods/findings: Using a validated search strategy, we identified ML AP studies published between January 2021 and December 2023 in the MEDLINE and EMBASE databases. We also searched the pre-print servers medRxiv, bioRxiv, and arXiv for pre-prints registered between January 2021 and December 2023. Eligibility criteria included all retrospective or prospective studies that developed or validated new or existing ML models in patients with AP to predict an outcome following an episode of AP. Meta-analysis was considered if there was homogeneity in study design and in the type of outcome predicted. Risk of bias (ROB) was assessed using the Prediction Model Risk of Bias Assessment Tool (PROBAST). Quality of reporting was assessed using the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis plus Artificial Intelligence (TRIPOD+AI) statement, which defines standards for 27 items that should be reported in publications using ML prognostic models. The search strategy identified 6,480 publications, of which 30 met the eligibility criteria. Studies originated from China (22), the United States (4), and other countries (4). All 30 studies developed a new ML model and none sought to validate an existing ML model, producing a total of 39 new ML models. AP severity (23/39) and mortality (6/39) were the most commonly predicted outcomes. The mean area under the curve across all models and endpoints was 0.91 (SD 0.08). ROB was high for at least one domain in all 39 models, particularly the analysis domain (37/39 models). Steps were not taken to minimize over-optimistic model performance in 27/39 models. Because of heterogeneity in study design and in how outcomes were defined and determined, meta-analysis was not performed. Studies reported on only 15/27 TRIPOD+AI items, with only 7/30 justifying sample size and 13/30 assessing data quality. Other reporting deficiencies included omissions regarding human-AI interaction (28/30), handling of low-quality or incomplete data in practice (27/30), sharing of analytical code (25/30) and study protocols (25/30), and reporting of source data (19/30).
Conclusions: There are significant deficiencies in the methodology and reporting of recently published ML-based prognostic models in patients with AP. These deficiencies undermine the validity, reproducibility, and implementation of these prognostic models despite their promise of superior predictive accuracy.
Registration: Research Registry (reviewregistry1727)
Author summary: Brian Critelli, Amier Hassan, and colleagues systematically review studies published in 2021-2023 that use machine learning to develop prognostic models in acute pancreatitis and critically appraise the quality of methodology and reporting therein.
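To make concrete the over-optimism issue highlighted above, the following is a minimal illustrative sketch, not taken from the reviewed studies: Python with scikit-learn on synthetic data, where the classifier, cohort size, and predictors are all hypothetical. It contrasts the apparent AUC obtained by scoring a model on its own training data with the AUC estimated by repeated stratified cross-validation, one of the internal-validation steps the review reports as frequently omitted.

# Illustrative sketch only: internal validation to curb over-optimistic AUC.
# Assumes scikit-learn; the data and model below are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for a tabular AP cohort (500 patients, 20 candidate predictors).
X, y = make_classification(n_samples=500, n_features=20, weights=[0.8, 0.2],
                           random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)

# Apparent performance: fit and score on the same data (over-optimistic).
model.fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Internally validated performance: repeated stratified k-fold cross-validation.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
cv_auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")

print(f"Apparent AUC (same data):         {apparent_auc:.3f}")
print(f"Cross-validated AUC (mean +/- SD): {cv_auc.mean():.3f} +/- {cv_auc.std():.3f}")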
Date: 2025
Downloads:
https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1004432 (text/html)
https://journals.plos.org/plosmedicine/article/fil ... 04432&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pmed00:1004432
DOI: 10.1371/journal.pmed.1004432