Implicit versus explicit Bayesian priors for epistemic uncertainty estimation in clinical decision support

Malte Blattmann, Adrian Lindenmeyer, Stefan Franke, Thomas Neumuth and Daniel Schneider

PLOS Digital Health, 2025, vol. 4, issue 7, 1-23

Abstract: Deep learning models offer transformative potential for personalized medicine by providing automated, data-driven support for complex clinical decision-making. However, their reliability degrades on out-of-distribution inputs, and traditional point-estimate predictors can give overconfident outputs even in regions where the model has little evidence. This shortcoming highlights the need for decision-support systems that quantify and communicate per-query epistemic (knowledge) uncertainty. Approximate Bayesian deep learning methods address this need by introducing principled uncertainty estimates over the model's function. In this work, we compare three such methods on the task of predicting prostate cancer–specific mortality for treatment planning, using data from the PLCO cancer screening trial. All approaches achieve strong discriminative performance (AUROC = 0.86) and produce well-calibrated probabilities in-distribution, yet they differ markedly in the fidelity of their epistemic uncertainty estimates. We show that implicit functional-prior methods, specifically neural network ensembles and factorized weight-prior variational Bayesian neural networks, exhibit reduced fidelity when approximating the posterior distribution and yield systematically biased estimates of epistemic uncertainty. By contrast, models employing explicitly defined, distance-aware priors, such as spectral-normalized neural Gaussian processes (SNGP), provide more accurate posterior approximations and more reliable uncertainty quantification. These properties make explicitly distance-aware architectures particularly promising for building trustworthy clinical decision-support tools.

Author summary: In this study, we address a critical challenge in applying AI to personalized medicine: models often make confident predictions even when faced with patient data unlike anything they have seen before. We evaluated three strategies for helping these models recognize and signal their own uncertainty, using real-world prostate cancer screening data. While all approaches performed well on familiar cases, they differed in how reliably they indicated doubt on unfamiliar patients. We discovered that methods explicitly designed to gauge how far a new patient's data lies from prior examples produced far more trustworthy uncertainty estimates than techniques relying on hidden assumptions. By clearly identifying when the model is unsure, these approaches can help clinicians avoid over-reliance on AI recommendations. Our findings suggest that uncertainty-aware models could serve as safer, more transparent partners in treatment planning. Ultimately, this work takes us a step closer to AI systems that not only predict health outcomes but also responsibly signal when they might be guessing, an essential feature for trustworthy clinical decision support.
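
As a purely illustrative aside (not taken from the article's code or the PLCO data), the Python sketch below shows the standard entropy-based decomposition by which per-query epistemic uncertainty is commonly read off an implicit-prior method such as a neural network ensemble: the disagreement term is the entropy of the averaged prediction minus the average entropy of the individual members. The function name ensemble_uncertainty and the toy probabilities are hypothetical.

# Minimal sketch, assuming a binary-outcome ensemble: per-query epistemic
# uncertainty as mutual information, i.e. H(mean prediction) minus the mean
# entropy of the individual ensemble members. Not the authors' implementation.
import numpy as np

def binary_entropy(p, eps=1e-12):
    """Entropy in nats of a Bernoulli(p) distribution, elementwise."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def ensemble_uncertainty(member_probs):
    """
    member_probs: array of shape (M, N) holding P(y=1 | x_n) from each of
    M ensemble members for N query patients.
    Returns per-query (total, aleatoric, epistemic) uncertainties.
    """
    member_probs = np.asarray(member_probs)
    mean_p = member_probs.mean(axis=0)                      # ensemble predictive mean
    total = binary_entropy(mean_p)                          # entropy of the mean prediction
    aleatoric = binary_entropy(member_probs).mean(axis=0)   # expected member entropy
    epistemic = total - aleatoric                           # disagreement (mutual information)
    return total, aleatoric, epistemic

if __name__ == "__main__":
    # Query 1 (first column): members agree, so epistemic uncertainty is near zero.
    # Query 2 (second column): members disagree, so epistemic uncertainty is high.
    probs = np.array([[0.90, 0.10],
                      [0.88, 0.95],
                      [0.92, 0.50]])
    total, aleatoric, epistemic = ensemble_uncertainty(probs)
    print("epistemic (nats):", np.round(epistemic, 3))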

Date: 2025

Downloads: (external link)
https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000801 (text/html)
https://journals.plos.org/digitalhealth/article/fi ... 00801&type=printable (application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:plo:pdig00:0000801

DOI: 10.1371/journal.pdig.0000801

More articles in PLOS Digital Health from Public Library of Science
Bibliographic data for this series is maintained by digitalhealth.

Handle: RePEc:plo:pdig00:0000801