Leveraging interpretable machine learning in intensive care

Lasse Bohlen, Julian Rosenberger, Patrick Zschech and Mathias Kraus
Additional contact information
Lasse Bohlen: Friedrich-Alexander-Universität Erlangen-Nürnberg
Julian Rosenberger: Universität Regensburg
Patrick Zschech: Universität Leipzig
Mathias Kraus: Universität Regensburg

Annals of Operations Research, 2025, vol. 347, issue 2, No 12, 1093-1132

Abstract: In healthcare, especially within intensive care units (ICU), informed decision-making by medical professionals is crucial due to the complexity of medical data. Healthcare analytics seeks to support these decisions by generating accurate predictions through advanced machine learning (ML) models, such as boosted decision trees and random forests. While these models frequently produce accurate predictions across various medical tasks, they often lack interpretability. To address this challenge, researchers have developed interpretable ML models that balance accuracy and interpretability. In this study, we evaluate the performance gap between interpretable and black-box models in two healthcare prediction tasks: mortality and length-of-stay prediction in ICU settings. We focus specifically on the family of generalized additive models (GAMs) as powerful interpretable ML models. Our assessment uses the publicly available Medical Information Mart for Intensive Care dataset, and we analyze the models based on (i) predictive performance, (ii) the influence of compact feature sets (i.e., only a few features) on predictive performance, and (iii) interpretability and consistency with medical knowledge. Our results show that interpretable models achieve competitive performance, with a minor decrease of 0.2–0.9 percentage points in area under the receiver operating characteristic curve (AUROC) relative to state-of-the-art black-box models, while preserving complete interpretability. This remains true even for parsimonious models that use only 2.2% of patient features. Our study highlights the potential of interpretable models to improve decision-making in ICUs by providing medical professionals with easily understandable and verifiable predictions.
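
To make the GAM idea concrete: a GAM predicts through g(E[y]) = b0 + f1(x1) + ... + fp(xp), so each feature contributes an individually inspectable shape function fj. The sketch below is a minimal, hypothetical illustration of this model family, using the Explainable Boosting Machine from the open-source interpret library as one representative GAM implementation and synthetic stand-in data; the abstract does not state which GAM variants or features the authors actually used.

import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for ICU features (e.g., age, heart rate, lactate);
# the real study uses the MIMIC dataset, which is not bundled here.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))

# Binary mortality-style label with an additive, nonlinear signal so that
# the learned per-feature shape functions are meaningful.
logits = 0.8 * X[:, 0] + np.sin(2 * X[:, 1]) - 0.5 * X[:, 2] ** 2
y = (rng.random(2000) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# An EBM is a tree-boosted GAM: it learns one shape function per feature.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_tr, y_tr)

# AUROC, the metric the abstract uses to compare model families.
print(f"AUROC: {roc_auc_score(y_te, ebm.predict_proba(X_te)[:, 1]):.3f}")

# Per-feature shape functions are directly inspectable, which is the
# interpretability property the study highlights.
global_explanation = ebm.explain_global()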

Keywords: Healthcare analytics; Interpretable machine learning; Generalized additive models; Length-of-stay prediction; Mortality prediction
Date: 2025

Downloads:
http://link.springer.com/10.1007/s10479-024-06226-8 Abstract (text/html)
Access to the full text of the articles in this series is restricted.

Persistent link: https://EconPapers.repec.org/RePEc:spr:annopr:v:347:y:2025:i:2:d:10.1007_s10479-024-06226-8

Ordering information: This journal article can be ordered from
http://www.springer.com/journal/10479

DOI: 10.1007/s10479-024-06226-8

Annals of Operations Research is currently edited by Endre Boros

More articles in Annals of Operations Research from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

Handle: RePEc:spr:annopr:v:347:y:2025:i:2:d:10.1007_s10479-024-06226-8