EconPapers    

An Explainable Artificial Intelligence Approach Using Graph Learning to Predict Intensive Care Unit Length of Stay

Tianjian Guo, Indranil R. Bardhan, Ying Ding and Shichang Zhang
Additional contact information
Tianjian Guo: McCombs School of Business, The University of Texas at Austin, Austin, Texas 78705
Indranil R. Bardhan: McCombs School of Business, The University of Texas at Austin, Austin, Texas 78705
Ying Ding: School of Information, The University of Texas at Austin, Austin, Texas 78701
Shichang Zhang: Harvard Business School, Harvard University, Boston, Massachusetts 02163

Information Systems Research, 2025, vol. 36, issue 3, 1478-1501

Abstract: Intensive care units (ICUs) are critical for treating severe health conditions but represent significant hospital expenditures. Accurate prediction of ICU length of stay (LoS) can enhance hospital resource management, reduce readmissions, and improve patient care. In recent years, widespread adoption of electronic health records and advancements in artificial intelligence (AI) have facilitated accurate predictions of ICU LoS. However, there is a notable gap in the literature on explainable artificial intelligence (XAI) methods that identify interactions between model input features to predict patient health outcomes. This gap is especially noteworthy as the medical literature suggests that complex interactions between clinical features are likely to significantly impact patient health outcomes. We propose a novel graph learning-based approach that offers state-of-the-art predictive performance and greater interpretability for ICU LoS prediction. Specifically, our graph-based XAI model can generate interaction-based explanations supported by evidence-based medicine, which provide rich patient-level insights compared with existing XAI methods. We test the statistical significance of our XAI approach using a distance-based separation index and utilize perturbation analyses to examine the sensitivity of our model explanations to changes in input features. Finally, we validate the explanations of our graph learning model using the conceptual evaluation property (Co-12) framework and a small-scale user study of ICU clinicians. Our approach offers interpretable predictions of ICU LoS grounded in design science research, which can facilitate greater integration of AI-enabled decision support systems in clinical workflows, thereby enabling clinicians to derive more value from them.
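The abstract mentions perturbation analyses that test how sensitive model explanations are to small changes in input features. The following is a minimal, hedged sketch of that general idea, not the authors' implementation: it uses a toy linear model with gradient-times-input attributions (both assumptions introduced here for illustration) and measures the average shift in attributions under small random input perturbations.

```python
import numpy as np

# Hedged sketch of a generic explanation-sensitivity (perturbation) analysis.
# The model, attribution method, and parameters are illustrative assumptions,
# not those of the paper.

rng = np.random.default_rng(0)
w = rng.normal(size=5)  # weights of a toy linear "model"


def explain(x):
    # Gradient * input attribution; for a linear model the gradient is w.
    return w * x


def explanation_sensitivity(x, eps=0.01, n_trials=100):
    """Average L2 change in attributions under small input perturbations."""
    base = explain(x)
    shifts = [
        np.linalg.norm(explain(x + eps * rng.normal(size=x.shape)) - base)
        for _ in range(n_trials)
    ]
    return float(np.mean(shifts))


x = rng.normal(size=5)
sens = explanation_sensitivity(x)
```

A stable explanation method should yield a sensitivity score that shrinks proportionally with the perturbation scale `eps`; comparing scores across methods or patients is one way to quantify robustness, in the spirit of the analysis described above.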

Keywords: length of stay; intensive care unit; explainable AI; deep learning; graph learning; prediction; perturbation analysis; user study; machine learning (search for similar items in EconPapers)
Date: 2025

Downloads: (external link)
http://dx.doi.org/10.1287/isre.2023.0029 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:orisre:v:36:y:2025:i:3:p:1478-1501


More articles in Information Systems Research from INFORMS.
Bibliographic data for series maintained by Chris Asher.

 
Page updated 2025-10-06
Handle: RePEc:inm:orisre:v:36:y:2025:i:3:p:1478-1501