Postoperative Karnofsky performance status prediction in patients with IDH wild-type glioblastoma: A multimodal approach integrating clinical and deep imaging features
Tomoki Sasagasako,
Akihiko Ueda,
Yohei Mineharu,
Yusuke Mochizuki,
Souichiro Doi,
Silsu Park,
Yukinori Terada,
Noritaka Sano,
Masahiro Tanji,
Yoshiki Arakawa and
Yasushi Okuno
PLOS ONE, 2024, vol. 19, issue 11, 1-15
Abstract:
Background and purpose: Glioblastoma is a highly aggressive brain tumor with limited survival that poses challenges in predicting patient outcomes. The Karnofsky Performance Status (KPS) score is a valuable tool for assessing patient functionality and contributes to the stratification of patients with poor prognoses. This study aimed to develop a 6-month postoperative KPS prediction model by combining clinical data with deep learning-based image features from pre- and postoperative MRI scans, offering enhanced personalized care for glioblastoma patients.
Materials and methods: Using 1,476 MRI datasets from the Brain Tumor Segmentation Challenge 2020 public database, we pretrained two variational autoencoders (VAEs). Imaging features from the latent spaces of the VAEs were used for KPS prediction. Neural network-based KPS prediction models were developed to predict scores below 70 at 6 months postoperatively. In this retrospective single-center analysis, we incorporated clinical parameters and pre- and postoperative MRI scans from 150 patients with newly diagnosed IDH wild-type glioblastoma, divided into training (100 patients) and test (50 patients) sets. In the training set, the performance of these models was evaluated using the area under the curve (AUC), calculated through fivefold cross-validation repeated 10 times. The final evaluation of the developed models was performed in the test set.
Results: Among the 150 patients, 61 had 6-month postoperative KPS scores below 70 and 89 scored 70 or higher. We developed three models: a clinical-based model, an MRI-based model, and a multimodal model that incorporated both clinical parameters and MRI features. In the training set, the mean AUC was 0.785±0.051 for the multimodal model, which was significantly higher than the AUCs of the clinical-based model (0.716±0.059, P = 0.038) using only clinical parameters and the MRI-based model (0.651±0.028, P
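As a rough illustration of the evaluation scheme described in the abstract, the Python sketch below shows a multimodal classifier trained on concatenated clinical and imaging features and scored by AUC under fivefold cross-validation repeated 10 times. The feature dimensions, the placeholder data, and the scikit-learn MLP configuration are assumptions for illustration only; they do not reproduce the authors' implementation, which used latent features from VAEs pretrained on pre- and postoperative MRI.

# Minimal sketch (not the authors' code): multimodal KPS prediction evaluated
# with 5-fold cross-validation repeated 10 times and AUC, as in the abstract.
# Feature dimensions, data, and model settings are illustrative placeholders.
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder inputs standing in for the training set (100 patients):
# clinical parameters plus latent features from the pre-/postoperative VAEs.
n_patients = 100
clinical = rng.normal(size=(n_patients, 10))     # e.g., age, preoperative KPS, ...
latent_pre = rng.normal(size=(n_patients, 64))   # assumed VAE latent dimension
latent_post = rng.normal(size=(n_patients, 64))
y = rng.integers(0, 2, size=n_patients)          # 1 = 6-month postoperative KPS < 70

# Multimodal model: simple concatenation of clinical and imaging features.
X = np.concatenate([clinical, latent_pre, latent_post], axis=1)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
)

# Fivefold cross-validation repeated 10 times, scored by AUC.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
aucs = []
for train_idx, val_idx in cv.split(X, y):
    model.fit(X[train_idx], y[train_idx])
    prob = model.predict_proba(X[val_idx])[:, 1]
    aucs.append(roc_auc_score(y[val_idx], prob))

print(f"mean AUC = {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")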
Date: 2024
Downloads:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0303002 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 03002&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0303002
DOI: 10.1371/journal.pone.0303002