EconPapers    
Model-Based ROC Curve: Examining the Effect of Case Mix and Model Calibration on the ROC Plot

Mohsen Sadatsafavi, Paramita Saha-Chaudhuri and John Petkau
Additional contact information
Mohsen Sadatsafavi: Faculty of Pharmaceutical Sciences, The University of British Columbia, Vancouver, BC, Canada
Paramita Saha-Chaudhuri: Department of Mathematics and Statistics, University of Vermont, Burlington, VT, USA
John Petkau: Department of Statistics, The University of British Columbia, Vancouver, BC, Canada

Medical Decision Making, 2022, vol. 42, issue 4, 487-499

Abstract: Background: The performance of risk prediction models is often characterized in terms of discrimination and calibration. The receiver-operating characteristic (ROC) curve is widely used for evaluating model discrimination. However, when comparing ROC curves across different samples, the effect of case mix makes the interpretation of discrepancies difficult. Further, compared with model discrimination, evaluating model calibration has not received the same level of attention. Current methods for examining model calibration require specification of smoothing or grouping factors.
Methods: We introduce the "model-based" ROC curve (mROC) to assess model calibration and the effect of case mix during external validation. The mROC curve is the ROC curve that should be observed if the prediction model is calibrated in the external population. We show that calibration-in-the-large and the equivalence of the mROC and ROC curves are together sufficient conditions for the model to be calibrated. Based on this, we propose a novel statistical test for calibration that, unlike current methods, does not require any subjective specification of smoothing or grouping factors.
Results: Through a stylized example, we demonstrate how mROC separates the effects of case mix and model miscalibration when externally validating a risk prediction model. We present the results of simulation studies that confirm the properties of the new calibration test. A case study on predicting the risk of acute exacerbations of chronic obstructive pulmonary disease puts the developments in a practical context. R code for the implementation of this method is provided.
Conclusion: mROC can easily be constructed and used to interpret the effect of case mix and calibration on the ROC plot. Given the popularity of ROC curves among applied investigators, this framework can further promote assessment of model calibration.
Highlights:
- Compared with examining model discrimination, examining model calibration has not received the same level of attention among investigators who develop or examine risk prediction models.
- This article introduces the model-based ROC (mROC) curve as the basis for graphical and statistical examination of model calibration on the ROC plot.
- This article introduces a formal statistical test based on mROC for examining model calibration that does not require arbitrary smoothing or grouping factors.
- Investigators who develop or validate risk prediction models can now also use the popular ROC plot for examining model calibration, a critical but often neglected component of predictive analytics.
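The abstract defines the mROC curve as the ROC curve that should be observed if the model's predicted risks are calibrated in the validation population. A minimal sketch of how such a curve can be computed is given below; this is an illustration under assumptions, not the authors' published R code. The weighting construction (each subject contributes its predicted risk p as "case" weight and 1 - p as "control" weight), the function names, and the simulated data are all illustrative.

```python
import numpy as np

def mroc_curve(p):
    """Model-based ROC: the ROC expected if the predicted risks p are calibrated.
    Each subject contributes weight p as a 'case' and weight 1 - p as a 'control',
    with the predicted risk itself acting as the ranking marker."""
    order = np.argsort(-p)               # rank subjects by predicted risk, highest first
    p_sorted = p[order]
    tpr = np.cumsum(p_sorted) / np.sum(p_sorted)          # expected sensitivity
    fpr = np.cumsum(1 - p_sorted) / np.sum(1 - p_sorted)  # expected 1 - specificity
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def empirical_roc_curve(y, p):
    """Ordinary empirical ROC from observed binary outcomes y and predictions p."""
    order = np.argsort(-p)
    y_sorted = y[order]
    tpr = np.cumsum(y_sorted) / np.sum(y_sorted)
    fpr = np.cumsum(1 - y_sorted) / np.sum(1 - y_sorted)
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def auc(fpr, tpr):
    """Area under a curve given as (fpr, tpr) points, by the trapezoid rule."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

# Simulated external-validation sample in which the model is, by construction,
# perfectly calibrated: outcomes are drawn with probability equal to the prediction.
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.6, size=5000)
y = (rng.uniform(size=p.size) < p).astype(float)

fpr_m, tpr_m = mroc_curve(p)
fpr_e, tpr_e = empirical_roc_curve(y, p)
auc_m, auc_e = auc(fpr_m, tpr_m), auc(fpr_e, tpr_e)
print(auc_m, auc_e)
```

When the model is calibrated, the mROC and the empirical ROC nearly coincide, which is the graphical check the abstract describes; under miscalibration the two curves separate even though the empirical ROC alone might still look acceptable.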

Keywords: clinical prediction models; model calibration; model validation; receiver-operating characteristic (search for similar items in EconPapers)
Date: 2022

Downloads: (external link)
https://journals.sagepub.com/doi/10.1177/0272989X211050909 (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:sae:medema:v:42:y:2022:i:4:p:487-499

DOI: 10.1177/0272989X211050909


More articles in Medical Decision Making
Bibliographic data for series maintained by SAGE Publications.

 
Page updated 2025-03-19
Handle: RePEc:sae:medema:v:42:y:2022:i:4:p:487-499