Estimating the Irreducible Uncertainty in Visual Diagnosis: Statistical Modeling of Skill Using Response Models
Martin V. Pusic, Amy Rapkiewicz, Tenko Raykov and Jonathan Melamed
Additional contact information
Martin V. Pusic: Department of Pediatrics and Emergency Medicine, Harvard Medical School, Boston, MA, USA
Amy Rapkiewicz: Department of Pathology, NYU Long Island School of Medicine, New York, NY, USA
Tenko Raykov: College of Education, Michigan State University, East Lansing, MI, USA
Jonathan Melamed: Department of Pathology, NYU Long Island School of Medicine, New York, NY, USA
Medical Decision Making, 2023, vol. 43, issue 6, 680-691
Abstract:
Background: For the representative problem of prostate cancer grading, we sought to simultaneously model both the continuous nature of the case spectrum and the decision thresholds of individual pathologists, allowing quantitative comparison of how they handle cases at the borderline between diagnostic categories.
Methods: Experts and pathology residents each rated a standardized set of prostate cancer histopathological images on the International Society of Urological Pathologists (ISUP) scale used in clinical practice. They diagnosed 50 histologic cases spanning a range of malignancy, including intermediate cases in which a clear distinction was difficult. We report a statistical model showing the degree to which each individual participant can separate the cases along the latent decision spectrum.
Results: The slides were rated by 36 physicians in total: 23 ISUP pathologists and 13 residents. As anticipated, the cases showed a full continuous range of diagnostic severity. Cases ranged along a logit scale consistent with the consensus rating (consensus ISUP 1: mean −0.93 logits [95% confidence interval (CI) −1.10 to −0.78]; ISUP 2: −0.19 logits [−0.27 to −0.12]; ISUP 3: 0.56 logits [0.06 to 1.06]; ISUP 4: 1.24 logits [1.10 to 1.38]; ISUP 5: 1.92 logits [1.80 to 2.04]). The best raters were able to discriminate meaningfully between all 5 ISUP categories, with intercategory thresholds that were quantifiably precise.
Conclusions: We present a method that allows simultaneous quantification of both the confusability of a particular case and the skill with which raters can distinguish the cases.
Implications: The technique generalizes beyond the current example to other clinical situations in which a diagnostician must impose an ordinal rating on a biological spectrum.
Highlights:
Question: How can we quantify skill in visual diagnosis for cases that sit at the border between 2 ordinal categories, cases that are inherently difficult to diagnose?
Findings: In this analysis of pathologists and residents rating prostate biopsy specimens, decision-aligned response models are calculated that show how pathologists would be likely to classify any given case on the diagnostic spectrum. Decision thresholds are shown to vary in their location and precision.
Significance: Improving on traditional measures such as kappa and receiver-operating characteristic curves, this specialization of item response models allows better individual feedback to both trainees and pathologists, including better quantification of acceptable decision variation.
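As a rough illustration of the kind of response model the abstract describes, the Python sketch below computes the probability that a single rater assigns each of the 5 ISUP categories to a case of a given latent severity, using a cumulative-logit (graded response) formulation. This is a minimal sketch, not the authors' code: the threshold values and discrimination parameter are illustrative assumptions, and only the example case severities are taken from the consensus means reported above.

# Minimal sketch (not the authors' model code): category probabilities for one
# rater under a cumulative-logit (graded response) formulation. The thresholds
# and discrimination below are assumed for illustration, not study estimates.
import numpy as np

def category_probs(theta, discrimination, thresholds):
    """P(rating = k) for ordinal categories 1..K.

    theta          : latent case severity (logits)
    discrimination : how sharply the rater separates adjacent categories
    thresholds     : K-1 increasing cut points on the logit scale
    """
    # Cumulative probabilities P(rating >= k) for k = 2..K
    cum = 1.0 / (1.0 + np.exp(-discrimination * (theta - np.asarray(thresholds))))
    upper = np.concatenate(([1.0], cum))   # P(rating >= k), k = 1..K
    lower = np.concatenate((cum, [0.0]))   # P(rating >= k+1)
    return upper - lower                   # P(rating = k)

# Illustrative rater: 4 thresholds separating the 5 ISUP categories
thresholds = [-0.6, 0.2, 0.9, 1.6]   # assumed, roughly spanning the reported case range
discrimination = 2.5                 # higher = sharper, more precise thresholds

for severity in (-0.93, 0.56, 1.92):  # consensus means reported for ISUP 1, 3, 5
    probs = category_probs(severity, discrimination, thresholds)
    print(f"case at {severity:+.2f} logits ->",
          " ".join(f"ISUP{k}: {p:.2f}" for k, p in enumerate(probs, start=1)))

In this formulation, a skilled rater shows up as one with well-ordered, high-discrimination thresholds, while a confusable case is one whose severity falls close to a threshold, spreading probability across adjacent categories.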
Keywords: medical education; uncertainty; benchmarking; diagnostic errors; visual diagnosis; prostatic neoplasms; assessment; item response theory; pathology; psychometrics
Date: 2023
Downloads: https://journals.sagepub.com/doi/10.1177/0272989X231162095 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:sae:medema:v:43:y:2023:i:6:p:680-691
DOI: 10.1177/0272989X231162095