EconPapers

Inter-rater reliability in labeling quality and pathological features of retinal OCT scans: A customized annotation software approach

Katherine Du, Stavan Shah, Sandeep Chandra Bollepalli, Mohammed Nasar Ibrahim, Adarsh Gadari, Shan Sutharahan, José-Alain Sahel, Jay Chhablani and Kiran Kumar Vupparaboina

PLOS ONE, 2024, vol. 19, issue 12, 1-15

Abstract:

Objectives: Various imaging features on optical coherence tomography (OCT) are crucial for identifying and defining disease progression. Establishing a consensus on these imaging features is essential, particularly for training deep learning models for disease classification. This study aims to analyze the inter-rater reliability in labeling the quality and common imaging signatures of retinal OCT scans.

Methods: 500 OCT scans obtained from CIRRUS HD-OCT 5000 devices were displayed at 512×1024×128 resolution on customizable, in-house annotation software. Each patient's eye was represented by 16 random scans. Two masked reviewers independently labeled the quality and specific pathological features of each scan. Evaluated features included overall image quality, presence of the fovea, and disease signatures including subretinal fluid (SRF), intraretinal fluid (IRF), drusen, pigment epithelial detachment (PED), and hyperreflective material. Raw percentage agreement and Cohen's kappa (κ) coefficient were used to evaluate concurrence between the two sets of labels.

Results: Inter-rater reliability for overall scan quality was κ = 0.60, indicating substantial agreement. In contrast, there was only slight agreement in determining the cause of poor image quality (κ = 0.18). The binary determination of the presence or absence of retinal disease signatures showed almost perfect agreement between reviewers (κ = 0.85). Specific retinal pathologies, such as the foveal location of the scan (κ = 0.78), IRF (κ = 0.63), drusen (κ = 0.73), and PED (κ = 0.87), exhibited substantial concordance. However, agreement was lower for SRF (κ = 0.52), hyperreflective dots (κ = 0.41), and hyperreflective foci (κ = 0.33).

Conclusions: Our study demonstrates substantial inter-rater reliability in labeling the quality and retinal pathologies of OCT scans. While some features show stronger agreement than others, these standardized labels can be used to train automated machine learning tools for diagnosing retinal diseases and capturing valuable pathological features in each scan. This standardization will aid the consistency of medical diagnoses and enhance the accessibility of OCT diagnostic tools.
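The Cohen's kappa statistic reported throughout the abstract corrects the raw percentage agreement between two raters for the agreement expected by chance. A minimal pure-Python sketch of the standard two-rater formula is below; the function name and example labels are illustrative, not taken from the study, and in practice a library routine such as scikit-learn's `cohen_kappa_score` would typically be used.

```python
from collections import Counter


def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from each
    rater's marginal label frequencies.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of the raters' marginal frequencies,
    # summed over all labels.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)


# Hypothetical binary presence/absence labels from two reviewers:
rater_1 = [1, 1, 0, 0]
rater_2 = [1, 0, 0, 0]
print(cohens_kappa(rater_1, rater_2))  # → 0.5 (p_o = 0.75, p_e = 0.5)
```

Note that kappa can be well below the raw agreement (here 75% agreement yields κ = 0.5), which is why the study reports κ rather than percentage agreement alone.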

Date: 2024
Downloads: (external link)
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0314707 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 14707&type=printable (application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0314707

DOI: 10.1371/journal.pone.0314707

More articles in PLOS ONE from Public Library of Science
Bibliographic data for series maintained by plosone.

 
Page updated 2025-05-10
Handle: RePEc:plo:pone00:0314707