Assessing Rater Performance without a "Gold Standard" Using Consensus Theory
Susan C. Weller and
N. Clay Mann
Medical Decision Making, 1997, vol. 17, issue 1, 71-79
Abstract:
This study illustrates the use of consensus theory to assess the diagnostic performances of raters and to estimate case diagnoses in the absence of a criterion or "gold" standard. A description is provided of how consensus theory "pools" information provided by raters, estimating rater competencies and differentially weighting their responses. Although the model assumes that raters respond without bias (i.e., sensitivity = specificity), a Monte Carlo simulation with 1,200 data sets shows that model estimates appear to be robust even with bias. The model is illustrated on a set of elbow radiographs, and consensus-model estimates are compared with those obtained from follow-up data. Results indicate that with high rater competencies, the model retrieves accurate estimates of competency and case diagnoses even when raters' responses are biased. Key words: clinical competence; interobserver variation; diagnostic evaluation; models, mathematical; consensus theory. (Med Decis Making 1997;17:71-79)
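The pooling step described in the abstract can be sketched for dichotomous ratings: pairwise agreement among raters is used to estimate competencies, and each rater's responses are then weighted by competency to infer the most likely diagnosis for each case. The sketch below is a minimal illustration of this general consensus-analysis approach under the no-bias assumption, not the authors' exact estimation procedure; all function and variable names are hypothetical.

```python
import numpy as np

def consensus_estimates(X):
    """Minimal consensus-analysis sketch for dichotomous ratings.

    X: (n_raters, n_cases) array of 0/1 responses.
    Returns estimated competencies D and the inferred answer key.
    """
    n_raters, n_cases = X.shape

    # Proportion of matching answers for each pair of raters.
    M = np.array([[np.mean(X[i] == X[j]) for j in range(n_raters)]
                  for i in range(n_raters)])

    # Under the unbiased model, E[M_ij] = D_i*D_j + (1 - D_i*D_j)/2 for i != j,
    # so 2*M_ij - 1 estimates the product of competencies D_i*D_j.
    Mstar = 2.0 * M - 1.0

    # Recover competencies from the (approximately rank-one) product matrix
    # via its leading eigenvector, iterating the diagonal.
    D = np.full(n_raters, 0.5)
    for _ in range(50):
        np.fill_diagonal(Mstar, D ** 2)
        vals, vecs = np.linalg.eigh(Mstar)
        v = vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))
        D = np.clip(np.abs(v), 0.01, 0.99)

    # Weight responses by competency: a rater with competency D answers
    # correctly with probability (1 + D)/2, so each response contributes a
    # log-likelihood ratio of log[(1 + D)/(1 - D)] toward the answer it gave.
    llr = np.log((1.0 + D) / (1.0 - D))
    score = llr @ (2 * X - 1)          # > 0 favors key "1", < 0 favors key "0"
    key = (score > 0).astype(int)
    return D, key

# Example: 5 raters, 20 cases, simulated with known (hypothetical) competencies.
rng = np.random.default_rng(0)
true_key = rng.integers(0, 2, 20)
true_D = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
correct = rng.random((5, 20)) < (1 + true_D[:, None]) / 2
X = np.where(correct, true_key, 1 - true_key)
D_hat, key_hat = consensus_estimates(X)
print(np.round(D_hat, 2), (key_hat == true_key).mean())
```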
Date: 1997
Downloads: https://journals.sagepub.com/doi/10.1177/0272989X9701700108 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:sae:medema:v:17:y:1997:i:1:p:71-79
DOI: 10.1177/0272989X9701700108