Assessing inter-rater agreement in Stata
Daniel Klein (University of Kassel)
German Stata Users' Group Meetings 2017 (Stata Users Group)
Abstract:
Despite its well-known weaknesses and the alternatives proposed in the literature, the Kappa coefficient (Cohen 1960; Fleiss 1971) remains the most frequently applied statistic for quantifying agreement among raters. It is also the only measure in official Stata explicitly dedicated to assessing inter-rater agreement for categorical data. In this presentation, I briefly review Cohen's Kappa and five related statistics within the general framework of chance-corrected agreement coefficients discussed in Gwet (2014). The presentation covers the generalization of all measures to multiple raters, weights for partial disagreement suitable for any level of measurement, the treatment of missing ratings, and a new probabilistic method for benchmarking the estimated coefficients. I introduce the kappaetc command, which implements these concepts; a usage sketch follows below.
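All of the statistics reviewed share the chance-corrected form

    kappa = (p_o - p_e) / (1 - p_e)

where p_o is the observed proportion of agreement and p_e the proportion of agreement expected by chance; the coefficients differ only in how p_e is estimated (Gwet 2014). As a minimal sketch of kappaetc in practice (the command is available from SSC; the variable names and the wgt(linear) option shown here are illustrative assumptions, not syntax quoted from the presentation materials):

    ssc install kappaetc                         // install the user-written command from SSC
    kappaetc rater1 rater2 rater3                // one variable per rater; reports the chance-corrected coefficients
    kappaetc rater1 rater2 rater3, wgt(linear)   // weights credit partial agreement on ordinal ratings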
Date: 2017-09-20
References:
Cohen, J. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20: 37-46.
Fleiss, J. L. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin 76: 378-382.
Gwet, K. L. 2014. Handbook of Inter-Rater Reliability. 4th ed. Gaithersburg, MD: Advanced Analytics.
Downloads: http://repec.org/dsug2017/Germany17_Klein.pdf (presentation materials, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:boc:dsug17:07