Statistical inference of Gwet’s AC1 coefficient for multiple raters and binary outcomes
Tetsuji Ohyama
Communications in Statistics - Theory and Methods, 2021, vol. 50, issue 15, 3564-3572
Abstract:
Cohen’s kappa and the intraclass kappa are widely used for assessing the degree of agreement between two raters with binary outcomes. However, many authors have pointed out their paradoxical behavior, which stems from their dependence on the prevalence of the trait under study. To overcome this limitation, Gwet (2008) proposed an alternative and more stable agreement coefficient referred to as the AC1. In this paper, we discuss likelihood-based inference for the AC1 in the case of multiple raters and binary outcomes. We mainly discuss the construction of confidence intervals; hypothesis testing, sample size estimation, and a method for assessing the effect of subject covariates on agreement are also presented. The performance of the AC1 estimator and its confidence intervals is investigated in a simulation study, and an example is presented.
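To illustrate the statistic the paper studies, here is a minimal sketch of the AC1 point estimate for multiple raters and binary outcomes, following Gwet's (2008) definition: observed agreement is the average pairwise agreement per subject, and chance agreement for two categories is 2π(1−π), where π is the overall proportion of "positive" classifications. The function name `gwet_ac1` and the count-based input format are illustrative choices, not from the paper, and this covers only the point estimate, not the likelihood-based confidence intervals the paper develops.

```python
def gwet_ac1(positive_counts, n_raters):
    """Gwet's AC1 agreement coefficient for multiple raters, binary outcomes.

    positive_counts: for each subject, the number of raters who assigned
        that subject to the 'positive' category.
    n_raters: number of raters per subject (assumed the same for all
        subjects in this sketch).
    """
    n = len(positive_counts)
    r = n_raters
    # Observed agreement: for each subject, the fraction of agreeing rater
    # pairs, x(x-1) + (r-x)(r-x-1) agreeing pairs out of r(r-1), averaged
    # over subjects.
    p_a = sum(x * (x - 1) + (r - x) * (r - x - 1)
              for x in positive_counts) / (n * r * (r - 1))
    # Overall proportion of positive classifications across all ratings.
    pi = sum(positive_counts) / (n * r)
    # Gwet's chance-agreement term for two categories.
    p_e = 2 * pi * (1 - pi)
    return (p_a - p_e) / (1 - p_e)
```

For example, with three raters, perfect agreement on every subject (all counts 0 or 3) gives AC1 = 1, and mixed counts pull the coefficient below 1.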
Date: 2021
Downloads: http://hdl.handle.net/10.1080/03610926.2019.1708397 (text/html; full text restricted to subscribers)
Persistent link: https://EconPapers.repec.org/RePEc:taf:lstaxx:v:50:y:2021:i:15:p:3564-3572
DOI: 10.1080/03610926.2019.1708397
Communications in Statistics - Theory and Methods is currently edited by Debbie Iscoe