
Measuring Agreement about Ranked Decision Choices for a Single Subject

Robert H. Riffenburgh and Peter A. Johnstone
Additional contact information
Robert H. Riffenburgh: Naval Medical Center San Diego and San Diego State University
Peter A. Johnstone: Indiana University School of Medicine

The International Journal of Biostatistics, 2009, vol. 5, issue 1, 14

Abstract:

Introduction. When faced with a medical classification, clinicians often rank-order the likelihood of potential diagnoses, treatment choices, or prognoses as a way to focus on likely occurrences without dropping rarer ones from consideration. Knowing how well clinicians agree on such rankings might help extend the realm of clinical judgment farther into the purview of evidence-based medicine. If rankings by different clinicians agree better than chance, the order of assignments and their relative likelihoods may justifiably contribute to medical decisions. If the agreement is no better than chance, the ranking should not influence the medical decision.

Background. Available rank-order methods measure agreement over a set of decision choices by two rankers, or by a set of rankers over two choices (rank correlation methods), or overall agreement over a set of choices by a set of rankers (Kendall's W), but they will not measure agreement about a single decision choice across a set of rankers. Rating methods (e.g., kappa) assign multiple subjects to nominal categories rather than ranking possible choices about a single subject, and likewise will not measure agreement about a single decision choice across a set of rankers.

Method. In this article, we propose an agreement coefficient A for measuring agreement among a set of clinicians about a single decision choice and compare several potential forms of A. A takes on the value 0 when agreement is random and 1 when agreement is perfect. It is shown that A = 1 - observed disagreement / maximum disagreement. A particular form of A is recommended, and tables of critical values of A at the 5% and 10% significance levels are generated for common numbers of ranks and rankers.

Examples. In the selection of potential treatment assignments by a Tumor Board for a patient with a neck mass, there is no significant agreement about any treatment. Another example involves ranking decisions about a proposed medical research protocol by an Institutional Review Board (IRB). The decision to pass a protocol with minor revisions shows agreement at the 5% significance level, adequate for a consistent decision.
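
The abstract quotes only the general identity A = 1 - observed disagreement / maximum disagreement, not the paper's recommended form of A. As a rough illustration of how such a coefficient behaves, here is a minimal Python sketch under stated assumptions: the function name agreement_A, the choice of mean absolute pairwise rank difference as the disagreement measure, and the normalization by the most polarized possible assignment are all illustrative, not the form the authors recommend (their recommended form is scaled so that A = 0 under random agreement, which this simple normalization does not reproduce).

    from itertools import combinations

    def mean_abs_pairwise(ranks):
        # Mean absolute difference over all pairs of rankers' ranks
        # assigned to the single decision choice under study.
        pairs = list(combinations(ranks, 2))
        return sum(abs(a - b) for a, b in pairs) / len(pairs)

    def agreement_A(ranks, n_ranks):
        # Illustrative coefficient following the abstract's identity
        # A = 1 - observed disagreement / maximum disagreement.
        # Assumption: "maximum disagreement" is taken as the value when
        # the rankers split as evenly as possible between the extreme
        # ranks 1 and n_ranks.
        m = len(ranks)
        observed = mean_abs_pairwise(ranks)
        worst = [1] * (m // 2) + [n_ranks] * (m - m // 2)
        maximum = mean_abs_pairwise(worst)
        return 1.0 - observed / maximum

    # Example: five clinicians each rank one treatment option on a
    # four-choice list (hypothetical data, not from the paper).
    print(agreement_A([1, 1, 1, 1, 1], n_ranks=4))  # 1.0: perfect agreement
    print(agreement_A([1, 4, 1, 4, 1], n_ranks=4))  # 0.0: maximal split

Whether an observed A is better than chance would be judged against the paper's tables of critical values for the given numbers of ranks and rankers; those tables are not reproduced here.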

Keywords: agreement; decision making; rank-order; tumor board; IRB; Institutional Review Board
Date: 2009

Downloads: (external link)
https://doi.org/10.2202/1557-4679.1113 (text/html)
For access to full text, subscription to the journal or payment for the individual article is required.



Persistent link: https://EconPapers.repec.org/RePEc:bpj:ijbist:v:5:y:2009:i:1:n:17

Ordering information: This journal article can be ordered from
https://www.degruyter.com/journal/key/ijb/html

DOI: 10.2202/1557-4679.1113


The International Journal of Biostatistics is currently edited by Antoine Chambaz, Alan E. Hubbard and Mark J. van der Laan

More articles in The International Journal of Biostatistics from De Gruyter
Bibliographic data for series maintained by Peter Golla.

 
Handle: RePEc:bpj:ijbist:v:5:y:2009:i:1:n:17