An Interesting Problem in the Estimation of Scoring Reliability
Samuel A. Livingston
Journal of Educational and Behavioral Statistics, 2004, vol. 29, issue 3, 333-341
Abstract:
A performance assessment consisting of 10 separate exercises was scored with a randomized scoring procedure. All responses to each exercise were rated once; in addition, a randomly selected subset of the responses to each exercise received an independent second rating. Each second rating was averaged with the corresponding first rating before the scores were computed. This article presents a method for estimating the scoring reliability (interrater reliability) coefficient and the standard error of scoring for the resulting scores. The article concludes with some numerical examples showing how the reliability estimation procedure can be used to estimate the effect of varying the proportions of responses that are double-scored.
Keywords: interrater reliability; performance assessment; scoring reliability
Date: 2004
Downloads: https://journals.sagepub.com/doi/10.3102/10769986029003333 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:sae:jedbes:v:29:y:2004:i:3:p:333-341
DOI: 10.3102/10769986029003333