EconPapers    
Comparison of Automatic and Expert Teachers’ Rating of Computerized English Listening-Speaking Test

Cao Linlin

English Language Teaching, 2020, vol. 13, issue 1, 18

Abstract: Using Many-Facet Rasch Measurement (MFRM) analysis, this study explores the rating differences between one automatic computer rater and five expert teacher raters in scoring 119 students on a computerized English listening-speaking test. Results indicate that both the automatic and the teacher raters demonstrate good inter-rater reliability, though the automatic rater shows lower intra-rater reliability than the college-teacher and high-school-teacher raters under stringent infit limits. Neither the automatic nor the human raters exhibit a central-tendency or randomness effect. This research provides evidence for the automatic-rating reform of the computerized English listening-speaking test (CELST) in the Guangdong NMET and encourages the application of MFRM in actual score monitoring.

Date: 2020
References: View complete reference list from CitEc

Downloads: (external link)
https://ccsenet.org/journal/index.php/elt/article/download/0/0/41509/43064 (application/pdf)
https://ccsenet.org/journal/index.php/elt/article/view/0/41509 (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:ibn:eltjnl:v:13:y:2020:i:1:p:18


More articles in English Language Teaching from the Canadian Center of Science and Education. Contact information at EDIRC.
Bibliographic data for this series is maintained by the Canadian Center of Science and Education.

 
Page updated 2025-03-19
Handle: RePEc:ibn:eltjnl:v:13:y:2020:i:1:p:18