Combining Human and Automated Scoring Methods in Experimental Assessments of Writing: A Case Study Tutorial

Reagan Mozer, Luke Miratrix, Jackie Eunjung Relyea and James S. Kim
Additional contact information
Reagan Mozer: Bentley University
Luke Miratrix: Harvard University Graduate School of Education
Jackie Eunjung Relyea: North Carolina State University
James S. Kim: Harvard University Graduate School of Education

Journal of Educational and Behavioral Statistics, 2024, vol. 49, issue 5, 780-816

Abstract: In a randomized trial that collects text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by human raters. An impact analysis can then be conducted to compare treatment and control groups, using the hand-coded scores as a measured outcome. This process is both time- and labor-intensive, which creates a persistent barrier for large-scale assessments of text. Furthermore, enriching one’s understanding of a found impact on text outcomes via secondary analyses can be difficult without additional scoring efforts. The purpose of this article is to provide a pipeline for using machine-based text-analytic and data-mining tools to augment traditional text-based impact analysis by analyzing impacts across an array of automatically generated text features. In this way, we can explore what an overall impact signifies in terms of how the text has evolved due to treatment. Through a case study based on a recent field trial in education, we show that machine learning can indeed enrich experimental evaluations of text by providing a more comprehensive and fine-grained picture of the mechanisms that lead to stronger argumentative writing in a first- and second-grade content literacy intervention. Relying exclusively on human scoring, by contrast, is a lost opportunity. Overall, the workflow and analytical strategy we describe can serve as a template for researchers interested in performing their own experimental evaluations of text.
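
For readers who want a concrete sense of the workflow the abstract sketches, the snippet below shows, in rough outline, what analyzing impacts across an array of automatically generated text features can look like. It is a minimal illustration, not the authors' pipeline: the features (word count, type-token ratio, average word length), the Welch t-test, and the Bonferroni adjustment are stand-in assumptions for the richer feature set and inferential machinery used in the article.

# Illustrative sketch only: the features below are simple stand-ins for the
# automatically generated text features described in the article.
from scipy.stats import ttest_ind


def text_features(doc):
    """Score one document on a few simple, automatically computable features."""
    tokens = doc.lower().split()
    n = len(tokens)
    return {
        "word_count": n,
        "type_token_ratio": len(set(tokens)) / n if n else 0.0,
        "avg_word_length": sum(len(t) for t in tokens) / n if n else 0.0,
    }


def feature_impacts(treatment_docs, control_docs, alpha=0.05):
    """Estimate a treatment impact per feature (difference in means), using a
    Welch t-test and a Bonferroni-adjusted threshold to account for testing
    many features at once."""
    treat = [text_features(d) for d in treatment_docs]
    ctrl = [text_features(d) for d in control_docs]
    names = list(treat[0])
    results = {}
    for name in names:
        t_vals = [f[name] for f in treat]
        c_vals = [f[name] for f in ctrl]
        test = ttest_ind(t_vals, c_vals, equal_var=False)
        results[name] = {
            "impact": sum(t_vals) / len(t_vals) - sum(c_vals) / len(c_vals),
            "p_value": test.pvalue,
            "significant": test.pvalue < alpha / len(names),
        }
    return results


# Toy documents standing in for students' argumentative writing samples.
treatment = [
    "Dinosaurs went extinct because an asteroid struck the Earth and changed the climate.",
    "The evidence shows that habitat loss is the main reason species disappear.",
    "I think recycling helps because it reduces the trash that harms animals.",
]
control = [
    "Dinosaurs are big.",
    "Animals live in forests.",
    "Recycling is good.",
]
print(feature_impacts(treatment, control))

In practice, the hand-coded human scores remain the primary outcome for the confirmatory impact analysis; the machine-generated feature impacts serve as secondary, exploratory evidence about how the writing changed under treatment.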

Keywords: text analysis; randomized controlled trial; automated scoring; argumentative writing
Date: 2024

Downloads: https://journals.sagepub.com/doi/10.3102/10769986231207886 (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:sae:jedbes:v:49:y:2024:i:5:p:780-816

DOI: 10.3102/10769986231207886

Handle: RePEc:sae:jedbes:v:49:y:2024:i:5:p:780-816