An Empirical Study of Design Parameters for Assessing Differential Impacts for Students in Group Randomized Trials

Andrew P. Jaciw, Li Lin and Boya Ma

Evaluation Review, 2016, vol. 40, issue 5, 410-443

Abstract: Background: Prior research has investigated design parameters for assessing average program impacts on achievement outcomes with cluster randomized trials (CRTs). Less is known about parameters important for assessing differential impacts. Objectives: This article develops a statistical framework for designing CRTs to assess differences in impact among student subgroups and presents initial estimates of critical parameters. Research design: Effect sizes and minimum detectable effect sizes for average and differential impacts are calculated before and after conditioning on effects of covariates, using results from several CRTs. Relative sensitivities to detect average and differential impacts are also examined. Subjects: Student outcomes from six CRTs are analyzed. Measures: Achievement in math, science, reading, and writing. Results: The ratio of between-cluster variation in the slope of the moderator to total variance—the “moderator gap variance ratio”—is important for designing studies to detect differences in impact between student subgroups. This quantity is the analogue of the intraclass correlation coefficient. Typical values were .02 for gender and .04 for socioeconomic status. For the studies considered, estimates of differential impact were in many cases larger than estimates of average impact, and after conditioning on effects of covariates, similar power was achieved for detecting average and differential impacts of the same size. Conclusions: Measuring differential impacts is important for addressing questions of equity and generalizability and for guiding interpretation of subgroup impact findings. Adequate power for doing so is in some cases achievable with CRTs designed to measure average impacts. Continued collection of parameters for assessing differential impacts is the next step.
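The abstract's analogy between the moderator gap variance ratio and the intraclass correlation coefficient suggests a back-of-the-envelope power calculation. Below is a minimal sketch in Python, assuming the standard two-level minimum detectable effect size (MDES) approximation for CRTs (Bloom-style, with a multiplier of roughly 2.8 for 80% power at a two-tailed alpha of .05) and substituting the moderator gap variance ratio for the intraclass correlation when targeting differential impacts. The function name, the defaults, and the substitution itself are illustrative assumptions based on the abstract's stated analogy, not the article's exact formulas.

```python
import math

def mdes_cluster_randomized(var_ratio, n_clusters, n_per_cluster,
                            p_treat=0.5, r2_between=0.0, r2_within=0.0,
                            multiplier=2.8):
    """Approximate MDES for a two-level cluster randomized trial.

    var_ratio: the intraclass correlation for average impacts, or (as an
        assumption following the abstract's analogy) the "moderator gap
        variance ratio" for differential impacts between student
        subgroups; typical values reported are ~.02 for gender and ~.04
        for socioeconomic status.
    multiplier: ~2.8 corresponds to 80% power at two-tailed alpha = .05.
    r2_between / r2_within: variance explained by covariates at the
        cluster and student levels (conditioning on covariates).
    """
    j, n, p = n_clusters, n_per_cluster, p_treat
    between = var_ratio * (1 - r2_between) / (p * (1 - p) * j)
    within = (1 - var_ratio) * (1 - r2_within) / (p * (1 - p) * j * n)
    return multiplier * math.sqrt(between + within)

# Example: 40 schools of 60 students, moderator gap variance ratio .04 (SES)
print(round(mdes_cluster_randomized(0.04, 40, 60), 3))
```

With the abstract's typical value of .04 for socioeconomic status, 40 clusters of 60 students, and no covariates, this sketch yields an MDES of roughly 0.21 standard deviations; setting nonzero R-squared values shrinks it, consistent with the abstract's finding that conditioning on covariates brings power for detecting differential impacts in line with that for average impacts of the same size.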

Keywords: education; design and evaluation of programs and policies
Date: 2016

Downloads: https://journals.sagepub.com/doi/10.1177/0193841X16659600 (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:sae:evarev:v:40:y:2016:i:5:p:410-443

DOI: 10.1177/0193841X16659600


More articles in Evaluation Review
Bibliographic data for series maintained by SAGE Publications.

 
Handle: RePEc:sae:evarev:v:40:y:2016:i:5:p:410-443