Experimental education research: rethinking why, how and when to use random assignment
Sam Sims,
Matthew Inglis,
Hugues Lortie-Forgues,
Ben Styles and
Ben Weidmann
Additional contact information
Sam Sims: UCL Centre for Education Policy and Equalising Opportunities, University College London
Matthew Inglis: Centre for Mathematical Cognition, Loughborough University
Hugues Lortie-Forgues: Centre for Mathematical Cognition, Loughborough University
Ben Styles: NFER
Ben Weidmann: Skills Lab, Harvard University
No 23-07, CEPEO Working Paper Series from UCL Centre for Education Policy and Equalising Opportunities
Over the last twenty years, education researchers have increasingly conducted randomised experiments with the goal of informing the decisions of educators and policymakers. Such experiments have generally employed broad, consequential, standardised outcome measures in the hope that this would allow decision makers to compare the effectiveness of different approaches. However, a combination of small effect sizes, wide confidence intervals, and treatment effect heterogeneity means that researchers have largely failed to achieve this goal. We argue that quasi-experimental methods and multi-site trials will often be superior for informing educators' decisions, on the grounds that they can achieve greater precision and better address heterogeneity. Experimental research remains valuable in applied education research. However, it should primarily be used to test theoretical models, which can in turn inform educators' mental models, rather than attempting to directly inform decision making. Since comparable effect size estimates are not of interest when testing educational theory, researchers can and should improve the power of theory-informing experiments by using more closely aligned (i.e., valid) outcome measures. We argue that this approach would reduce wasteful research spending and make the research that does go ahead more statistically informative, thus improving the return on investment in educational research.
Keywords: randomized controlled trials; education; research; experiments; policy
JEL-codes: C90 C93 I20 I21
Pages: 30 pages
Date: 2023-07, Revised 2023-08
New Economics Papers: this item is included in nep-exp
Downloads:
https://repec-cepeo.ucl.ac.uk/cepeow/cepeowp23-07r1.pdf Revised version, 2023 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:ucl:cepeow:23-07
More papers in CEPEO Working Paper Series from UCL Centre for Education Policy and Equalising Opportunities.
Bibliographic data for series maintained by Jake Anders.