More Than One Replication Study Is Needed for Unambiguous Tests of Replication
Larry V. Hedges and
Jacob M. Schauer
Additional contact information
Larry V. Hedges: Northwestern University
Jacob M. Schauer: Institute for Policy Research, Northwestern University
Journal of Educational and Behavioral Statistics, 2019, vol. 44, issue 5, 543-570
Abstract:
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the power of the statistical test for whether two studies obtain the same effect is smaller than the power of either study to detect an effect in the first place. Thus, unless the original study and the replication study have unusually high power (e.g., power of 98%), a single replication study will not have adequate sensitivity to provide an unambiguous evaluation of replication.
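The abstract's central claim can be illustrated with a simple power calculation. The sketch below (an illustration of the general statistical point, not the authors' own derivation) assumes two-sided z-tests with known, equal standard errors: because the difference of two independent estimates has a standard error inflated by a factor of sqrt(2), a test that the original and replication effects differ by some amount is less powerful than a single study's test for an effect of the same size.

```python
from statistics import NormalDist

# Illustrative sketch: compare the power of a single study's two-sided z-test
# for an effect `delta` standard errors from zero with the power of a test
# that an original and a replication study differ by that same amount.
# The difference of two independent estimates has standard error sqrt(2)
# times larger, so the replication test is less sensitive.
Z = NormalDist()
ALPHA = 0.05
CRIT = Z.inv_cdf(1 - ALPHA / 2)  # approx. 1.96 for a two-sided 5% test

def power_single(delta):
    """Power of one study's z-test to detect an effect delta SEs from zero."""
    return 1 - Z.cdf(CRIT - delta) + Z.cdf(-CRIT - delta)

def power_replication(delta):
    """Power to detect a between-study difference of delta SEs using the
    difference of two independent estimates (SE inflated by sqrt(2))."""
    return power_single(delta / 2 ** 0.5)

for d in (2.0, 2.8, 3.5):
    print(f"delta={d}: single-study power={power_single(d):.3f}, "
          f"replication-test power={power_replication(d):.3f}")
```

For example, an effect 2.8 standard errors from zero gives a single study roughly 80% power, but a test for a between-study difference of the same magnitude has only about 50% power, consistent with the article's point that one replication rarely yields an unambiguous verdict.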
Keywords: educational policy; evaluation; experimental design; meta-analysis; program evaluation; research methodology; validity/reliability
Date: 2019
Citations: 9 (in EconPapers)
Full text: https://journals.sagepub.com/doi/10.3102/1076998619852953 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:sae:jedbes:v:44:y:2019:i:5:p:543-570
DOI: 10.3102/1076998619852953
Bibliographic data for this series is maintained by SAGE Publications.