How Best to Quantify Replication Success? A Simulation Study on the Comparison of Replication Success Metrics
Henk Kiers and
Don van Ravenzwaaij
Additional contact information
Don van Ravenzwaaij: University of Groningen
No. wvdjf, MetaArXiv from Center for Open Science
To overcome the frequently debated crisis of confidence, replication studies are becoming increasingly common. Multiple frequentist and Bayesian measures have been proposed to evaluate whether a replication is successful, but little is known about which method best captures replication success. We addressed this question in a simulation study by comparing a number of quantitative measures of replication success on their ability to draw the correct inference when the underlying truth is known, while taking publication bias into account. Our results show that Bayesian metrics slightly outperform frequentist metrics across the board. Meta-analytic approaches generally outperform metrics that evaluate single studies by a small margin, except under extreme publication bias, where this pattern reverses.
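The logic of such a simulation can be illustrated with a minimal sketch. This is not the authors' actual simulation code: the effect sizes, sample sizes, the significance-based publication filter, and the two illustrative metrics (replication significance in the original's direction vs. a fixed-effect meta-analysis of both studies) are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(delta, n, n_sim=10_000, bias=True):
    """Simulate original/replication study pairs with per-group size n.

    Effect estimates are drawn as d_hat ~ Normal(delta, se) with
    se = sqrt(2/n), the usual large-sample approximation for a
    two-sample standardized mean difference. With bias=True, original
    estimates are redrawn until significant at the two-sided 5% level,
    a crude stand-in for publication bias.
    """
    se = np.sqrt(2 / n)
    orig = rng.normal(delta, se, n_sim)
    if bias:
        # Rejection sampling: keep only "published" (significant) originals.
        while True:
            mask = np.abs(orig / se) < 1.96
            if not mask.any():
                break
            orig[mask] = rng.normal(delta, se, mask.sum())
    rep = rng.normal(delta, se, n_sim)

    # Single-study metric: replication significant in the same direction.
    single = (np.abs(rep / se) > 1.96) & (np.sign(rep) == np.sign(orig))
    # Meta-analytic metric: fixed-effect average of both estimates,
    # whose standard error shrinks by a factor sqrt(2).
    meta_z = (orig + rep) / 2 / (se / np.sqrt(2))
    meta = np.abs(meta_z) > 1.96
    return single.mean(), meta.mean()

# Null effect: the meta-analytic metric is dragged toward "success"
# by the inflated published original; the single-study metric is not.
null_single, null_meta = simulate(delta=0.0, n=50)
# True medium effect: both metrics succeed often.
alt_single, alt_meta = simulate(delta=0.5, n=50)
print(null_single, null_meta)
print(alt_single, alt_meta)
```

Running this sketch shows the qualitative pattern the abstract describes: when the true effect is zero but originals are filtered through publication bias, the meta-analytic metric declares "success" far more often than the single-study metric, whereas both perform well when a genuine effect exists.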
Persistent link: https://EconPapers.repec.org/RePEc:osf:metaar:wvdjf