How Best to Quantify Replication Success? A Simulation Study on the Comparison of Replication Success Metrics

Jasmine Muradchanian, Rink Hoekstra, Henk Kiers and Don van Ravenzwaaij
Additional contact information
Don van Ravenzwaaij: University of Groningen

No wvdjf, MetaArXiv from Center for Open Science

Abstract: To overcome the frequently debated crisis of confidence, replicating studies is becoming increasingly common. Multiple frequentist and Bayesian measures have been proposed to evaluate whether a replication is successful, but little is known about which method best captures replication success. We addressed this question in a simulation study by comparing a number of quantitative measures of replication success with respect to their ability to draw the correct inference when the underlying truth is known, while taking publication bias into account. Our results show that Bayesian metrics seem to slightly outperform frequentist metrics across the board. Generally, meta-analytic approaches seem to slightly outperform metrics that evaluate single studies, except in the scenario of extreme publication bias, where this pattern reverses.
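The simulation logic described in the abstract can be illustrated with a minimal sketch. This is not the authors' actual design; it is a hypothetical setup assuming one-sample studies with known unit standard deviation, a significance filter on the original study as the publication-bias mechanism, and two example metrics: a single-study frequentist criterion (significant, same-sign replication) and a fixed-effect meta-analysis pooling both studies.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
Z_CRIT = stats.norm.ppf(0.975)  # two-sided 5% critical value

def simulate_pair(delta, n=50, pub_bias=True, max_tries=1000):
    """One published original study (filtered for a significant positive
    result when pub_bias is True, mimicking publication bias) plus one
    replication. With sd fixed at 1, the sample mean is the observed
    standardized effect (Cohen's d estimate)."""
    d_orig = rng.normal(delta, 1 / np.sqrt(n))
    for _ in range(max_tries):
        if not pub_bias or d_orig * np.sqrt(n) > Z_CRIT:
            break
        d_orig = rng.normal(delta, 1 / np.sqrt(n))
    d_rep = rng.normal(delta, 1 / np.sqrt(n))
    return d_orig, d_rep, n

def single_study_success(d_rep, n):
    # Single-study metric: the replication alone is significant and positive.
    return d_rep * np.sqrt(n) > Z_CRIT

def meta_analytic_success(d_orig, d_rep, n):
    # Meta-analytic metric: fixed-effect pooling of both studies (known sd).
    pooled = (d_orig + d_rep) / 2
    return pooled * np.sqrt(2 * n) > Z_CRIT

# "Success" rates when the true effect is present (delta = 0.5)
# and when it is absent (delta = 0), under publication bias.
results = {}
for delta in (0.5, 0.0):
    reps = 500
    single = meta = 0
    for _ in range(reps):
        d_o, d_r, n = simulate_pair(delta)
        single += single_study_success(d_r, n)
        meta += meta_analytic_success(d_o, d_r, n)
    results[delta] = (single / reps, meta / reps)

print(results)
```

Under this toy setup the pooled metric has higher power when the effect is real, but because the original estimate is inflated by the significance filter, it also declares "success" far more often than the single-study metric when the true effect is zero, which is the kind of reversal under publication bias the abstract describes.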

Date: 2020-08-05


DOI: 10.31219/


Page updated 2020-09-12
Handle: RePEc:osf:metaar:wvdjf