
Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015

Colin F. Camerer, Anna Dreber, Felix Holzmeister, Teck Ho, Jürgen Huber, Magnus Johannesson, Michael Kirchler, Gideon Nave, Brian A. Nosek, Thomas Pfeiffer, Adam Altmejd, Nick Buttrick, Taizan Chan, Yiling Chen, Eskil Forsell, Anup Gampa, Emma Heikensten, Lily Hummer, Taisuke Imai, Siri Isaksson, Dylan Manfredi, Julia Rose, Eric-Jan Wagenmakers and Hang Wu
Additional contact information
Colin F. Camerer: California Institute of Technology
Jürgen Huber: University of Innsbruck
Michael Kirchler: University of Innsbruck
Gideon Nave: University of Pennsylvania
Brian A. Nosek: University of Virginia
Thomas Pfeiffer: New Zealand Institute for Advanced Study
Nick Buttrick: University of Virginia
Taizan Chan: National University of Singapore
Yiling Chen: Harvard University
Eskil Forsell: Spotify Sweden AB
Anup Gampa: University of Virginia
Emma Heikensten: Stockholm School of Economics
Lily Hummer: Center for Open Science
Taisuke Imai: LMU Munich
Siri Isaksson: Stockholm School of Economics
Dylan Manfredi: University of Pennsylvania
Julia Rose: University of Innsbruck
Eric-Jan Wagenmakers: University of Amsterdam
Hang Wu: Harbin Institute of Technology

Nature Human Behaviour, 2018, vol. 2, issue 9, 637-644

Abstract: Being able to replicate scientific findings is crucial for scientific progress (refs 1–15). We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015 (refs 16–36). The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times larger than in the original studies. We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators. Consistent with these results, the estimated true-positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility. Furthermore, we find that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.

Date: 2018
Citations: 84 (tracked in EconPapers)

Access to the full text of the articles in this series is restricted.

Related works:
Working Paper: Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015 (2018)
Working Paper: Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015 (2018)

DOI: 10.1038/s41562-018-0399-z

Nature Human Behaviour is currently edited by Stavroula Kousta

More articles in Nature Human Behaviour from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.
Handle: RePEc:nat:nathum:v:2:y:2018:i:9:d:10.1038_s41562-018-0399-z