Examining the replicability of online experiments selected by a decision market
Felix Holzmeister,
Magnus Johannesson,
Colin F. Camerer,
Yiling Chen,
Teck-Hua Ho,
Suzanne Hoogeveen,
Juergen Huber,
Noriko Imai,
Taisuke Imai,
Lawrence Jin,
Michael Kirchler,
Alexander Ly,
Benjamin Mandl,
Dylan Manfredi,
Gideon Nave,
Brian A. Nosek,
Thomas Pfeiffer,
Alexandra Sarafoglou,
Rene Schwaiger,
Eric-Jan Wagenmakers,
Viking Waldén and
Anna Dreber
Additional contact information
Colin F. Camerer: California Institute of Technology
Yiling Chen: Harvard University
Teck-Hua Ho: Nanyang Technological University
Suzanne Hoogeveen: Utrecht University
Juergen Huber: University of Innsbruck
Noriko Imai: Osaka University
Taisuke Imai: Osaka University
Lawrence Jin: National University of Singapore
Michael Kirchler: University of Innsbruck
Alexander Ly: University of Amsterdam
Benjamin Mandl: Independent Researcher
Dylan Manfredi: University of Pennsylvania
Gideon Nave: University of Pennsylvania
Brian A. Nosek: University of Virginia
Thomas Pfeiffer: Massey University
Alexandra Sarafoglou: University of Amsterdam
Rene Schwaiger: University of Innsbruck
Eric-Jan Wagenmakers: University of Amsterdam
Viking Waldén: Sveriges Riksbank
Nature Human Behaviour, 2025, vol. 9, issue 2, 316-330
Abstract:
Here we test the feasibility of using decision markets to select studies for replication and provide evidence about the replicability of online experiments. Social scientists (n = 162) traded on the outcome of close replications of 41 systematically selected MTurk social science experiments published in PNAS 2015–2018, knowing that the 12 studies with the lowest and the 12 with the highest final market prices would be selected for replication, along with 2 randomly selected studies. The replication rate, based on the statistical significance indicator, was 83% for the top-12 and 33% for the bottom-12 group. Overall, 54% of the studies were successfully replicated, with replication effect size estimates averaging 45% of the original effect size estimates. The replication rate varied between 54% and 62% for alternative replication indicators. The observed replicability of MTurk experiments is comparable to that of previous systematic replication projects involving laboratory experiments.
Date: 2025
Downloads (external link): https://www.nature.com/articles/s41562-024-02062-9 Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:nat:nathum:v:9:y:2025:i:2:d:10.1038_s41562-024-02062-9
Ordering information: This journal article can be ordered from
https://www.nature.com/nathumbehav/
DOI: 10.1038/s41562-024-02062-9
Nature Human Behaviour is currently edited by Stavroula Kousta
More articles in Nature Human Behaviour from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.