Computational Reproducibility in Finance: Evidence from 1,000 Tests
Christophe Pérignon, Olivier Akmansoy, Christophe Hurlin, Anna Dreber, Felix Holzmeister, Jürgen Huber, Magnus Johannesson, Michael Kirchler, Albert Menkveld, Michael Razen and Utz Weitzel
Additional contact information
Christophe Hurlin: LEO - Laboratoire d'Économie d'Orléans [2022-...] - Université d'Orléans - Université de Tours - Université Clermont Auvergne
Post-Print from HAL
Abstract:
We analyze the computational reproducibility of more than 1,000 empirical answers to 6 research questions in finance provided by 168 research teams. Running the researchers' code on the same raw data regenerates exactly the same results only 52% of the time. Reproducibility is higher for researchers with better coding skills and those exerting more effort. It is lower for more technical research questions, more complex code, and results lying in the tails of the distribution. Researchers exhibit overconfidence when assessing the reproducibility of their own research. We provide guidelines for finance researchers and discuss implementable reproducibility policies for academic journals.
Date: 2024-11-01
Published in Review of Financial Studies, 2024, 37 (11), pp.3558-3593. ⟨10.1093/rfs/hhae029⟩
Related works:
Journal Article: Computational Reproducibility in Finance: Evidence from 1,000 Tests (2024) 
Working Paper: Computational Reproducibility in Finance: Evidence from 1,000 Tests (2022) 
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-04797779
DOI: 10.1093/rfs/hhae029