Channeling Fisher: Randomization Tests and the Statistical Insignificance of Seemingly Significant Experimental Results
Alwyn Young
The Quarterly Journal of Economics, 2019, vol. 134, issue 2, 557-598
Abstract:
I follow R. A. Fisher's The Design of Experiments (1935), using randomization statistical inference to test the null hypothesis of no treatment effects in a comprehensive sample of 53 experimental papers drawn from the journals of the American Economic Association. In the average paper, randomization tests of the significance of individual treatment effects find 13% to 22% fewer significant results than are found using authors' methods. In joint tests of multiple treatment effects appearing together in tables, randomization tests yield 33% to 49% fewer statistically significant results than conventional tests. Bootstrap and jackknife methods support and confirm the randomization results.
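For readers unfamiliar with the method, the sketch below illustrates the core idea of Fisher-style randomization inference under the sharp null of no treatment effect: outcomes are held fixed, treatment labels are repeatedly re-randomized, and the test statistic is recomputed each time, so the p-value reflects only the experiment's own assignment mechanism. This is a minimal, hypothetical Python illustration, not the paper's code; the data, the function name, and the simple difference-in-means statistic are assumptions for exposition.

    # Minimal sketch of a Fisher randomization test (hypothetical example,
    # not the paper's actual code or sample).
    import numpy as np

    rng = np.random.default_rng(0)

    def randomization_p_value(outcome, treated, n_draws=10_000, rng=rng):
        """Two-sided randomization p-value for the sharp null of no effect.

        Under the sharp null, outcomes are fixed and only the treatment
        labels are random, so we re-randomize the labels and recompute
        the difference in means each time.
        """
        observed = outcome[treated == 1].mean() - outcome[treated == 0].mean()
        count = 0
        for _ in range(n_draws):
            permuted = rng.permutation(treated)
            stat = outcome[permuted == 1].mean() - outcome[permuted == 0].mean()
            if abs(stat) >= abs(observed):
                count += 1
        # Count the observed assignment itself to keep the p-value exact.
        return (count + 1) / (n_draws + 1)

    # Hypothetical experiment: 100 units, half treated, modest true effect.
    treated = rng.permutation(np.repeat([0, 1], 50))
    outcome = 0.3 * treated + rng.normal(size=100)
    print(f"randomization p-value: {randomization_p_value(outcome, treated):.3f}")

The same permutation logic extends to regression coefficients or joint test statistics by substituting them for the difference in means; the paper's bootstrap and jackknife checks are alternative resampling schemes built on the same principle of recomputing the statistic over artificial re-draws.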
Date: 2019
Citations: 215 (tracked in EconPapers)
Downloads: http://hdl.handle.net/10.1093/qje/qjy029 (application/pdf; full text restricted to subscribers)
Persistent link: https://EconPapers.repec.org/RePEc:oup:qjecon:v:134:y:2019:i:2:p:557-598
Ordering information: this journal article can be ordered from https://academic.oup.com/journals