Channeling Fisher: randomization tests and the statistical insignificance of seemingly significant experimental results
Alwyn Young
LSE Research Online Documents on Economics from London School of Economics and Political Science, LSE Library
Abstract:
I follow R. A. Fisher's The Design of Experiments (1935), using randomization statistical inference to test the null hypothesis of no treatment effects in a comprehensive sample of 53 experimental papers drawn from the journals of the American Economic Association. In the average paper, randomization tests of the significance of individual treatment effects find 13% to 22% fewer significant results than are found using authors’ methods. In joint tests of multiple treatment effects appearing together in tables, randomization tests yield 33% to 49% fewer statistically significant results than conventional tests. Bootstrap and jackknife methods support and confirm the randomization results.
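Illustration: the following is a minimal sketch (not the author's actual code) of the Fisher randomization test the abstract describes, assuming a simple difference-in-means test statistic and complete randomization of a binary treatment. Under the sharp null of no treatment effect, every re-randomization of the treatment assignment is equally likely, so the p-value is the share of reassignments whose statistic is at least as extreme as the observed one.

```python
import numpy as np

def randomization_test(outcome, treated, n_draws=10000, seed=0):
    """Two-sided randomization p-value for a difference in means.

    outcome : array of outcomes, one per unit
    treated : boolean array, True where the unit was treated
    """
    rng = np.random.default_rng(seed)
    observed = outcome[treated].mean() - outcome[~treated].mean()
    count = 0
    for _ in range(n_draws):
        # Re-randomize the treatment assignment under the sharp null.
        perm = rng.permutation(treated)
        stat = outcome[perm].mean() - outcome[~perm].mean()
        if abs(stat) >= abs(observed):
            count += 1
    return count / n_draws

# Example with simulated data in which the true effect is zero,
# so the p-value should be far from significant on average.
rng = np.random.default_rng(1)
y = rng.normal(size=100)
d = np.zeros(100, dtype=bool)
d[:50] = True
print(randomization_test(y, rng.permutation(d)))
```

The paper's actual tests use the authors' own regression specifications and joint tests across multiple treatment effects; the difference in means above is only the simplest instance of the method.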
JEL codes: C12, C90
Pages: 42
Date: 2019-05-01
New Economics Papers: this item is included in nep-ecm and nep-exp
Citations: 142 (in EconPapers)
Published in Quarterly Journal of Economics, May 2019, 134(2), pp. 557–598. ISSN: 0033-5533
Downloads: http://eprints.lse.ac.uk/101401/ (open access version, PDF)
Persistent link: https://EconPapers.repec.org/RePEc:ehl:lserod:101401
More papers in LSE Research Online Documents on Economics from London School of Economics and Political Science, LSE Library, Portugal Street, London, WC2A 2HD, U.K. Contact information at EDIRC.
Bibliographic data for series maintained by LSERO Manager (lseresearchonline@lse.ac.uk).