Testing for Underpowered Literatures
Stefan Faridani
Papers from arXiv.org
Abstract:
How many experimental studies would have come to different conclusions had they been run on larger samples? I show how to estimate the expected number of statistically significant results that a set of experiments would have reported had their sample sizes all been counterfactually increased. The proposed deconvolution estimator is asymptotically normal and adjusts for publication bias. Unlike related methods, this approach requires no assumptions of any kind about the distribution of true treatment effects. An application to randomized controlled trials (RCTs) published in economics journals finds that doubling every sample would increase the power of t-tests by 7.2 percentage points on average. This gain is smaller than for non-RCTs and comparable to that for systematic replications in laboratory psychology, where prior studies enabled more accurate power calculations. This suggests that RCTs are, on average, relatively insensitive to sample size increases. Funders should generally consider sponsoring more experiments rather than fewer, larger ones.
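The counterfactual exercise described above rests on a standard fact: a t-statistic's noncentrality scales with the square root of the sample size, so doubling n shrinks the standard error by a factor of sqrt(2) and raises power accordingly. The following is a minimal illustrative sketch of that scaling using a normal approximation to the t-test; it is not the paper's deconvolution estimator, and the effect size and standard error shown are hypothetical values chosen for the example.

    import numpy as np
    from scipy.stats import norm

    def power_two_sided(theta, se, alpha=0.05):
        """Power of a two-sided z-test for a true effect theta with standard error se."""
        z_crit = norm.ppf(1 - alpha / 2)
        ncp = theta / se  # noncentrality: the t-statistic's expected value
        return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

    # Hypothetical study: true effect 0.2, standard error 0.1 at the original n.
    theta, se = 0.2, 0.1
    print(power_two_sided(theta, se))               # power at the original sample size
    # Doubling n divides the standard error by sqrt(2):
    print(power_two_sided(theta, se / np.sqrt(2)))  # counterfactual power at 2n

For a single study this gain is mechanical; the paper's contribution is estimating the average gain across a literature when the true effects are unobserved and the published t-statistics are subject to publication bias.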
Date: 2024-06, Revised 2025-04
New Economics Papers: this item is included in nep-ecm and nep-exp
Downloads: http://arxiv.org/pdf/2406.13122 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2406.13122