Screening $p$-Hackers: Dissemination Noise as Bait
Federico Echenique and Kevin He
Papers from arXiv.org
Abstract:
We show that adding noise before publishing data effectively screens $p$-hacked findings: spurious explanations produced by fitting many statistical models (data mining). Noise creates "baits" that affect two types of researchers differently. Uninformed $p$-hackers, who are fully ignorant of the true mechanism and engage in data mining, often fall for baits. Informed researchers, who start with an ex-ante hypothesis, are minimally affected. We show that as the number of observations grows large, dissemination noise asymptotically achieves optimal screening. In a tractable special case where the informed researchers' theory can identify the true causal mechanism with very little data, we characterize the optimal level of dissemination noise and highlight the relevant trade-offs. Dissemination noise is a tool that statistical agencies currently use to protect privacy. We argue this existing practice can be repurposed to screen $p$-hackers and thus improve research credibility.
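The screening idea in the abstract can be illustrated with a small Monte Carlo sketch. All of the choices below (sample sizes, the noise level, and using the best absolute correlation as the p-hacker's selection rule) are illustrative assumptions for exposition, not the paper's actual model: one true predictor drives the outcome, dissemination noise is added before "publication", a data-mining researcher picks whichever predictor fits the noisy data best, and an informed researcher tests the true predictor directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(n=200, k_null=60, beta=0.5, noise_sd=5.0):
    """One simulated dataset: column 0 is the true cause, the rest are nulls."""
    X = rng.standard_normal((n, k_null + 1))
    y = beta * X[:, 0] + rng.standard_normal(n)      # true data-generating process
    y_pub = y + noise_sd * rng.standard_normal(n)    # dissemination noise added before release

    # p-hacker mines the published (noisy) data: picks the best-fitting predictor
    corrs = [abs(np.corrcoef(X[:, j], y_pub)[0, 1]) for j in range(k_null + 1)]
    hacked = int(np.argmax(corrs))
    informed = 0                                     # informed researcher tests the true cause

    clean_corr = lambda j: abs(np.corrcoef(X[:, j], y)[0, 1])
    # returns: informed finding's true-data fit, hacked finding's true-data fit,
    # and whether the p-hacker took a bait (selected a null predictor)
    return clean_corr(informed), clean_corr(hacked), hacked != 0

results = [trial() for _ in range(50)]
informed_mean = np.mean([r[0] for r in results])
hacked_mean = np.mean([r[1] for r in results])
bait_rate = np.mean([r[2] for r in results])
print(f"informed fit: {informed_mean:.2f}  hacked fit: {hacked_mean:.2f}  bait rate: {bait_rate:.2f}")
```

In this toy setup the informed researcher's finding retains a strong correlation with the true outcome, while the p-hacker's mined finding is usually a bait (a null predictor that fit the noise) and largely fails against the noise-free data, which is the screening effect the abstract describes.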
Date: 2021-03, Revised 2024-03
Published in Proceedings of the National Academy of Sciences 121(21):e2400787121, May 2024
Downloads: http://arxiv.org/pdf/2103.09164 Latest version (application/pdf)
Related works:
Journal Article: Screening p-hackers: Dissemination noise as bait (2024)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2103.09164