Assessing Inference Methods
Bruno Ferman
Papers from arXiv.org
Abstract:
We analyze different types of simulations that applied researchers can use to assess whether their inference methods reliably control false-positive rates. We show that different assessments involve trade-offs, varying in the types of problems they may detect, finite-sample performance, susceptibility to sequential-testing distortions, susceptibility to cherry-picking, and implementation complexity. We also show that a commonly used simulation to assess inference methods in shift-share designs can lead to misleading conclusions and propose alternatives. Overall, we provide novel insights and recommendations for applied researchers on how to choose, implement, and interpret inference assessments in their empirical applications.
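To make the idea concrete, below is a minimal sketch (not taken from the paper) of one common assessment in this spirit: a placebo simulation that draws a treatment indicator unrelated to the outcome, tests the null of no effect with OLS and heteroskedasticity-robust (HC1) standard errors, and records how often the test rejects at the nominal 5% level. The variable names, data-generating process, and choice of robust standard errors are illustrative assumptions, not the paper's design.

```python
# Illustrative placebo simulation: check whether a test controls the
# false-positive rate when there is no true treatment effect.
# (Sketch only; DGP and inference choices are assumptions, not the paper's.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_sims, alpha = 50, 2000, 0.05
rejections = 0

for _ in range(n_sims):
    d = rng.binomial(1, 0.5, size=n)       # placebo treatment, no true effect
    y = rng.normal(size=n)                 # outcome generated independently of d
    X = np.column_stack([np.ones(n), d])   # intercept + placebo regressor

    # OLS estimate of the placebo effect
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta

    # HC1 heteroskedasticity-robust variance estimator
    meat = X.T @ (X * resid[:, None] ** 2)
    V = n / (n - X.shape[1]) * XtX_inv @ meat @ XtX_inv

    # Two-sided test of H0: no effect of the placebo treatment
    t_stat = beta[1] / np.sqrt(V[1, 1])
    p_val = 2 * stats.norm.sf(abs(t_stat))
    rejections += p_val < alpha

print(f"Empirical rejection rate at nominal {alpha:.0%} level: "
      f"{rejections / n_sims:.3f}")
```

If the inference method controls size, the reported rejection rate should be close to the nominal 5%; substantially higher rates signal over-rejection in this setting.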
Date: 2019-12, Revised 2025-10
New Economics Papers: this item is included in nep-ecm
Downloads: http://arxiv.org/pdf/1912.08772 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:1912.08772