EconPapers    

Effects of Non-Differential Exposure Misclassification on False Conclusions in Hypothesis-Generating Studies

Igor Burstyn, Yunwen Yang and A. Robert Schnatter
Additional contact information
Igor Burstyn: Department of Environmental and Occupational Health, School of Public Health, Drexel University, Nesbitt Hall, 3215 Market Street, PA 19104, USA
Yunwen Yang: Department of Epidemiology and Biostatistics, School of Public Health, Drexel University, Nesbitt Hall, 3215 Market Street, PA 19104, USA
A. Robert Schnatter: Occupational and Public Health Division, ExxonMobil Biomedical Sciences Inc., 1545 U.S. Highway 22 East, Annandale, NJ 08801, USA

IJERPH, 2014, vol. 11, issue 10, 1-16

Abstract: Despite theoretical arguments that obviate the need for hypothesis-generating studies, they live on in epidemiological practice. Cole asserted that “… there is boundless number of hypotheses that could be generated, nearly all of them wrong” and urged us to focus on evaluating “credibility of hypothesis”. Adopting a Bayesian approach, we put this elegant logic into quantitative terms at the study planning stage for studies where the prior belief in the null hypothesis is high (i.e., “hypothesis-generating” studies). We consider not only type I and II errors (as is customary) but also the probabilities of false positive and negative results, taking into account typical imperfections in the data. We concentrate on a common source of imperfection in the data: non-differential misclassification of a binary exposure classifier. In the context of an unmatched case-control study, we demonstrate—both theoretically and via simulations—that although non-differential exposure misclassification is expected to attenuate real effect estimates, leading to the loss of ability to detect true effects, there is also a concurrent increase in false positives. Unfortunately, most investigators interpret findings from such work as being biased towards the null rather than considering that they are no less likely to be false signals. The likelihood of false positives dwarfed the false negative rate across a wide range of the settings studied. We suggest that rather than investing energy in assessing the credibility of dubious hypotheses, applied disciplines such as epidemiology should focus on understanding the consequences of pursuing specific hypotheses, while accounting for the probability that an observed “statistically significant” association may be qualitatively spurious.
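The mechanism described in the abstract can be illustrated with a minimal Monte Carlo sketch. This is not the authors' simulation code; the sample sizes, true odds ratio, sensitivity/specificity values, and the 95% prior on the null are all illustrative assumptions. It simulates unmatched case-control studies with a misclassified binary exposure, estimates the rejection rate of a 2x2 chi-square test, and then applies Bayes' rule to show how attenuated power raises the probability that a "significant" result is a false signal.

```python
import numpy as np

rng = np.random.default_rng(1)
CHI2_CRIT = 3.841  # chi-square critical value, df = 1, alpha = 0.05


def misclassify(true_exposed, sens, spec):
    """Apply NON-differential misclassification: the same sensitivity
    and specificity regardless of case/control status."""
    u = rng.random(true_exposed.size)
    return np.where(true_exposed, u < sens, u > spec)


def rejection_rate(true_or, prev, sens, spec,
                   n_cases=500, n_controls=500, n_sims=1000):
    """Fraction of simulated unmatched case-control studies whose
    2x2 chi-square test is 'significant' at alpha = 0.05."""
    odds = true_or * prev / (1 - prev)      # exposure odds among cases
    prev_case = odds / (1 + odds)
    hits = 0
    for _ in range(n_sims):
        obs_case = misclassify(rng.random(n_cases) < prev_case, sens, spec)
        obs_ctrl = misclassify(rng.random(n_controls) < prev, sens, spec)
        a, b = obs_case.sum(), n_cases - obs_case.sum()
        c, d = obs_ctrl.sum(), n_controls - obs_ctrl.sum()
        n = n_cases + n_controls
        chi2 = n * (a * d - b * c) ** 2 / (
            (a + b) * (c + d) * (a + c) * (b + d))
        hits += chi2 > CHI2_CRIT
    return hits / n_sims


def prob_false_positive(power, prior_null=0.95, alpha=0.05):
    """P(null is true | test was significant), by Bayes' rule,
    when prior_null of screened hypotheses are truly null."""
    return prior_null * alpha / (prior_null * alpha + (1 - prior_null) * power)


# Power with a perfect vs. an imperfect exposure classifier (true OR = 2,
# 20% exposure prevalence in controls); misclassification attenuates power.
power_perfect = rejection_rate(2.0, 0.2, sens=1.0, spec=1.0)
power_noisy = rejection_rate(2.0, 0.2, sens=0.7, spec=0.8)
```

Under these assumed settings, `power_noisy` falls well below `power_perfect`, while the type I error rate stays near the nominal 0.05; with a high prior on the null, `prob_false_positive(power_noisy)` therefore exceeds `prob_false_positive(power_perfect)`, which is the abstract's point that attenuation and false signals go hand in hand.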

Keywords: false positive; false negative; Monte-Carlo simulation; study design; case-control studies; measurement error; exposure misclassification; Bayesian; hypothesis-testing; power
JEL-codes: I I1 I3 Q Q5
Date: 2014
References: View complete reference list from CitEc
Citations: View citations in EconPapers (2)

Downloads: (external link)
https://www.mdpi.com/1660-4601/11/10/10951/pdf (application/pdf)
https://www.mdpi.com/1660-4601/11/10/10951/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:gam:jijerp:v:11:y:2014:i:10:p:10951-10966:d:41433

IJERPH is currently edited by Ms. Jenna Liu

More articles in IJERPH from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-03-19
Handle: RePEc:gam:jijerp:v:11:y:2014:i:10:p:10951-10966:d:41433