Evaluation Analytics for Public Health: Has Reducing Air Pollution Reduced Death Rates in the United States?
Louis Anthony Cox, Douglas A. Popken and Richard X. Sun (Cox Associates)
Chapter 10 in Causal Analytics for Applied Risk Analysis, 2018, pp. 417-442, Springer
Abstract:
An aim of applied science in general, and of epidemiology in particular, is to draw sound causal inferences from observations. For public health policy analysts and epidemiologists, this includes drawing inferences about whether historical changes in exposures have actually caused the consequences predicted for, or attributed to, them. The example of the Dublin coal-burning ban introduced in Chap. 1 suggests that accurate evaluation of the effect of interventions is not always easy, even when data are plentiful. Students are taught to develop hypotheses about causal relations, devise testable implications of these causal hypotheses, carry out the tests, and objectively report and learn from the results to refute or refine the initial hypotheses. For at least the past two decades, however, epidemiologists and commentators on scientific methods and results have raised concerns that current practices too often lead to false-positive findings and to mistaken attributions of causality to mere statistical associations (Lehrer 2012; Sarewitz 2012; Ottenbacher 1998; Imberger et al. 2011). Formal training in epidemiology may be a mixed blessing in addressing these concerns. As discussed in Chap. 2, concepts such as "attributable risk," "population attributable fraction," "burden of disease," "etiologic fraction," and even "probability of causation" are solidly based on relative risks and related measures of statistical association; they do not necessarily reveal anything about predictive, manipulative, structural, or explanatory (mechanistic) causation (e.g., Cox 2013; Greenland and Brumback 2002). Limitations of human judgment and inference, such as confirmation bias (finding what we expect to find), motivated reasoning (concluding what it pays us to conclude), and overconfidence (mistakenly believing that our own beliefs are more accurate than they really are), do not spare health effects investigators.
Experts in the health effects of particular compounds are not always also experts in causal analysis, and published causal conclusions are often unwarranted, as reviewed in Chap. 2, with a pronounced bias toward finding "significant" effects where none actually exist (false positives) (Lehrer 2012; Sarewitz 2012; Ioannidis 2005; The Economist 2013).
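The abstract's point that attributable-fraction measures rest purely on association can be made concrete with a minimal sketch. This computes the population attributable fraction from an exposure prevalence and a relative risk via Levin's formula; the function name and the numbers are illustrative assumptions, not taken from the chapter.

```python
# Sketch: population attributable fraction (PAF) computed entirely from
# association measures (prevalence and relative risk). Nothing in this
# calculation tests or establishes causation, which is the abstract's point.
# The prevalence and relative-risk values below are hypothetical.

def population_attributable_fraction(prevalence: float, relative_risk: float) -> float:
    """Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1))."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical example: 20% of the population exposed, relative risk 1.5.
paf = population_attributable_fraction(prevalence=0.2, relative_risk=1.5)
print(round(paf, 3))  # 0.1 / 1.1 ≈ 0.091
```

The same PAF value arises whether the association is causal or due entirely to confounding, which is why the chapter treats such measures as insufficient evidence for manipulative or mechanistic causation.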
Date: 2018
Persistent link: https://EconPapers.repec.org/RePEc:spr:isochp:978-3-319-78242-3_10
Ordering information: This item can be ordered from
http://www.springer.com/9783319782423
DOI: 10.1007/978-3-319-78242-3_10
More chapters in International Series in Operations Research & Management Science from Springer