Have Decreases in Air Pollution Reduced Mortality Risks in the United States?
Louis Anthony Cox (Cox Associates and University of Colorado)
Chapter 17 in Quantitative Risk Analysis of Air Pollution Health Effects, International Series in Operations Research & Management Science, Springer, 2021, pp. 475-505
Abstract:
An aim of applied science in general and of epidemiology in particular is to draw sound causal inferences from observations. For public health policy analysts and epidemiologists, this includes drawing inferences about whether historical changes in exposures have actually caused the consequences predicted for, or attributed to, them. The example of the Dublin coal-burning ban mentioned in Chaps. 15 and 16 suggests that accurate evaluation of the effects of interventions is not always easy, even when data are plentiful. Students are taught to develop hypotheses about causal relations, devise testable implications of these causal hypotheses, carry out the tests, and objectively report and learn from the results to refute or refine the initial hypotheses. For at least the past two decades, however, epidemiologists and commentators on scientific methods and results have raised concerns that current practices too often lead to false-positive findings and to mistaken attributions of causality to mere statistical associations (Lehrer 2012; Sarewitz 2012; Ottenbacher 1998; Imberger et al. 2011). Formal training in epidemiology may be a mixed blessing in addressing these concerns. As discussed in Appendix C of Chap. 9, concepts such as "attributable risk," "population attributable fraction," "burden of disease," "etiologic fraction," and even "probability of causation" are solidly based on relative risks and related measures of statistical association; they do not necessarily reveal anything about predictive, manipulative, structural, or explanatory (mechanistic) causation (Greenland and Brumback 2002). Limitations of human judgment and inference, such as confirmation bias (finding what we expect to find), motivated reasoning (concluding what it pays us to conclude), and overconfidence (mistakenly believing that our own beliefs are more accurate than they really are), do not spare health effects investigators. Experts in the health effects of particular compounds are not always also experts in causal analysis, and published causal conclusions are often unwarranted, with a pronounced bias toward finding "significant" effects where none actually exists (false positives) (Lehrer 2012; Sarewitz 2012; Ioannidis 2005; The Economist 2013).
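The abstract's point that attributable-risk measures are functions of statistical association alone can be made concrete with a minimal sketch. The Python snippet below (illustrative only; the function name and numbers are hypothetical, not from the chapter) computes Levin's population attributable fraction from a relative risk and an exposure prevalence; nothing in the calculation checks whether the underlying association is causal rather than confounded.

    def population_attributable_fraction(rr: float, p_exposed: float) -> float:
        # Levin's formula: PAF = p_e*(RR - 1) / (1 + p_e*(RR - 1)).
        # Inputs are pure measures of association and prevalence.
        excess = p_exposed * (rr - 1.0)
        return excess / (1.0 + excess)

    # Hypothetical illustrative values, not taken from the chapter:
    rr = 1.1         # relative risk of mortality associated with exposure
    p_exposed = 0.4  # fraction of the population exposed
    print(f"PAF = {population_attributable_fraction(rr, p_exposed):.3f}")  # ~0.038
    # The same PAF is obtained whether the association is causal or spurious.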
Date: 2021
Persistent link: https://EconPapers.repec.org/RePEc:spr:isochp:978-3-030-57358-4_17
Ordering information: This item can be ordered from http://www.springer.com/9783030573584
DOI: 10.1007/978-3-030-57358-4_17