Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons
Thomas D. Cook,
William R. Shadish and
Vivian C. Wong
Thomas D. Cook: Professor of Sociology, Northwestern University
William R. Shadish: Professor of Psychology, University of California, Merced
Vivian C. Wong: Northwestern University
Journal of Policy Analysis and Management, 2008, vol. 27, issue 4, 724-750
Abstract:
This paper analyzes 12 recent within-study comparisons contrasting causal estimates from a randomized experiment with those from an observational study sharing the same treatment group. The aim is to test whether different causal estimates result when a counterfactual group is formed, either with or without random assignment, and when statistical adjustments for selection are made in the group from which random assignment is absent. We identify three studies comparing experiments and regression-discontinuity (RD) studies. They produce quite comparable causal estimates at points around the RD cutoff. We identify three other studies where the quasi-experiment involves careful intact group matching on the pretest. Despite the logical possibility of hidden bias in this instance, all three cases also reproduce their experimental estimates, especially if the match is geographically local. We then identify two studies where the treatment and nonrandomized comparison groups manifestly differ at pretest but where the selection process into treatment is completely or very plausibly known. Here too, experimental results are recreated. Two of the remaining studies result in corresponding experimental and nonexperimental results under some circumstances but not others, while two others produce different experimental and nonexperimental estimates, though in each case the observational study was poorly designed and analyzed. Such evidence is more promising than what was achieved in past within-study comparisons, most involving job training. Reasons for this difference are discussed. © 2008 by the Association for Public Policy Analysis and Management.
Date: 2008
Downloads: http://hdl.handle.net/10.1002/pam.20375 (full text; subscription required, text/html)
Persistent link: https://EconPapers.repec.org/RePEc:wly:jpamgt:v:27:y:2008:i:4:p:724-750
DOI: 10.1002/pam.20375
More articles in Journal of Policy Analysis and Management from John Wiley & Sons, Ltd.