What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?

Vivian C. Wong, Peter M. Steiner and Kylie L. Anglin

Evaluation Review, 2018, vol. 42, issue 2, 147-175

Abstract: Given the widespread use of nonexperimental (NE) methods for assessing program impacts, there is a strong need to know whether NE approaches yield causally valid results in field settings. In within-study comparison (WSC) designs, the researcher compares treatment effects from an NE with those obtained from a randomized experiment that shares the same target population. The goal is to assess whether the stringent assumptions required for NE methods are likely to be met in practice. This essay provides an overview of recent efforts to empirically evaluate NE method performance in field settings. We discuss a brief history of the design, highlighting methodological innovations along the way. We also describe papers that are included in this two-volume special issue on WSC approaches and suggest future areas for consideration in the design, implementation, and analysis of WSCs.

Keywords: within-study comparison; causal inference; program evaluation; nonexperiments
Date: 2018

Downloads: https://journals.sagepub.com/doi/10.1177/0193841X18776870 (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:sae:evarev:v:42:y:2018:i:2:p:147-175

DOI: 10.1177/0193841X18776870


Published in Evaluation Review; bibliographic data for the series maintained by SAGE Publications.

Handle: RePEc:sae:evarev:v:42:y:2018:i:2:p:147-175