Sifting Statistical Significance From the Artifact of Regression-Discontinuity Design

T. Stanley and Ann Robinson
Additional contact information
Ann Robinson: University of Arkansas at Little Rock

Evaluation Review, 1990, vol. 14, issue 2, 166-181

Abstract: When the covariate of an evaluation study using regression analysis is fallibly measured, the statistical test of program effectiveness is biased. In programs where the target population is "below average," the bias tends to suppress the beneficial effects of the program, while raising them in programs designed for those "above average." The magnitude of this bias is calculated, and a correction method is derived and illustrated. In many cases, these biases will make practical differences in program assessment, lending further support to the general distrust of nonexperimental methods. Nonetheless, our approach improves the reliability of nonexperimental methods by correcting one potential source of their bias.
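The bias the abstract describes can be illustrated with a small simulation. This sketch is not the paper's own derivation: the parameter values, the targeting rule, and the reliability-based moment correction (a standard errors-in-variables adjustment assuming the measurement-error variance is known) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
tau, beta, sigma_u = 0.5, 1.0, 0.7   # true program effect, covariate slope, error SD (assumed)

x = rng.normal(0.0, 1.0, n)            # true pretest score
d = (x < 0).astype(float)              # program targets the "below average"
y = beta * x + tau * d + rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, sigma_u, n)    # fallibly measured covariate

# Naive OLS of y on [1, w, d]: the coefficient on d understates tau,
# because measurement error in w under-adjusts for the group difference in x.
X = np.column_stack([np.ones(n), w, d])
naive = np.linalg.lstsq(X, y, rcond=None)[0]

# Errors-in-variables correction with known sigma_u^2: subtract the
# noise variance from the w-w entry of the moment matrix X'X/n.
XtX = X.T @ X / n
XtX_c = XtX.copy()
XtX_c[1, 1] -= sigma_u ** 2
corrected = np.linalg.solve(XtX_c, X.T @ y / n)
```

With these values the naive estimate of the effect is not merely attenuated but turns negative — a beneficial program for a below-average target population appears harmful — while the moment-corrected estimate recovers a value near the true effect.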

Date: 1990

Downloads: (external link)
https://journals.sagepub.com/doi/10.1177/0193841X9001400204 (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:sae:evarev:v:14:y:1990:i:2:p:166-181

DOI: 10.1177/0193841X9001400204


More articles in Evaluation Review
Bibliographic data for series maintained by SAGE Publications.

 
Page updated 2025-04-07
Handle: RePEc:sae:evarev:v:14:y:1990:i:2:p:166-181