Comparing Performance of Methods to Deal With Differential Attrition in Randomized Experimental Evaluations
Kaitlin Anderson, Gema Zamarro, Jennifer Steele and Trey Miller
Evaluation Review, 2021, vol. 45, issue 1-2, 70-104
Abstract:
Background: In randomized controlled trials, attrition rates often differ by treatment status, jeopardizing causal inference. Inverse probability weighting methods and estimation of treatment effect bounds have been used to adjust for this bias.
Objectives: We compare the performance of various methods within two samples, both generated through lottery-based randomization: one with considerable differential attrition and an augmented dataset with less problematic attrition.
Research Design: We assess the performance of various correction methods within the dataset with problematic attrition. In addition, we conduct simulation analyses.
Results: Within the more problematic dataset, we find that the correction methods often performed poorly. Simulation analyses indicate that deviations from the underlying assumptions of the bounding approaches damage the performance of the estimated bounds.
Conclusions: We recommend verifying the underlying assumptions of attrition correction methods whenever possible and, when verification is not possible, using these methods with caution.
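To illustrate the kind of correction the abstract refers to, here is a minimal sketch of inverse probability weighting (IPW) under differential attrition, using simulated data. All variable names and the data-generating process are illustrative assumptions, not taken from the article: attrition is made to depend on a baseline covariate and on treatment status, a logistic regression models the probability of remaining in the sample, and observed outcomes are reweighted by the inverse of that probability.

```python
# Hypothetical IPW sketch for differential attrition (simulated data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
treat = rng.integers(0, 2, n)             # lottery-style random assignment
x = rng.normal(size=n)                    # baseline covariate
y = 1.0 * treat + x + rng.normal(size=n)  # true treatment effect = 1.0

# Differential attrition: response depends on x and on treatment status,
# so simple comparisons of observed units are biased.
p_respond = 1 / (1 + np.exp(-(0.5 + x + 0.8 * treat)))
observed = rng.random(n) < p_respond

# Naive estimate: difference in means among observed units only.
naive = y[observed & (treat == 1)].mean() - y[observed & (treat == 0)].mean()

# IPW: model the response probability, then weight each observed outcome
# by the inverse of its estimated probability of being observed.
features = np.column_stack([x, treat])
model = LogisticRegression().fit(features, observed.astype(int))
w = 1 / model.predict_proba(features)[:, 1]
obs_t, obs_c = observed & (treat == 1), observed & (treat == 0)
ipw = (np.average(y[obs_t], weights=w[obs_t])
       - np.average(y[obs_c], weights=w[obs_c]))

print(f"naive: {naive:.2f}, IPW: {ipw:.2f}")  # IPW typically lands nearer 1.0
```

The correction works here because attrition depends only on variables included in the response model; as the article's simulations suggest, when the assumptions behind such corrections fail, the adjusted estimates (and estimated bounds) can themselves perform poorly.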
Date: 2021
Downloads: https://journals.sagepub.com/doi/10.1177/0193841X211034363 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:sae:evarev:v:45:y:2021:i:1-2:p:70-104
DOI: 10.1177/0193841X211034363