A Note on Dropping Experimental Subjects who Fail a Manipulation Check
Peter M. Aronow, Jonathon Baron and Lauren Pinson
Political Analysis, 2019, vol. 27, issue 4, 572-589
Abstract:
Dropping subjects based on the results of a manipulation check following treatment assignment is common practice across the social sciences, presumably to restrict estimates to the subpopulation of subjects who understood the experimental prompt. We show that this practice can lead to serious bias and argue for a focus on what is revealed without discarding subjects. Generalizing results developed in Zhang and Rubin (2003) and Lee (2009) to the case of multiple treatments, we provide sharp bounds for potential outcomes among those who would pass a manipulation check regardless of treatment assignment. These bounds may have large or infinite width, implying that this inferential target is often out of reach. As an application, we replicate Press, Sagan, and Valentino (2013) with a design that does not drop subjects who failed the manipulation check, and we show that the findings are likely stronger than originally reported. We conclude with suggestions for practice, namely alterations to the experimental design.
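The bounds the abstract refers to generalize Lee (2009)-style trimming. For intuition only, here is a minimal sketch of the single-binary-treatment case (not the paper's multi-treatment generalization): under a monotonicity assumption that treatment does not cause anyone to fail the check who would otherwise pass, the treated passers are trimmed by the excess passing rate, yielding lower and upper bounds on the effect among "always-passers". All function and variable names are illustrative, not from the paper.

```python
import numpy as np

def lee_trimming_bounds(y_treat, y_ctrl, pass_treat, pass_ctrl):
    """Illustrative Lee (2009)-style sharp bounds on the average treatment
    effect among subjects who would pass the manipulation check under
    either assignment. Assumes monotonicity: passing rate is (weakly)
    higher under treatment."""
    p1 = pass_treat.mean()  # passing rate in the treatment arm
    p0 = pass_ctrl.mean()   # passing rate in the control arm
    if p1 < p0:
        raise ValueError("monotonicity assumed: expected p1 >= p0")
    # Share of treated passers who are presumed not to be always-passers.
    q = (p1 - p0) / p1
    yt = np.sort(y_treat[pass_treat])
    k = int(np.floor(q * len(yt)))  # number of observations to trim
    y0_mean = y_ctrl[pass_ctrl].mean()
    # Lower bound: trim the k largest treated outcomes; upper: the k smallest.
    lower = yt[: len(yt) - k].mean() - y0_mean
    upper = yt[k:].mean() - y0_mean
    return lower, upper
```

When passing rates are equal across arms (q = 0), the bounds collapse to a point estimate; as the gap in passing rates grows, the trimming fraction grows and the bounds widen, which is the mechanism behind the paper's observation that the bounds may have large or even infinite width.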
Date: 2019
Downloads: https://www.cambridge.org/core/product/identifier/ ... type/journal_article (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:cup:polals:v:27:y:2019:i:04:p:572-589_00
More articles in Political Analysis from Cambridge University Press.