How Dropping Subjects Who Failed Manipulation Checks Can Bias Your Results: An Illustrative Case
Simon Varaine
Journal of Experimental Political Science, 2023, vol. 10, issue 2, 299-305
Abstract:
Manipulation checks are postexperimental measures widely used to verify that subjects understood the treatment. Some researchers drop subjects who failed manipulation checks in order to limit the analyses to attentive subjects. This short report offers a novel illustration of how this practice may bias experimental results: in the present case, by confirming a hypothesis that is likely false. In a survey experiment, subjects were primed with a fictional news story depicting an economic decline versus prosperity. Subjects were then asked whether the news story depicted an economic decline or prosperity. Results indicate that responses to this manipulation check captured subjects’ preexisting beliefs about the economic situation. As a consequence, dropping subjects who failed the manipulation check mixes the effects of preexisting and induced beliefs, increasing the risk of false positive findings. Researchers should avoid dropping subjects based on posttreatment measures and instead rely on pretreatment measures of attentiveness.
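The selection mechanism the abstract describes can be illustrated with a small simulation. This is a hedged sketch, not the paper's data or analysis: all parameters (share of attentive subjects, belief distribution, effect sizes) are invented for illustration. The key assumption, taken from the abstract, is that inattentive subjects answer the manipulation check from their preexisting belief rather than from the story, and that the outcome is driven by preexisting belief alone, so the true treatment effect is zero.

```python
import random

random.seed(0)

N = 20_000          # simulated subjects (illustrative)
P_ATTENTIVE = 0.6   # assumed share who actually read the story

full_treat, full_ctrl = [], []   # outcomes, full sample
pass_treat, pass_ctrl = [], []   # outcomes, passers only

for _ in range(N):
    belief = random.random() < 0.5     # preexisting belief: economy is declining
    treat = random.random() < 0.5      # True = "decline" story, False = "prosperity"
    attentive = random.random() < P_ATTENTIVE
    # Manipulation check: attentive subjects report the story's content;
    # inattentive subjects fall back on their preexisting belief.
    answers_decline = treat if attentive else belief
    passed = (answers_decline == treat)
    # Outcome depends only on preexisting belief: true treatment effect is zero.
    outcome = 1.0 if belief else 0.0
    (full_treat if treat else full_ctrl).append(outcome)
    if passed:
        (pass_treat if treat else pass_ctrl).append(outcome)

def mean(xs):
    return sum(xs) / len(xs)

effect_full = mean(full_treat) - mean(full_ctrl)
effect_passed = mean(pass_treat) - mean(pass_ctrl)
print(f"full sample effect:  {effect_full:+.3f}")   # approximately zero
print(f"passers-only effect: {effect_passed:+.3f}")  # spuriously positive
```

In the "decline" arm, inattentive subjects pass only if they already believed the economy was declining; in the "prosperity" arm, only if they believed the opposite. Conditioning on passing therefore leaves the two arms with different distributions of preexisting beliefs, producing a nonzero "treatment effect" among passers even though the treatment does nothing.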
Date: 2023
Citations: View citations in EconPapers (1)
Downloads: https://www.cambridge.org/core/product/identifier/ ... type/journal_article, link to article abstract page (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:cup:jexpos:v:10:y:2023:i:2:p:299-305_12
More articles in Journal of Experimental Political Science from Cambridge University Press, UPH, Shaftesbury Road, Cambridge CB2 8BS, UK.
Bibliographic data for series maintained by Kirk Stebbing.