Avoiding Post-Treatment Bias in Audit Experiments
Alexander Coppock
Journal of Experimental Political Science, 2019, vol. 6, issue 1, 1-4
Abstract:
Audit experiments are used to measure discrimination in a large number of domains (Employment: Bertrand et al. (2004); Legislator responsiveness: Butler et al. (2011); Housing: Fang et al. (2018)). All audit studies estimate the average difference in response rates depending on randomly varied characteristics of a requester (such as race or gender). Scholars conducting audit experiments often seek to extend their analyses beyond the effect on response to the effects on the quality of the response. Because response is itself a consequence of treatment, answering these important questions well is complicated by post-treatment bias (Montgomery et al., 2018). In this note, I consider a common form of post-treatment bias that occurs in audit experiments.
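The mechanism the abstract describes can be illustrated with a minimal simulation sketch (the data-generating process, variable names, and parameter values below are my own assumptions, not taken from the article): if the randomized requester trait lowers the response rate, then comparing reply quality only among responders conditions on a post-treatment variable, and the naive comparison is biased even when the true effect on quality is exactly zero.

```python
import random

random.seed(0)

N = 100_000
treated_quality = []  # reply quality among responders, treated requester trait
control_quality = []  # reply quality among responders, control requester trait

for _ in range(N):
    # Unobserved office type: "friendly" offices both respond more often
    # and write higher-quality replies (hypothetical confound).
    friendly = random.random() < 0.5
    z = random.random() < 0.5  # randomized requester trait (e.g., name cue)

    # Response rate depends on office type; the treatment trait lowers it
    # (discrimination in whether a reply is sent at all).
    p_respond = 0.8 if friendly else 0.4
    if z:
        p_respond -= 0.2
    responded = random.random() < p_respond

    # Reply quality depends only on office type, so the true treatment
    # effect on quality is 0 by construction.
    quality = 1.0 if friendly else 0.0
    if responded:
        (treated_quality if z else control_quality).append(quality)

# Naive "effect on quality" estimated among responders only: biased upward,
# because treated responders are disproportionately friendly offices.
est = (sum(treated_quality) / len(treated_quality)
       - sum(control_quality) / len(control_quality))
print(round(est, 3))  # positive, despite a true quality effect of zero
```

With these assumed parameters the responder pools differ in composition (roughly 75% vs. 67% friendly offices), so the conditioned-on-response comparison recovers that compositional gap rather than any causal effect of the requester trait on quality.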
Date: 2019
Persistent link: https://EconPapers.repec.org/RePEc:cup:jexpos:v:6:y:2019:i:01:p:1-4_00