Selection in Surveys: Using Randomized Incentives to Detect and Account for Nonresponse Bias
Deniz Dutz,
Ingrid Huitfeldt,
Santiago Lacouture,
Magne Mogstad,
Alexander Torgovitsky and
Winnie van Dijk
Affiliations:
Deniz Dutz: University of Chicago
Ingrid Huitfeldt: BI Norwegian Business School, and Statistics Norway
Santiago Lacouture: University of Chicago
Magne Mogstad: University of Chicago, Statistics Norway, and NBER
Alexander Torgovitsky: University of Chicago
Winnie van Dijk: Yale University
No 2451, Cowles Foundation Discussion Papers from Cowles Foundation for Research in Economics, Yale University
Abstract:
We show how to use randomized participation incentives to test and account for nonresponse bias in surveys. We first use data from a survey about labor market conditions, linked to full-population administrative data, to provide evidence of large differences in labor market outcomes between survey participants and nonparticipants, differences which would not be observable to an analyst who only has access to the survey data. These differences persist even after correcting for observable characteristics. We then use the randomized incentives in our survey to directly test for nonresponse bias, and find evidence of substantial bias. Next, we apply a range of existing methods that account for nonresponse bias and find they produce bounds (or point estimates) that are either wide or far from the ground truth. We investigate the failure of these methods by taking a closer look at the determinants of participation, finding that the composition of participants changes in opposite directions in response to incentives and reminder emails. We develop a model of participation that allows for two dimensions of unobserved heterogeneity in the participation decision. Applying the model to our data produces bounds (or point estimates) that are narrower and closer to the ground truth than the other methods. Our results highlight the benefits of including randomized participation incentives in surveys. Both the testing procedure and the methods for bias adjustment may be attractive tools for researchers who are able to embed randomized incentives into their survey.
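The testing idea in the abstract can be illustrated with a stylized simulation (all numbers and the selection mechanism below are hypothetical, not taken from the paper): if participation is unrelated to the outcome, respondents recruited under different randomized incentive levels are draws from the same outcome distribution, so their means should coincide up to sampling noise. When participation does depend on the outcome, a higher incentive changes who responds, and respondent means differ across arms.

```python
import random
import statistics

random.seed(0)

# Hypothetical population outcome Y (e.g., a labor market outcome).
N = 100_000
population = [random.gauss(0, 1) for _ in range(N)]

def participates(y, high_incentive):
    # Participation depends on both the randomized incentive and on Y itself;
    # the dependence on Y is the selection that causes nonresponse bias.
    base = 0.3 + (0.2 if high_incentive else 0.0)
    p = base + 0.15 * (y > 0)  # people with higher Y respond more often
    return random.random() < p

# Randomize incentive arms, then record outcomes among respondents only.
low_arm, high_arm = [], []
for y in population:
    if random.random() < 0.5:
        if participates(y, high_incentive=False):
            low_arm.append(y)
    else:
        if participates(y, high_incentive=True):
            high_arm.append(y)

mean_low = statistics.mean(low_arm)
mean_high = statistics.mean(high_arm)
# Under no selection on Y, the two respondent means would coincide up to
# noise; here the high-incentive arm pulls in more low-Y respondents,
# so its mean is diluted toward the population mean.
print(round(mean_low, 3), round(mean_high, 3))
```

In this sketch, a significant gap between the two respondent means rejects the hypothesis that participation is ignorable; the paper's actual testing procedure and bias-adjustment model are more general than this two-arm comparison.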
Pages: 55 pages
Date: 2025-07-14
Downloads:
https://cowles.yale.edu/sites/default/files/2025-08/d2451.pdf (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:cwl:cwldpp:2451
Ordering information: This working paper can be ordered from
Cowles Foundation, Yale University, Box 208281, New Haven, CT 06520-8281 USA
More papers in Cowles Foundation Discussion Papers from Cowles Foundation for Research in Economics, Yale University, Box 208281, New Haven, CT 06520-8281 USA. Contact information at EDIRC.
Bibliographic data for series maintained by Brittany Ladd.