More Is Not Always Better: An Experimental Individual-Level Validation of the Randomized Response Technique and the Crosswise Model
Marc Höglinger and Ben Jann
No 18, University of Bern Social Sciences Working Papers from University of Bern, Department of Social Sciences
Abstract:
Social desirability and the fear of sanctions can deter survey respondents from responding truthfully to sensitive questions. Self-reports on norm-breaking behavior such as shoplifting, non-voting, or tax evasion may therefore be subject to considerable misreporting. To mitigate such misreporting, various indirect techniques for asking sensitive questions, such as the randomized response technique (RRT), have been proposed in the literature. In our study, we evaluate the viability of several variants of the RRT, including the recently proposed crosswise-model RRT, by comparing respondents’ self-reports on cheating in dice games to actual cheating behavior, thereby distinguishing between false negatives (underreporting) and false positives (overreporting). The study was implemented as an online survey on Amazon Mechanical Turk (N = 6,505). Our results indicate that the forced-response RRT and the unrelated-question RRT, as implemented in our survey, fail to reduce the level of misreporting compared to conventional direct questioning. For the crosswise-model RRT, we do observe a reduction in false negatives (that is, an increase in the proportion of cheaters who admit having cheated). At the same time, however, there is an increase in false positives (that is, an increase in the proportion of non-cheaters who falsely admit having cheated). Overall, our findings suggest that none of the implemented sensitive question techniques substantially outperforms direct questioning. Furthermore, our study demonstrates the importance of distinguishing between false negatives and false positives when evaluating the validity of sensitive question techniques.
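For readers unfamiliar with the mechanics of these designs, the following sketch shows the textbook prevalence estimators for the forced-response RRT and the crosswise model. The function names and the example parameter values (die-roll probabilities, an unrelated question with a known 25% "yes" rate) are illustrative assumptions, not the settings used in this study.

```python
# Textbook prevalence estimators for two indirect questioning designs.
# lam denotes the observed share of "yes" (or "same") responses; pi is
# the estimated prevalence of the sensitive attribute.

def forced_response_estimate(lam, p_truth, p_forced_yes):
    """Forced-response RRT: with probability p_truth the respondent answers
    the sensitive question truthfully; with probability p_forced_yes a
    randomizer (e.g., a die roll) forces a "yes" regardless of the truth.
    The observed "yes" share satisfies lam = p_truth * pi + p_forced_yes,
    so solving for pi gives the estimator below."""
    return (lam - p_forced_yes) / p_truth

def crosswise_estimate(lam, p_nonsensitive):
    """Crosswise model: the respondent reports only whether the answers to
    the sensitive item and an unrelated item (with known "yes" probability
    p_nonsensitive != 0.5) are the same or different. The observed "same"
    share satisfies lam = pi * p_nonsensitive + (1 - pi) * (1 - p_nonsensitive)."""
    return (lam + p_nonsensitive - 1) / (2 * p_nonsensitive - 1)

# Example: a die roll of 1 forces "yes", 6 forces "no", 2-5 require a
# truthful answer, so p_truth = 4/6 and p_forced_yes = 1/6. If the true
# prevalence were 0.3, we would observe lam = (4/6)*0.3 + 1/6 in expectation,
# and the estimator recovers 0.3 exactly.
observed = (4 / 6) * 0.3 + 1 / 6
print(forced_response_estimate(observed, 4 / 6, 1 / 6))
print(crosswise_estimate(0.6, 0.25))
```

Both estimators are unbiased at the population level only if respondents follow the instructions; the paper's validation design tests precisely that assumption by comparing estimates against known individual behavior.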
Keywords: Sensitive Questions; Online Survey; Amazon Mechanical Turk; Randomized Response Technique; Crosswise Model; Dice Game; Validation (search for similar items in EconPapers)
JEL-codes: C81 C83 (search for similar items in EconPapers)
Pages: 44 pages
Date: 2016-02-15
New Economics Papers: this item is included in nep-cbe, nep-exp and nep-iue
Citations: 1
Downloads:
https://boris.unibe.ch/81526/1/Hoeglinger-Jann-2016-MTurk.pdf working paper (application/pdf)
https://boris.unibe.ch/81526/2/Hoeglinger-Jann-2016-MTurk-Analysis.pdf documentation of analysis (log files) (application/pdf)
Related works:
Journal Article: More is not always better: An experimental individual-level validation of the randomized response technique and the crosswise model (2018) 
Persistent link: https://EconPapers.repec.org/RePEc:bss:wpaper:18
Bibliographic data for series maintained by Ben Jann.