Conducting Clinical Research Using Crowdsourced Convenience Samples
Jesse Chandler and Danielle Shapiro
Mathematica Policy Research Reports, Mathematica Policy Research
Abstract:
This article summarizes what is known about Amazon’s Mechanical Turk sample composition and data quality. It addresses methodological issues with using Mechanical Turk and suggests concrete steps to avoid these issues or minimize their impact.
Keywords: Amazon Mechanical Turk; Internet research methods; convenience sample; crowdsourcing
Downloads: http://www.annualreviews.org/doi/full/10.1146/annurev-clinpsy-021815-093623 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:mpr:mprres:c9ae2ea1c9b249deadb0c7c0d910fa42
More papers in Mathematica Policy Research Reports. Mathematica Policy Research, P.O. Box 2393, Princeton, NJ 08543-2393, Attn: Communications.