Crowdsourcing for quantifying transcripts: An exploratory study
Tarek Azzam and Elena Harman
Evaluation and Program Planning, 2016, vol. 54, issue C, 63-73
Abstract:
This exploratory study attempts to demonstrate the potential utility of crowdsourcing as a supplemental technique for quantifying transcribed interviews. Crowdsourcing harnesses the abilities of many people to complete a specific task or set of tasks. In this study, multiple samples of crowdsourced individuals were asked to rate, and select supporting quotes from, two different transcripts. The findings indicate that the different crowdsourced samples produced nearly identical ratings of the transcripts and consistently selected the same supporting text. These findings suggest that crowdsourcing, with further development, could serve as a mixed-methods tool offering a supplemental perspective on transcribed interviews.
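As a rough illustration of the stability check described above, the sketch below shows one way the agreement between two independent crowdsourced samples might be quantified: correlating the samples' mean ratings per transcript item and measuring the overlap of the supporting quotes each sample selected. This is not the authors' analysis code; the rating values, quote identifiers, and function names are invented for illustration only.

# Hypothetical sketch (not the authors' method): agreement between two
# crowdsourced samples on transcript ratings and quote selection.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length rating vectors."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Mean rating per transcript item, aggregated within each sample (invented data).
sample_a = [4.1, 2.8, 3.5, 4.6, 1.9]
sample_b = [4.0, 3.0, 3.4, 4.7, 2.1]

# Supporting quotes (by hypothetical line ID) selected by a majority of each sample.
quotes_a = {"T1-L12", "T1-L34", "T2-L08"}
quotes_b = {"T1-L12", "T1-L34", "T2-L11"}

rating_agreement = pearson(sample_a, sample_b)
quote_overlap = len(quotes_a & quotes_b) / len(quotes_a | quotes_b)  # Jaccard index

print(f"Rating correlation across samples: {rating_agreement:.2f}")
print(f"Quote-selection overlap (Jaccard): {quote_overlap:.2f}")

Under this toy setup, a correlation near 1 and a high Jaccard overlap would correspond to the kind of cross-sample stability the abstract reports.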
Keywords: Crowdsourcing; Qualitative analysis; Stability; Transcript coding; Transcript rating; Mechanical Turk; MTurk
Date: 2016
Downloads: http://www.sciencedirect.com/science/article/pii/S0149718915001044 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:epplan:v:54:y:2016:i:c:p:63-73
DOI: 10.1016/j.evalprogplan.2015.09.002