Towards program theory validation: Crowdsourcing the qualitative analysis of participant experiences
Elena Harman and Tarek Azzam
Evaluation and Program Planning, 2018, vol. 66, issue C, 183-194
Abstract:
This exploratory study examines a novel tool for validating program theory through crowdsourced qualitative analysis. It combines a quantitative pattern matching framework traditionally used in theory-driven evaluation with crowdsourcing to analyze qualitative interview data. A sample of crowdsourced participants is asked to read an interview transcript, identify whether program theory components (Activities and Outcomes) are discussed, and highlight the most relevant passage for each component. The findings indicate that using crowdsourcing to analyze qualitative data can differentiate between program theory components that are supported by a participant’s experience and those that are not. This approach expands the range of tools available to validate program theory using qualitative data, thus strengthening the theory-driven approach.
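As an illustration of the general idea (not the authors' published instrument or analysis code), the sketch below shows one way crowdsourced codings of a transcript could be aggregated into per-component support scores; the component names, data structures, and threshold are hypothetical.

from collections import defaultdict

def aggregate_component_support(responses, threshold=0.6):
    """Aggregate crowdsourced transcript codings into support scores.

    responses: one dict per crowdsourced coder, mapping a program theory
               component (an Activity or Outcome) to True/False for
               "discussed in this transcript".
    threshold: assumed minimum proportion of coders marking a component
               as discussed for it to count as supported.
    Returns {component: (proportion_discussed, supported?)}.
    """
    counts = defaultdict(int)
    for coding in responses:
        for component, discussed in coding.items():
            counts[component] += int(discussed)
    n = len(responses)
    return {c: (counts[c] / n, counts[c] / n >= threshold) for c in counts}

# Example with made-up codings from three coders.
codings = [
    {"Activity: mentoring sessions": True, "Outcome: improved grades": False},
    {"Activity: mentoring sessions": True, "Outcome: improved grades": True},
    {"Activity: mentoring sessions": True, "Outcome: improved grades": False},
]
print(aggregate_component_support(codings))

Under these assumptions, a component discussed by most coders would be treated as supported by the participant's experience, mirroring the pattern matching logic described in the abstract.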
Keywords: Crowdsourcing; Theory-driven evaluation; Qualitative analysis; Stability; Transcript coding; Transcript rating; Mechanical Turk; MTurk
Date: 2018
Full text (ScienceDirect subscribers only): http://www.sciencedirect.com/science/article/pii/S0149718917301970
Persistent link: https://EconPapers.repec.org/RePEc:eee:epplan:v:66:y:2018:i:c:p:183-194
DOI: 10.1016/j.evalprogplan.2017.08.008