Measuring Trust in Medical Researchers: Adding Insights from Cognitive Interviews to Examine Agree-Disagree and Construct-Specific Survey Questions
Dykema Jennifer,
Garbarski Dana,
Wall Ian F. and
Edwards Dorothy Farrar
Additional contact information
Dykema Jennifer: University of Wisconsin Survey Center (UWSC), 4308 Sterling Hall, 475 N. Charter St., Madison, WI 53706, U.S.A.
Garbarski Dana: Loyola University Chicago, Coffey Hall 440, 1032 W. Sheridan Rd., Chicago, IL 60660, U.S.A.
Wall Ian F.: Steelcase, 901 44th Street SE, Grand Rapids, MI 49508, U.S.A.
Edwards Dorothy Farrar: University of Wisconsin-Madison, 2176 Medical Science Center, 1300 University Avenue, Madison, WI 53706, U.S.A.
Journal of Official Statistics, 2019, vol. 35, issue 2, 353-386
Abstract:
While scales measuring subjective constructs have historically relied on agree-disagree (AD) questions, recent research demonstrates that construct-specific (CS) questions clarify underlying response dimensions that AD questions leave implicit, and that CS questions often yield higher measures of data quality. Yet, given the acknowledged problems with AD questions and the established advantages of CS items, the evidence for the superiority of CS questions is more mixed than one might expect. We build on previous investigations by using cognitive interviewing to deepen understanding of AD and CS response processing and potential sources of measurement error. We randomized 64 participants to receive either an AD or a CS version of a scale measuring trust in medical researchers, and we examine several indicators of data quality and cognitive response processing, including reliability, concurrent validity, recency effects, response latencies, and indicators of response processing difficulties (e.g., uncodable answers). Overall, results indicate that reliability is higher for the AD scale, that neither scale is more valid than the other, and that the CS scale is more susceptible to recency effects for certain questions. Results for response latencies and behavioral indicators provide evidence that the CS questions promote deeper processing. Qualitative analysis reveals five sources of difficulty with response processing that shed light on under-examined reasons why AD and CS questions can produce different results, with CS questions not always yielding higher measures of data quality than AD questions.
Keywords: Agree-disagree questions; questionnaire design; cognitive interviewing; response processes; data quality; construct-specific questions
Date: 2019
Downloads: https://doi.org/10.2478/jos-2019-0017 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:vrs:offsta:v:35:y:2019:i:2:p:353-386:n:4
DOI: 10.2478/jos-2019-0017
Journal of Official Statistics is currently edited by Annica Isaksson and Ingegerd Jansson