Computerized Adaptive Testing for Public Opinion Surveys
Jacob M. Montgomery and Josh Cutler
Political Analysis, 2013, vol. 21, issue 2, 172-192
Abstract:
Survey researchers avoid using large multi-item scales to measure latent traits because of both the financial costs and the risk of driving up nonresponse rates. Typically, investigators select a subset of available scale items rather than asking the full battery. Reduced batteries, however, can sharply reduce measurement precision and introduce bias. In this article, we present computerized adaptive testing (CAT) as a method for minimizing the number of questions each respondent must answer while preserving measurement accuracy and precision. CAT algorithms use individuals' previous answers to select the subsequent questions that most efficiently reveal respondents' positions on a latent dimension. We introduce the basic stages of a CAT algorithm and present the details of one item-selection approach appropriate for public opinion research. We then demonstrate the advantages of CAT via simulation and by empirically comparing dynamic and static measures of political knowledge.
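To illustrate the kind of item-selection step the abstract describes, the sketch below implements one plausible CAT iteration under a two-parameter logistic (2PL) IRT model: estimate the respondent's latent position from answers given so far (here via an expected a posteriori estimate over a grid), then administer the unasked item with maximum Fisher information at that estimate. This is a hedged illustration, not the authors' algorithm; the item bank, parameter values, and function names are hypothetical.

```python
import numpy as np

# Hypothetical item bank for a 2PL IRT model: columns are discrimination (a) and difficulty (b).
ITEM_BANK = np.array([
    [1.2, -1.0],
    [0.8,  0.0],
    [1.5,  0.5],
    [1.0,  1.2],
    [2.0, -0.3],
])

def p_correct(theta, a, b):
    """2PL probability of a 'knowledgeable' (correct) response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap_estimate(asked, responses, grid=np.linspace(-4, 4, 161)):
    """Expected a posteriori estimate of theta from responses so far,
    using a standard-normal prior evaluated on a discrete grid."""
    prior = np.exp(-0.5 * grid ** 2)
    like = np.ones_like(grid)
    for item, resp in zip(asked, responses):
        a, b = ITEM_BANK[item]
        p = p_correct(grid, a, b)
        like *= p if resp == 1 else (1.0 - p)
    post = prior * like
    post /= post.sum()
    return float((grid * post).sum())

def next_item(asked, responses):
    """Pick the unasked item with maximum Fisher information at the
    current theta estimate (a standard maximum-information criterion)."""
    theta = eap_estimate(asked, responses)
    best, best_info = None, -np.inf
    for j in range(len(ITEM_BANK)):
        if j in asked:
            continue
        a, b = ITEM_BANK[j]
        p = p_correct(theta, a, b)
        info = a ** 2 * p * (1.0 - p)  # Fisher information of a 2PL item
        if info > best_info:
            best, best_info = j, info
    return best, theta

# Example: the respondent has answered items 0 and 2, both correctly.
item, theta_hat = next_item(asked=[0, 2], responses=[1, 1])
print(f"current theta estimate: {theta_hat:.2f}; ask item {item} next")
```

In a full CAT administration this step would repeat, updating the latent-trait estimate after each answer, until a stopping rule (for example, a target posterior standard error or a maximum battery length) is met.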
Date: 2013
Persistent link: https://EconPapers.repec.org/RePEc:cup:polals:v:21:y:2013:i:02:p:172-192_01