Item Response Theory Models Applied to Data Allowing Examinee Choice
Eric T. Bradlow and Neal Thomas
Journal of Educational and Behavioral Statistics, 1998, vol. 23, issue 3, 236-243
Abstract:
Examinations that permit students to choose a subset of the items are popular, despite the potential that students may, as a result of their choices, take examinations of varying difficulty. We provide a set of conditions for the validity of inference for Item Response Theory (IRT) models applied to data collected from choice-based examinations. Valid likelihood and Bayesian inference using standard estimation methods requires (except in extraordinary circumstances) that, after conditioning on the observed item responses, examinees' choices be independent of both their (potential but unobserved) responses to omitted items and their latent abilities. These independence assumptions are typical of those required in much more general settings. Common low-dimensional IRT models estimated by standard methods, though potentially useful tools for educational data, do not resolve the difficult problems posed by choice-based data.
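As a rough illustrative sketch (the notation below is ours, not taken from the article), the independence condition described in the abstract can be written in standard missing-data terms. Let $\theta_i$ denote examinee $i$'s latent ability, $Y_i^{\mathrm{obs}}$ the responses to the chosen items, $Y_i^{\mathrm{mis}}$ the potential but unobserved responses to the omitted items, and $C_i$ the vector of item choices. The condition for valid inference is, roughly,
$$P\bigl(C_i \mid Y_i^{\mathrm{obs}}, Y_i^{\mathrm{mis}}, \theta_i\bigr) = P\bigl(C_i \mid Y_i^{\mathrm{obs}}\bigr),$$
i.e., given the observed responses, the choices carry no further information about the omitted responses or the latent ability; this is the ignorability-type assumption under which standard likelihood and Bayesian IRT estimation that simply drops the unchosen items remains valid.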
Keywords: examinee choice; item response theory; missing data
Date: 1998
Downloads: https://journals.sagepub.com/doi/10.3102/10769986023003236 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:sae:jedbes:v:23:y:1998:i:3:p:236-243
DOI: 10.3102/10769986023003236