EconPapers
Optimizing the Use of Response Times for Item Selection in Computerized Adaptive Testing

Edison M. Choe, Justin L. Kern and Hua-Hua Chang
Author affiliations:
Edison M. Choe: Graduate Management Admission Council
Justin L. Kern: University of California, Merced
Hua-Hua Chang: University of Illinois at Urbana-Champaign

Journal of Educational and Behavioral Statistics, 2018, vol. 43, issue 2, 135-158

Abstract: Although it is commonly operationalized this way, the measurement efficiency of computerized adaptive testing should be assessed not only by the number of items administered but also by the time it takes to complete the test. To this end, a recent study introduced a novel item selection criterion that maximizes Fisher information per unit of expected response time (RT), which was shown to effectively reduce the average completion time of a fixed-length test with minimal loss in the accuracy of ability estimation. However, because this method also produced extremely unbalanced item exposure, a-stratification with b-blocking was recommended as a means of counterbalancing. Although exceptionally effective in this regard, it comes at the substantial cost of attenuating the reduction in average testing time, increasing the variance of testing times, and further decreasing estimation accuracy. This article therefore investigated several alternative methods of item exposure control, of which the most promising was a simple modification: maximizing Fisher information per unit of centered expected RT. The key advantage of the proposed method is the flexibility to choose a centering value according to a desired distribution of testing times and level of exposure control. Moreover, the centered expected RT can be exponentially weighted to calibrate the degree of measurement precision. The results of extensive simulations, with both simulated and real item pools and examinees, demonstrate that optimally chosen centering and weighting values can markedly reduce the mean and variance of both testing times and test overlap, all without much compromise in estimation accuracy.
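The selection rule described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a 2PL item response model, treats each item's expected RT as a known quantity, and assumes the centered, weighted criterion takes the hypothetical form I_j(θ) / (E[T_j] − c)^w, where c is the centering value and w the exponential weight (with c = 0 and w = 1 recovering plain information-per-unit-time, and w = 0 recovering maximum information).

```python
import math

def fisher_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item(theta, items, c=0.0, w=1.0, administered=()):
    """Pick the unadministered item maximizing
    I_j(theta) / (E[T_j] - c)**w  -- a hypothetical form of the
    centered, exponentially weighted criterion in the abstract.

    items: sequence of (a, b, expected_rt) tuples; expected_rt - c
    is assumed positive for every candidate item.
    """
    best_idx, best_val = None, -math.inf
    for j, (a, b, expected_rt) in enumerate(items):
        if j in administered:
            continue  # simple exposure bookkeeping within one test
        val = fisher_info_2pl(theta, a, b) / (expected_rt - c) ** w
        if val > best_val:
            best_idx, best_val = j, val
    return best_idx
```

With two equally informative items, the faster one is chosen; setting w = 0 ignores RT entirely and falls back to maximum Fisher information. In practice the centering value c would be tuned against the desired testing-time distribution, as the article investigates.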

Keywords: computerized adaptive testing; response time; item selection; item exposure; test overlap
Date: 2018

Full text: https://journals.sagepub.com/doi/10.3102/1076998617723642 (text/html)


Persistent link: https://EconPapers.repec.org/RePEc:sae:jedbes:v:43:y:2018:i:2:p:135-158

DOI: 10.3102/1076998617723642



Handle: RePEc:sae:jedbes:v:43:y:2018:i:2:p:135-158