Same Test, Better Scores: Boosting the Reliability of Short Online Intelligence Recruitment Tests with Nested Logit Item Response Theory Models

Martin Storme, Nils Myszkowski, Simon Baron and David Bernard
Additional contact information
Nils Myszkowski: Department of Psychology, Pace University
Simon Baron: Assess First
David Bernard: Assess First

Post-Print from HAL

Abstract: Assessing job applicants' general mental ability online poses psychometric challenges because tests must be both brief and accurate. Recent research (Myszkowski & Storme, 2018) suggests that recovering distractor information through Nested Logit Models (NLM; Suh & Bolt, 2010) increases the reliability of ability estimates in matrix-type reasoning tests. In the present research, we extended this result to a different context (online intelligence testing for recruitment) and a larger sample (N = 2,949 job applicants). We found that the NLMs outperformed the Nominal Response Model (Bock, 1972) and provided significant reliability gains over their binary logistic counterparts. In line with previous research, the gain in reliability was obtained especially at low ability levels. Implications and practical recommendations are discussed.
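The model structure the abstract refers to can be illustrated with a short sketch. In a two-parameter nested logit model (2PL-NLM; Suh & Bolt, 2010), a 2PL model governs whether the correct option is chosen, and a nominal model distributes the remaining probability across the distractors. The function and all parameter values below are hypothetical illustrations, not the authors' estimation code:

```python
import numpy as np

def nlm_category_probs(theta, a, b, lam, zeta):
    """Category probabilities for one item under a 2PL nested logit model.

    theta     -- examinee ability
    a, b      -- 2PL discrimination and difficulty for the correct option
    lam, zeta -- nominal intercepts and slopes, one per distractor
    Returns (p_correct, p_distractors); the probabilities sum to 1.
    """
    # "Solution" node: probability of answering correctly (2PL)
    p_correct = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # Nominal model among distractors, conditional on an incorrect response
    z = lam + zeta * theta
    w = np.exp(z - z.max())          # numerically stable softmax
    p_distractors = (1.0 - p_correct) * w / w.sum()
    return p_correct, p_distractors

# Illustrative (made-up) parameters for a four-option item
pc, pd = nlm_category_probs(theta=-1.0, a=1.2, b=0.5,
                            lam=np.array([0.4, -0.1, -0.3]),
                            zeta=np.array([-0.8, 0.2, 0.6]))
```

Because the distractor slopes (zeta) vary, the distractor an examinee picks carries information about theta beyond the simple right/wrong score, which is the mechanism behind the reliability gains reported in the paper, particularly at low ability where incorrect responses dominate.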

Keywords: E-assessment; general mental ability; nested logit models; item-response theory; ability-based guessing (search for similar items in EconPapers)
Date: 2019-09

Published in Journal of Intelligence, 2019, 7 (3), pp.17. ⟨10.3390/jintelligence7030017⟩




Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-03001692

DOI: 10.3390/jintelligence7030017


More papers in Post-Print from HAL
Bibliographic data for series maintained by CCSD.

Handle: RePEc:hal:journl:hal-03001692