DIF Statistical Inference Without Knowing Anchoring Items
Yunxiao Chen,
Chengcheng Li,
Jing Ouyang and
Gongjun Xu
Additional contact information
Yunxiao Chen: London School of Economics and Political Science
Chengcheng Li: University of Michigan
Jing Ouyang: University of Michigan
Gongjun Xu: University of Michigan
Psychometrika, 2023, vol. 88, issue 4, No 1, 1097-1122
Abstract:
Establishing the invariance property of an instrument (e.g., a questionnaire or test) is a key step in establishing its measurement validity. Measurement invariance is typically assessed by differential item functioning (DIF) analysis, i.e., detecting DIF items whose response distribution depends not only on the latent trait measured by the instrument but also on group membership. DIF analysis is confounded by the group difference in the latent trait distributions. Many DIF analyses require knowing several anchor items that are DIF-free in order to draw inferences on whether each of the remaining items is a DIF item, where the anchor items are used to identify the latent trait distributions. When no prior information on anchor items is available, or some anchor items are misspecified, item purification methods and regularized estimation methods can be used. The former iteratively purifies the anchor set by a stepwise model selection procedure, and the latter selects the DIF-free items by a LASSO-type regularization approach. Unfortunately, unlike methods based on a correctly specified anchor set, these methods are not guaranteed to provide valid statistical inference (e.g., confidence intervals and p-values). In this paper, we propose a new method for DIF analysis under a multiple indicators and multiple causes (MIMIC) model for DIF. This method adopts a minimal $$L_1$$ norm condition for identifying the latent trait distributions. Without requiring prior knowledge about an anchor set, it can accurately estimate the DIF effects of individual items and further draw valid statistical inferences for quantifying the uncertainty. Specifically, the inference results allow us to control the type-I error for DIF detection, which may not be possible with item purification and regularized estimation methods. We conduct simulation studies to evaluate the performance of the proposed method and compare it with the anchor-set-based likelihood ratio test approach and the LASSO approach. The proposed method is applied to analysing the three personality scales of the Eysenck Personality Questionnaire-Revised (EPQ-R).
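To make the identification idea concrete, the following is a minimal, illustrative sketch (not the authors' code) of the minimal $$L_1$$ norm condition described in the abstract. It assumes a MIMIC-type parameterization in which each item j carries a loading a_j and a group (DIF) effect gamma_j, and in which the model is unchanged when the group mean of the latent trait shifts by c while each gamma_j shifts by -a_j c. Under that assumption, choosing the representation whose DIF effects have the smallest $$L_1$$ norm reduces to a least-absolute-deviations (weighted-median-type) problem; all function names and the toy data below are hypothetical.

```python
import numpy as np

def minimal_l1_shift(gamma, a):
    """Return the shift c minimizing sum_j |gamma_j - a_j * c|.

    The objective is convex and piecewise linear in c, so a minimizer
    lies at one of the breakpoints gamma_j / a_j (for a_j != 0).
    """
    gamma = np.asarray(gamma, dtype=float)
    a = np.asarray(a, dtype=float)
    candidates = gamma[a != 0] / a[a != 0]
    objective = [np.sum(np.abs(gamma - a * c)) for c in candidates]
    return candidates[int(np.argmin(objective))]

# Toy example: naive per-item group effects absorb a common latent-trait
# shift of 0.5; items 9 and 10 additionally carry genuine DIF.
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, size=10)                 # item loadings
true_shift = 0.5
gamma_naive = a * true_shift + rng.normal(0, 0.05, size=10)
gamma_naive[8:] += np.array([1.0, -1.2])           # genuine DIF effects

c_hat = minimal_l1_shift(gamma_naive, a)           # recovers ~0.5
dif_effects = gamma_naive - a * c_hat              # near zero except items 9-10
print(round(c_hat, 2), np.round(dif_effects, 2))
```

This only illustrates the identification step; in the article the condition is embedded in a full estimation procedure that also delivers confidence intervals and p-values for the individual DIF effects.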
Keywords: differential item functioning; measurement invariance; item response theory; least absolute deviations; confidence interval
Date: 2023
Downloads: http://link.springer.com/10.1007/s11336-023-09930-9 (abstract, text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:psycho:v:88:y:2023:i:4:d:10.1007_s11336-023-09930-9
Ordering information: This journal article can be ordered from
http://www.springer. ... gy/journal/11336/PS2
DOI: 10.1007/s11336-023-09930-9
Psychometrika is currently edited by Irini Moustaki
More articles in Psychometrika from Springer, The Psychometric Society
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.