Who Is the Culprit? A Commentary on Moderator Detection
Hannah M. Markell and Jose M. Cortina
Industrial and Organizational Psychology, 2017, vol. 10, issue 3, 465-467
Abstract:
Over the years, many in the field of organizational psychology have claimed that meta-analytic tests for moderators provide evidence for validity generalization (Schmidt & Hunter, 1977), a term first used in the middle of the last century (Mosier, 1950). In response, Tett, Hundley, and Christiansen (2017) advise caution about our inclination to generalize findings across workplaces and domains, and they urge precision in attaching meaning to the statistic being generalized. Their focal article is insightful and offers important recommendations for researchers regarding certain statistical indicators of unexplained variability, such as SDρ. In this commentary, we make a different point about SDρ: it, and other statistics based on residual variance, will be deflated when the moderators themselves vary little across the primary studies. It is this lack of between-study variance, as much as anything else, that leads to misguided conclusions about validity generalization.
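For readers who want the deflation argument made concrete, a minimal sketch of the bare-bones Hunter-Schmidt residual-variance logic is given below. The notation (r̄, S_r², S_e², N̄, A) is ours, not the commentary's, and the formulas are the standard bare-bones estimates rather than anything specific to this article.

% Bare-bones Hunter-Schmidt estimate of SD_rho (assumed notation)
\[
  \bar{r} = \frac{\sum_i N_i r_i}{\sum_i N_i}, \qquad
  S_r^2 = \frac{\sum_i N_i (r_i - \bar{r})^2}{\sum_i N_i},
\]
\[
  S_e^2 = \frac{(1 - \bar{r}^2)^2}{\bar{N} - 1}, \qquad
  SD_\rho \approx \frac{\sqrt{\max\!\left(S_r^2 - S_e^2,\; 0\right)}}{A},
\]
% A is the compound attenuation factor used for artifact corrections
% (A = 1 in the bare-bones case).

Under this logic, if the primary studies happen to sample only a narrow slice of the possible moderator levels, the observed variance S_r² shrinks toward the sampling-error variance S_e², and SDρ is driven toward zero, seeming to support generalization even when the moderator would matter across the full range of settings.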
Date: 2017
Persistent link: https://EconPapers.repec.org/RePEc:cup:inorps:v:10:y:2017:i:03:p:465-467_00