Developing an Online Synthetic Validation Tool
Rodney A. McCloy, Dan J. Putka, and Robert E. Gibby
Industrial and Organizational Psychology, 2010, vol. 3, issue 3, 366-370
Abstract:
In their comprehensive apologetic treatment of synthetic validity, Johnson et al. (2010) echo Hough (2001) in advocating the development of a central synthetic validation database, which would serve as a repository of validity information to support future synthetic validation efforts. They offer two potential approaches to developing such a database. The first entails conducting “a large-scale study in which tests are administered to and performance ratings gathered on incumbents in a large number of jobs in a variety of organizations.” The authors consider this approach “ideal but impractical,” largely because of the scope and cost of the required data collection and the resulting investment demanded of any sponsoring organization. The second entails conducting multiple local studies to generate empirical estimates of the relationships between measures of various predictor constructs and a standardized set of job components. Johnson et al. consider this approach more practical, citing Meyer, Dalal, and Bonaccio (2009) as a benchmark example.
Date: 2010
Downloads: https://www.cambridge.org/core/product/identifier/ ... type/journal_article (link to article abstract page, text/html)
Persistent link: https://EconPapers.repec.org/RePEc:cup:inorps:v:3:y:2010:i:03:p:366-370_00
More articles in Industrial and Organizational Psychology from Cambridge University Press, UPH, Shaftesbury Road, Cambridge CB2 8BS, UK.
Bibliographic data for this series is maintained by Kirk Stebbing.