Performing evaluation studies in information science
Rowena Weiss Swanson and Joseph Mayer
Journal of the American Society for Information Science, 1975, vol. 26, issue 3, pp. 140-156
Abstract:
This paper considers conceptual and methodological components of information science evaluation studies. It discusses the judgmental process of evaluation and the scientific nature of evaluation study in the context of purpose statements; criteria; the selection of variables and of data collection and analysis techniques; and requirements of validity, reproducibility and reliability. Industrial value analysis/engineering methodology is described and related to assessments of information products and services. The state of the art of evaluation study in information science is analyzed with respect to (1) the scope of evaluation studies; (2) the use of laboratory‐type environments; (3) the use of surrogate judges; (4) the selection of variables; (5) the frequency of study; and (6) the comparability of study results. Evaluation study is seen as essential to the management of information centers and systems and as having appreciable growth potential.
Date: 1975
Downloads: https://doi.org/10.1002/asi.4630260303
Persistent link: https://EconPapers.repec.org/RePEc:bla:jamest:v:26:y:1975:i:3:p:140-156
Ordering information: This journal article can be ordered from https://doi.org/10.1002/(ISSN)1097-4571