Measuring the reliability of subject classification by men and machines
Harold Borko
American Documentation, 1964, vol. 15, issue 4, 268-273
Abstract:
Procedures for measuring the consistency of document classification are described. Three subject specialists classified 997 abstracts of psychological reports into one of eleven categories. These abstracts were also mechanically classified by a computer program using a factor‐score computational procedure. Each abstract was scored for all categories and assigned to the one with the highest score. The three manual classifications were compared with each other and with the mechanical classifications, and a series of contingency coefficients was computed. The average reliability of manual classification procedures was equal to .870. The correlation between automatic and manual classification was .766.
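The agreement measure reported in the abstract is the contingency coefficient, which summarizes a cross-tabulation of two classifiers' category assignments. A minimal sketch of that computation, using Pearson's coefficient C = sqrt(chi² / (chi² + n)); the function name and example labels below are illustrative, not from the original study:

```python
import math
from collections import Counter

def contingency_coefficient(labels_a, labels_b):
    """Pearson's contingency coefficient C = sqrt(chi2 / (chi2 + n))
    for two categorical labelings of the same set of documents."""
    n = len(labels_a)
    joint = Counter(zip(labels_a, labels_b))   # observed cell counts
    row = Counter(labels_a)                    # marginal counts, classifier A
    col = Counter(labels_b)                    # marginal counts, classifier B
    chi2 = 0.0
    for a in row:
        for b in col:
            expected = row[a] * col[b] / n     # expected count under independence
            observed = joint.get((a, b), 0)
            chi2 += (observed - expected) ** 2 / expected
    return math.sqrt(chi2 / (chi2 + n))

# Hypothetical example: two raters assigning documents to categories
rater1 = ["learning", "learning", "perception", "perception", "social"]
rater2 = ["learning", "learning", "perception", "social", "social"]
print(contingency_coefficient(rater1, rater2))
```

Note that C is bounded below 1 even for perfect agreement (its maximum depends on the number of categories), which is one reason the reported values such as .870 should be read relative to that ceiling rather than to 1.0.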
Date: 1964
DOI: https://doi.org/10.1002/asi.5090150405
Persistent link: https://EconPapers.repec.org/RePEc:bla:amedoc:v:15:y:1964:i:4:p:268-273
Ordering information: This journal article can be ordered from
https://doi.org/10.1002/(ISSN)1936-6108
American Documentation is currently edited by Javed Mostafa