When information retrieval measures agree about the relative quality of document rankings
Robert M. Losee
Journal of the American Society for Information Science, 2000, vol. 51, issue 9, 834-840
Abstract:
The variety of performance measures available for information retrieval systems, search engines, and network filtering agents can be confusing to both practitioners and scholars. Most discussions of these measures address their theoretical foundations and the characteristics that make a measure desirable for a particular application. In this work, we consider how measures of performance at a point in a search may be formally compared. Criteria are developed that allow one to determine the percentage of time, or the conditions, under which two different performance measures suggest that one document ordering is superior to another, or when the two measures disagree about the relative value of document orderings. As an example, graphs illustrate the relationships between precision and F.
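The abstract's central question, whether two measures prefer the same document ordering, can be made concrete with a small sketch. The following Python example is not Losee's formal criteria; it uses invented rankings and relevance counts to show that precision and the F measure can disagree about which of two searches produced the better ordering at a fixed cutoff:

```python
# A minimal sketch (not the paper's method): check whether precision and
# the F measure agree about which of two document orderings is better at
# a fixed cutoff. Rankings and relevance totals below are invented.

def precision_at(ranking, k):
    """Fraction of the top-k documents that are relevant (1 = relevant)."""
    return sum(ranking[:k]) / k

def f_at(ranking, k, total_relevant):
    """F measure (harmonic mean of precision and recall) at cutoff k."""
    p = precision_at(ranking, k)
    r = sum(ranking[:k]) / total_relevant
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

k = 3
# Search A: 2 of the top 3 documents are relevant, but 10 relevant exist.
ranking_a, relevant_a = [1, 1, 0, 0, 0], 10
# Search B: 1 of the top 3 is relevant, and it is the only relevant document.
ranking_b, relevant_b = [0, 1, 0, 0, 0], 1

pa, pb = precision_at(ranking_a, k), precision_at(ranking_b, k)
fa, fb = f_at(ranking_a, k, relevant_a), f_at(ranking_b, k, relevant_b)

# The measures "agree" when both prefer the same ordering (or both tie).
agree = (pa - pb) * (fa - fb) >= 0
print(f"precision: A={pa:.3f} B={pb:.3f}; F: A={fa:.3f} B={fb:.3f}; agree={agree}")
```

Here precision prefers search A (2/3 vs. 1/3) while F, which also rewards recall, prefers search B (0.5 vs. roughly 0.31), so the two measures disagree; Losee's criteria quantify how often such disagreements arise across orderings.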
Date: 2000
Citations: View citations in EconPapers (2)
Downloads:
https://doi.org/10.1002/(SICI)1097-4571(2000)51:93.0.CO;2-1
Persistent link: https://EconPapers.repec.org/RePEc:bla:jamest:v:51:y:2000:i:9:p:834-840
Ordering information: This journal article can be ordered from
https://doi.org/10.1002/(ISSN)1097-4571
More articles in Journal of the American Society for Information Science from Association for Information Science & Technology
Bibliographic data for series maintained by Wiley Content Delivery.