Stemming algorithms: A case study for detailed evaluation
David A. Hull
Journal of the American Society for Information Science, 1996, vol. 47, issue 1, 70-84
Abstract:
The majority of information retrieval experiments are evaluated by measures such as average precision and average recall. Fundamental decisions about the superiority of one retrieval technique over another are made solely on the basis of these measures. We claim that average performance figures need to be validated with a careful statistical analysis and that there is a great deal of additional information that can be uncovered by looking closely at the results of individual queries. This article is a case study of stemming algorithms which describes a number of novel approaches to evaluation and demonstrates their value. © 1996 John Wiley & Sons, Inc.
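To make the abstract's contrast concrete, here is a minimal sketch (not drawn from the article itself) of the kind of per-query analysis it advocates: average precision is computed for each query under two stemming variants, and the averaged figures are then validated with a paired significance test. The article examines several statistical procedures; the Wilcoxon signed-rank test below is simply one standard paired test, and every score and document identifier here is an invented placeholder.

    from scipy.stats import wilcoxon

    def average_precision(ranked_ids, relevant_ids):
        """Mean of the precision values at each relevant document's rank,
        averaged over all relevant documents (unretrieved relevant
        documents contribute zero)."""
        relevant = set(relevant_ids)
        hits, precisions = 0, []
        for rank, doc_id in enumerate(ranked_ids, start=1):
            if doc_id in relevant:
                hits += 1
                precisions.append(hits / rank)
        return sum(precisions) / len(relevant) if relevant else 0.0

    # Example: relevant docs {d1, d4}; the system returns d1, d2, d4, d3.
    # AP = (1/1 + 2/3) / 2 ~= 0.833
    print(average_precision(["d1", "d2", "d4", "d3"], {"d1", "d4"}))

    # Hypothetical per-query average-precision scores for two variants.
    ap_stemmed   = [0.61, 0.45, 0.80, 0.33, 0.52, 0.70, 0.41, 0.66]
    ap_unstemmed = [0.54, 0.47, 0.62, 0.30, 0.43, 0.58, 0.40, 0.51]

    # Averaged figures alone can hide large per-query swings ...
    print("mean AP, stemmed:  ", sum(ap_stemmed) / len(ap_stemmed))
    print("mean AP, unstemmed:", sum(ap_unstemmed) / len(ap_unstemmed))

    # ... so validate the comparison with a paired significance test.
    stat, p_value = wilcoxon(ap_stemmed, ap_unstemmed)
    print("Wilcoxon signed-rank p-value:", p_value)

Inspecting the per-query score pairs themselves, rather than only the two means, is exactly the "additional information" the abstract argues is uncovered by looking closely at individual queries.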
Date: 1996
Citations: 4 (as counted in EconPapers)
Downloads: https://doi.org/10.1002/(SICI)1097-4571(199601)47:13.0.CO;2-#
Persistent link: https://EconPapers.repec.org/RePEc:bla:jamest:v:47:y:1996:i:1:p:70-84
Ordering information: This journal article can be ordered from https://doi.org/10.1002/(ISSN)1097-4571