Evaluating interactive systems in TREC
Micheline Beaulieu,
Stephen Robertson and
Edie Rasmussen
Journal of the American Society for Information Science, 1996, vol. 47, issue 1, 85-94
Abstract:
The TREC (Text REtrieval Conference) experiments were designed to allow large‐scale laboratory testing of information retrieval techniques. As the experiments have progressed, groups within TREC have become increasingly interested in finding ways to allow user interaction without invalidating the experimental design. The development of an "interactive track" within TREC to accommodate user interaction has required some modifications in the way the retrieval task is designed. In particular, there is a need to simulate a realistic interactive searching task within a laboratory environment. Through successive interactive studies in TREC, the Okapi team at City University London has identified methodological issues relevant to this process. A diagnostic experiment was conducted as a follow‐up to TREC searches which attempted to isolate the human and automatic contributions to query formulation and retrieval performance. © 1996 John Wiley & Sons, Inc.
Date: 1996
Citations: View citations in EconPapers (1)
Downloads: https://doi.org/10.1002/(SICI)1097-4571(199601)47:13.0.CO;2-Z
Persistent link: https://EconPapers.repec.org/RePEc:bla:jamest:v:47:y:1996:i:1:p:85-94
Ordering information: This journal article can be ordered from
https://doi.org/10.1002/(ISSN)1097-4571
More articles in Journal of the American Society for Information Science from Association for Information Science & Technology
Bibliographic data for series maintained by Wiley Content Delivery.