MF‐Re‐Rank: A modality feature‐based Re‐Ranking model for medical image retrieval
Hajer Ayadi,
Mouna Torjmen‐Khemakhem,
Mariam Daoud,
Jimmy Xiangji Huang and
Maher Ben Jemaa
Journal of the Association for Information Science & Technology, 2018, vol. 69, issue 9, 1095-1108
Abstract:
One of the main challenges in medical image retrieval is the growing volume of image data, which makes it difficult for domain experts to find relevant information in large data sets. Effective and efficient medical image retrieval systems are therefore needed to better manage medical image information. Text‐based image retrieval (TBIR) has been very successful at retrieving images from their textual descriptions. However, many TBIR approaches rely on bag‐of‐words models, which turn image retrieval into standard text‐based information retrieval and thereby ignore the meanings and values of specific medical entities in the text and metadata during image representation and retrieval. We believe that TBIR should instead extract specific medical entities and terms and exploit them to achieve better image retrieval results. Therefore, we propose a novel reranking method based on medical‐image‐dependent features, selected manually by a medical expert from imaging modalities and medical terminology. First, we represent queries and images using only medical‐image‐dependent features such as image modality and image scale. Second, we exploit these features in a new reranking method for medical image retrieval. Our motivation is the strong influence of image modality on medical image retrieval and its impact on image‐relevance scores. To evaluate our approach, we performed a series of experiments on the medical ImageCLEF data sets from 2009 to 2013, using the BM25 model, a language model, and an image‐relevance feedback model as baselines. The experimental results show that, compared with the BM25 model, the proposed model significantly improves image retrieval performance. We also compared our approach with other state‐of‐the‐art approaches and show that it performs comparably to the top three runs in the official ImageCLEF competition.
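As a rough illustration of the kind of reranking the abstract describes, the sketch below combines an initial text-retrieval score (e.g., from BM25) with a bonus for agreement between the modality feature extracted from the query and from each image's textual description. This is a minimal sketch under assumptions: the keyword lists, the extraction function, the linear combination, and the weight alpha are illustrative choices, not the paper's actual feature set or formulation.

# Illustrative sketch (not the paper's exact method): rerank an initial
# text-retrieval result list using a medical-image-dependent feature,
# here image modality (e.g., CT, MRI, X-ray). All names and weights below
# are assumptions for illustration only.

MODALITY_TERMS = {
    "ct": ["ct", "computed tomography"],
    "mri": ["mri", "magnetic resonance"],
    "xray": ["x-ray", "xray", "radiograph"],
    "ultrasound": ["ultrasound", "sonogram"],
}

def extract_modality(text):
    """Return the first modality whose keywords appear in the text, if any."""
    text = text.lower()
    for modality, keywords in MODALITY_TERMS.items():
        if any(k in text for k in keywords):
            return modality
    return None

def rerank(query, results, alpha=0.5):
    """Rerank (image_id, caption, score) triples by interpolating the initial
    retrieval score with a modality-match bonus; alpha is an assumed weight."""
    query_modality = extract_modality(query)
    reranked = []
    for image_id, caption, score in results:
        match = query_modality is not None and extract_modality(caption) == query_modality
        bonus = 1.0 if match else 0.0
        reranked.append((image_id, (1 - alpha) * score + alpha * bonus))
    return sorted(reranked, key=lambda pair: pair[1], reverse=True)

For example, given the query "CT images of the chest", images whose captions mention "computed tomography" would be promoted above equally scored images of other modalities.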
Date: 2018
DOI: https://doi.org/10.1002/asi.24045
Persistent link: https://EconPapers.repec.org/RePEc:bla:jinfst:v:69:y:2018:i:9:p:1095-1108