Unsupervised Deep Relative Neighbor Relationship Preserving Cross-Modal Hashing
Xiaohan Yang,
Zhen Wang,
Nannan Wu,
Guokun Li,
Chuang Feng and
Pingping Liu
Additional contact information
Xiaohan Yang: School of Computer Science and Technology, Shandong University of Technology, Zibo 255000, China
Zhen Wang: School of Computer Science and Technology, Shandong University of Technology, Zibo 255000, China
Nannan Wu: School of Computer Science and Technology, Shandong University of Technology, Zibo 255000, China
Guokun Li: School of Computer Science and Technology, Shandong University of Technology, Zibo 255000, China
Chuang Feng: School of Computer Science and Technology, Shandong University of Technology, Zibo 255000, China
Pingping Liu: Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
Mathematics, 2022, vol. 10, issue 15, 1-17
Abstract:
The image-text cross-modal retrieval task, which aims to retrieve the relevant image from a text query and vice versa, is attracting widespread attention. To respond quickly in large-scale settings, we propose an Unsupervised Deep Relative Neighbor Relationship Preserving Cross-Modal Hashing (DRNPH) method that performs cross-modal retrieval in a common Hamming space, which offers advantages in storage and efficiency. To support nearest neighbor search in the Hamming space, we reconstruct both the original intra- and inter-modal neighbor matrices from the binary feature vectors, so the neighbor relationships among samples of different modalities can be computed directly from Hamming distances. Furthermore, the cross-modal pair-wise similarity preserving constraint requires that similar sample pairs have identical Hamming distances to the anchor; consequently, similar sample pairs share the same binary code and have minimal Hamming distances. Unfortunately, the pair-wise similarity preserving constraint may lead to an imbalanced code problem. We therefore propose the cross-modal triplet relative similarity preserving constraint, which requires the Hamming distances of similar pairs to be smaller than those of dissimilar pairs, so that the ranking order of samples in the retrieval results can be distinguished. Moreover, a large similarity margin boosts the algorithm's robustness to noise. We conduct cross-modal retrieval comparative experiments and an ablation study on two public datasets, MIRFlickr and NUS-WIDE. The experimental results show that DRNPH outperforms state-of-the-art approaches in various image-text retrieval scenarios, and that all three proposed constraints are necessary and effective for boosting cross-modal retrieval performance.
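The triplet relative similarity constraint described above can be illustrated with a minimal sketch: a hinge loss that requires the anchor's Hamming distance to a similar sample to be smaller than its distance to a dissimilar sample by at least a margin. This is an assumption-based illustration of the general idea, not the paper's actual training objective; the function names (`hamming_distance`, `triplet_relative_loss`) and the {-1, +1} code convention are hypothetical choices for the example, and in practice such methods typically relax the binary codes to continuous values during optimization.

```python
import numpy as np

def hamming_distance(a, b):
    # For binary codes in {-1, +1}^k, the Hamming distance equals
    # (k - <a, b>) / 2, since agreeing bits contribute +1 to the
    # inner product and disagreeing bits contribute -1.
    return 0.5 * (len(a) - np.dot(a, b))

def triplet_relative_loss(anchor, positive, negative, margin=2.0):
    # Hinge-style triplet loss on Hamming distances: the distance to the
    # similar (positive) sample should be smaller than the distance to the
    # dissimilar (negative) sample by at least `margin`; otherwise a
    # penalty proportional to the violation is incurred.
    d_pos = hamming_distance(anchor, positive)
    d_neg = hamming_distance(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)
```

For example, with 4-bit codes, an anchor whose positive neighbor differs in 3 bits and whose negative differs in only 1 bit violates the ordering and receives a positive loss, while a triplet whose negative is already farther than the positive by more than the margin contributes zero.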
Keywords: cross-modal retrieval; image-text retrieval; cross-modal similarity preserving; hashing algorithm; unsupervised learning
JEL-codes: C
Date: 2022
Downloads: (external link)
https://www.mdpi.com/2227-7390/10/15/2644/pdf (application/pdf)
https://www.mdpi.com/2227-7390/10/15/2644/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:10:y:2022:i:15:p:2644-:d:874042
Mathematics is currently edited by Ms. Emma He