Unsupervised Person Re-Identification via Deep Attribute Learning
Shun Zhang,
Yaohui Xu,
Xuebin Zhang,
Boyang Cheng and
Ke Wang
Additional contact information
Shun Zhang: School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China
Yaohui Xu: School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China
Xuebin Zhang: School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China
Boyang Cheng: School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China
Ke Wang: China Railway First Survey and Design Institute Group Co., Ltd., Xi’an 710043, China
Future Internet, 2025, vol. 17, issue 8, 1-24
Abstract:
Driven by growing public security demands and the advancement of intelligent surveillance systems, person re-identification (ReID) has emerged as a prominent research focus in the field of computer vision. However, the task remains challenging because it is highly sensitive to variations in visual appearance caused by factors such as body pose and camera parameters. Although deep learning-based methods have achieved marked progress in ReID, the high cost of annotation remains a challenge that cannot be overlooked. To address this, we propose an unsupervised attribute learning framework that eliminates the need for costly manual annotations while maintaining high accuracy. The framework learns mid-level human attributes (such as clothing type and gender) that are robust to substantial variations in visual appearance, and can therefore achieve accurate attribute prediction from only a small amount of labeled data. To realize this framework, we present a part-based convolutional neural network (CNN) architecture with two components: one learns whole-body images and attributes at a global level, and the other learns upper- and lower-body images and attributes at a local level. The proposed architecture is trained to learn attribute-semantic and identity-discriminative feature representations simultaneously. For model learning, we first train our part-based network using a supervised approach on a labeled attribute dataset. Then, we apply an unsupervised clustering method to assign pseudo-labels to unlabeled images in a target dataset using our trained network. To improve feature compatibility, we introduce an attribute consistency scheme for unsupervised domain adaptation on this unlabeled target data. During training on the target dataset, we alternately perform three steps: extracting features with the updated model, assigning pseudo-labels to unlabeled images, and fine-tuning the model.
Through a unified framework that fuses complementary attribute-label and identity label information, our approach achieves considerable improvements of 10.6% and 3.91% mAP on Market-1501→DukeMTMC-ReID and DukeMTMC-ReID→Market-1501 unsupervised domain adaptation tasks, respectively.
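The alternating procedure described in the abstract (extract features, cluster them into pseudo-labels, fine-tune, repeat) can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the toy k-means stands in for their unsupervised clustering method, and "fine-tuning" is mimicked by pulling each feature toward its cluster centroid, which tightens the clusters round by round. All function names here (`cluster_pseudo_labels`, `self_train`) are hypothetical.

```python
from math import dist

def cluster_pseudo_labels(features, k, iters=10):
    """Toy k-means: each image's cluster index becomes its pseudo identity label."""
    # Deterministic init: spread the initial centroids across the feature list.
    step = max(1, len(features) // k)
    centroids = [features[i * step] for i in range(k)]
    labels = [0] * len(features)
    for _ in range(iters):
        # Assign every feature to its nearest centroid.
        labels = [min(range(k), key=lambda c: dist(f, centroids[c]))
                  for f in features]
        # Re-estimate each centroid from its current members.
        for c in range(k):
            members = [f for f, l in zip(features, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(v) / len(members)
                                     for v in zip(*members))
    return labels, centroids

def self_train(features, k, rounds=3, lr=0.5):
    """Alternate the three steps from the abstract: extract features with the
    current model, assign pseudo-labels by clustering, and fine-tune.  Here the
    fine-tuning step is simulated by moving each feature a fraction `lr` of the
    way toward its assigned centroid."""
    labels = []
    for _ in range(rounds):
        labels, centroids = cluster_pseudo_labels(features, k)
        features = [tuple(x + lr * (c - x) for x, c in zip(f, centroids[l]))
                    for f, l in zip(features, labels)]
    return labels, features
```

In the real framework the fine-tuning step would update the part-based CNN's weights with the pseudo-labels as supervision; the sketch only captures the control flow of the alternation.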
Keywords: person re-identification; attribute learning; self-training; unsupervised domain adaptation
JEL-codes: O3
Date: 2025
Downloads: (external link)
https://www.mdpi.com/1999-5903/17/8/371/pdf (application/pdf)
https://www.mdpi.com/1999-5903/17/8/371/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:17:y:2025:i:8:p:371-:d:1725511
Future Internet is currently edited by Ms. Grace You
Bibliographic data for series maintained by MDPI Indexing Manager.