
A Noisy Sample Selection Framework Based on a Mixup Loss and Recalibration Strategy

Qian Zhang, Yu De, Xinru Zhou, Hanmeng Gong, Zheng Li, Yiming Liu and Ruirui Shao
Additional contact information
Qian Zhang: School of Information Technology, Jiangsu Open University, Nanjing 210036, China
Yu De: School of Information Technology, Jiangsu Open University, Nanjing 210036, China
Xinru Zhou: School of Information Technology, Jiangsu Open University, Nanjing 210036, China
Hanmeng Gong: School of Information Technology, Jiangsu Open University, Nanjing 210036, China
Zheng Li: School of Information Technology, Jiangsu Open University, Nanjing 210036, China
Yiming Liu: School of Information Technology, Jiangsu Open University, Nanjing 210036, China
Ruirui Shao: School of Information Technology, Jiangsu Open University, Nanjing 210036, China

Mathematics, 2024, vol. 12, issue 15, 1-22

Abstract: Deep neural networks (DNNs) have achieved breakthrough progress in various fields, largely owing to the support of large-scale datasets with manually annotated labels. However, obtaining such datasets is costly and time-consuming, making high-quality annotation a challenging task. In this work, we propose an improved noisy sample selection method, a sample selection framework based on a mixup loss and recalibration strategy (SMR), which enhances the robustness and generalization ability of models. First, we introduce a robust mixup loss function to pre-train two models with identical structures separately; this avoids additional hyperparameter tuning and reduces the need for prior knowledge of the noise type. We then use a Gaussian Mixture Model (GMM) to divide the entire training set into labeled and unlabeled subsets, followed by robust training with semi-supervised learning (SSL) techniques. Furthermore, we propose a recalibration strategy based on the cross-entropy (CE) loss to prevent the models from converging to local optima during SSL, further improving performance. Ablation experiments on CIFAR-10 with 50% symmetric noise and 40% asymmetric noise show that the two modules introduced in this paper improve the accuracy of the baseline (DivideMix) by 1.5% and 0.5%, respectively. Moreover, experimental results on multiple benchmark datasets show that the proposed method effectively mitigates the impact of noisy labels and significantly improves the performance of DNNs on noisy datasets; for instance, on the WebVision dataset, it improves top-1 accuracy by 0.7% and 2.4% over the baseline method.
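To make the selection step concrete, the sketch below illustrates the two core ingredients named in the abstract: a mixup of inputs and labels used during warm-up training, and a two-component GMM fit to per-sample losses that separates the training set into a clean (labeled) subset and a noisy (unlabeled) subset for SSL. This is a minimal Python sketch assuming a DivideMix-style pipeline; the function names mixup_batch and gmm_split, the Beta parameter alpha=4.0, and the 0.5 clean-probability threshold are illustrative assumptions, not the authors' implementation.

import numpy as np
import torch
from sklearn.mixture import GaussianMixture

def mixup_batch(x, y_onehot, alpha=4.0):
    # Standard mixup: blend each sample (and its one-hot label) with a
    # randomly permuted partner using a Beta(alpha, alpha) coefficient.
    lam = np.random.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)            # keep the larger weight on the original sample
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[idx]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[idx]
    return x_mix, y_mix

def gmm_split(per_sample_loss, clean_threshold=0.5):
    # Fit a two-component GMM to normalized per-sample losses; the component
    # with the smaller mean loss is treated as the "clean" (labeled) subset,
    # the remainder as the "noisy" (unlabeled) subset used for SSL.
    losses = np.asarray(per_sample_loss, dtype=np.float64).reshape(-1, 1)
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, max_iter=20, reg_covar=5e-4)
    gmm.fit(losses)
    prob_clean = gmm.predict_proba(losses)[:, gmm.means_.argmin()]
    labeled_mask = prob_clean > clean_threshold
    return labeled_mask, prob_clean

In a DivideMix-style loop, the per-sample losses of one network would be split by gmm_split and the resulting subsets used to train the peer network with SSL; the CE-loss recalibration step described in the abstract is not shown here.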

Keywords: deep neural networks; noisy labels; semi-supervised learning; image classification
JEL-codes: C
Date: 2024

Downloads: (external link)
https://www.mdpi.com/2227-7390/12/15/2389/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/15/2389/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:15:p:2389-:d:1447195


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Handle: RePEc:gam:jmathe:v:12:y:2024:i:15:p:2389-:d:1447195