EconPapers
Enhance Domain-Invariant Transferability of Adversarial Examples via Distance Metric Attack

Jin Zhang, Wenyu Peng, Ruxin Wang, Yu Lin, Wei Zhou and Ge Lan
Additional contact information
Jin Zhang: Kunming Institute of Physics, Kunming 650223, China
Wenyu Peng: School of Software, Yunnan University, Kunming 650500, China
Ruxin Wang: School of Software, Yunnan University, Kunming 650500, China
Yu Lin: Kunming Institute of Physics, Kunming 650223, China
Wei Zhou: School of Software, Yunnan University, Kunming 650500, China
Ge Lan: Kunming Institute of Physics, Kunming 650223, China

Mathematics, 2022, vol. 10, issue 8, 1-15

Abstract: A general foundation of fooling a neural network without knowing its details (i.e., a black-box attack) is the transferability of adversarial examples across different models. Many works have been devoted to enhancing the task-specific transferability of adversarial examples, whereas cross-task transferability has remained nearly out of the research scope. In this paper, to enhance both types of transferability of adversarial examples, we are the first to regard the transferability issue as a heterogeneous domain generalisation problem, which can be addressed by a general pipeline based on a domain-invariant feature extractor pre-trained on ImageNet. Specifically, we propose a distance metric attack (DMA) method that aims to increase the latent-layer distance between the adversarial example and the benign example along the opposite direction guided by the cross-entropy loss. With the help of a simple loss, DMA can effectively enhance the domain-invariant transferability (for both the task-specific case and the cross-task case) of adversarial examples. Additionally, DMA can be used to measure the robustness of the latent layers in a deep model. We empirically find that models with similar structures have consistent robustness at depth-similar layers, which reveals that model robustness is closely related to model structure. Extensive experiments on image classification, object detection, and semantic segmentation demonstrate that DMA can improve the success rate of black-box attacks by more than 10% for task-specific attacks and by more than 5% for cross-task attacks.

Keywords: deep learning; distance metric; adversarial attack; cross-task; transferability
JEL-codes: C
Date: 2022
References: view references in EconPapers; view the complete reference list from CitEc
Citations: view citations in EconPapers (1)

Downloads: (external link)
https://www.mdpi.com/2227-7390/10/8/1249/pdf (application/pdf)
https://www.mdpi.com/2227-7390/10/8/1249/ (text/html)


Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:10:y:2022:i:8:p:1249-:d:791122


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager ().

Page updated 2025-03-19
Handle: RePEc:gam:jmathe:v:10:y:2022:i:8:p:1249-:d:791122