Mitigating Adversarial Attacks in Object Detection through Conditional Diffusion Models

Xudong Ye, Qi Zhang, Sanshuai Cui, Zuobin Ying, Jingzhang Sun and Xia Du
Additional contact information
Xudong Ye: Faculty of Data Science, City University of Macau, Macau SAR, China
Qi Zhang: Faculty of Data Science, City University of Macau, Macau SAR, China
Sanshuai Cui: Faculty of Data Science, City University of Macau, Macau SAR, China
Zuobin Ying: Faculty of Data Science, City University of Macau, Macau SAR, China
Jingzhang Sun: School of Cyberspace Security, Hainan University, Haikou 570228, China
Xia Du: School of Computer and Information Engineering, Xiamen University of Technology, Xiamen 361024, China

Mathematics, 2024, vol. 12, issue 19, 1-18

Abstract: Object detection has advanced rapidly in recent years, driven by remarkable progress in artificial intelligence and deep learning, and these breakthroughs have substantially improved the accuracy and efficiency of detecting and categorizing objects in digital images. Nonetheless, contemporary object detection systems and their defenses still have notable limitations, including vulnerability to white-box attacks, insufficient denoising, suboptimal reconstruction, and gradient confusion. To overcome these hurdles, this study proposes an approach that uses conditional diffusion models to purify adversarial examples. The process begins by applying a random chessboard mask to the adversarial example; the forward process then adds slight noise to fill the masked area. The adversarial image is restored to its original form through a reverse generative process that operates only on the masked pixels rather than the entire image. Next, the complement of the initial mask serves as the mask for a second stage, which reconstructs the image once more. This two-stage masking process removes global perturbations entirely and aids image reconstruction. In particular, we employ a conditional diffusion model built on a class-conditional U-Net architecture, further conditioned on the source image through concatenation. Under non-APT PGD attacks, our method outperforms the recently introduced HARP method by 5% and 6.5% in mAP on the COCO2017 and PASCAL VOC datasets, respectively. Comprehensive experimental results confirm that our method effectively restores adversarial examples, demonstrating its practical utility.
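
The two-stage masked purification described in the abstract can be summarized in a short sketch. The code below is a minimal illustration, not the paper's implementation: `denoiser` stands in for the paper's class-conditional U-Net (which additionally conditions on the source image via concatenation), and the helper names, cell size, step count, and noise scale are all hypothetical choices for demonstration.

```python
# Minimal sketch of two-stage chessboard-masked purification (illustrative only).
# `denoiser`, `cell`, `steps`, and `noise_scale` are hypothetical stand-ins; the
# paper uses a class-conditional U-Net conditioned on the source image instead.
import torch


def chessboard_mask(h, w, cell=8, offset=0):
    """Binary chessboard mask: 1 marks pixels to regenerate, 0 pixels to keep."""
    ys = torch.arange(h).unsqueeze(1) // cell   # (h, 1) cell row indices
    xs = torch.arange(w).unsqueeze(0) // cell   # (1, w) cell column indices
    return ((ys + xs + offset) % 2).float()     # (h, w) alternating 0/1 cells


def purify_stage(x_adv, mask, denoiser, steps=50, noise_scale=0.1):
    """One masked pass: noise the masked pixels, then denoise only those pixels."""
    # Forward process: add slight noise inside the masked area only.
    x = x_adv + noise_scale * torch.randn_like(x_adv) * mask
    # Reverse generative process: iteratively denoise, re-clamping the known
    # (unmasked) pixels to the input after every step.
    for _ in range(steps):
        x = denoiser(x)
        x = mask * x + (1.0 - mask) * x_adv
    return x


def two_stage_purify(x_adv, denoiser, cell=8):
    """Stage 1 regenerates one set of chessboard cells; stage 2 the complement."""
    _, _, h, w = x_adv.shape
    m1 = chessboard_mask(h, w, cell).to(x_adv)  # match dtype/device of the image
    x1 = purify_stage(x_adv, m1, denoiser)
    m2 = 1.0 - m1                               # complement mask for stage 2
    return purify_stage(x1, m2, denoiser)


if __name__ == "__main__":
    # A trivial smoothing stand-in so the sketch runs end to end; it is NOT a
    # diffusion model.
    denoiser = torch.nn.AvgPool2d(kernel_size=3, stride=1, padding=1)
    x_adv = torch.rand(1, 3, 64, 64)            # placeholder "adversarial" image
    print(two_stage_purify(x_adv, denoiser).shape)  # torch.Size([1, 3, 64, 64])
```

Because the second stage uses the complement of the first chessboard mask, every pixel is regenerated exactly once while being anchored by its unmasked neighbors, which is what lets the global perturbation be removed without discarding image content.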

Keywords: object detection; diffusion model; adversarial attack; adversarial purification
JEL-codes: C
Date: 2024

Downloads: (external link)
https://www.mdpi.com/2227-7390/12/19/3093/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/19/3093/ (text/html)


Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:19:p:3093-:d:1491395


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI.

 
Handle: RePEc:gam:jmathe:v:12:y:2024:i:19:p:3093-:d:1491395