EconPapers    

Robust deep learning object recognition models rely on low frequency information in natural images

Zhe Li, Josue Ortega Caro, Evgenia Rusak, Wieland Brendel, Matthias Bethge, Fabio Anselmi, Ankit B Patel, Andreas S Tolias and Xaq Pitkow

PLOS Computational Biology, 2023, vol. 19, issue 3, 1-15

Abstract: Machine learning models have difficulty generalizing to data outside of the distribution they were trained on. In particular, vision models are usually vulnerable to adversarial attacks or common corruptions, to which the human visual system is robust. Recent studies have found that regularizing machine learning models to favor brain-like representations can improve model robustness, but it is unclear why. We hypothesize that the increased model robustness is partly due to the low spatial frequency preference inherited from the neural representation. We tested this simple hypothesis with several frequency-oriented analyses, including the design and use of hybrid images to probe model frequency sensitivity directly. We also examined many other publicly available robust models that were trained on adversarial images or with data augmentation, and found that all these robust models showed a greater preference for low spatial frequency information. We show that preprocessing by blurring can serve as a defense mechanism against both adversarial attacks and common corruptions, further confirming our hypothesis and demonstrating the utility of low spatial frequency information in robust object recognition.

Author summary: Though artificial intelligence has achieved high performance on various vision tasks, its ability to generalize to out-of-distribution data is limited. Most remarkably, machine learning models are extremely sensitive to input perturbations such as adversarial attacks and common corruptions. Previous studies have observed that imposing an inductive bias towards brain-like representations can improve the robustness of models, but the reasons underlying this benefit were left unknown. In this work, we propose and test the hypothesis that the robustness of brain-like models can be accounted for by a low frequency feature preference inherited from the neural representation. We designed a novel machine learning task to probe the frequency bias of different models and observed a strong correlation between frequency bias and model robustness. We believe this work serves as a first step towards understanding why biological visual systems generalize well to out-of-distribution data, and it provides an explanation for the robustness of state-of-the-art machine learning models trained with various methods. It also opens the door to applying computational principles of the brain in artificial intelligence, hence helping to overcome the fundamental difficulties faced by current AI methods.
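The blur-based defense described in the abstract amounts to low-pass filtering the input before classification. As an illustration only (this is a minimal NumPy sketch, not the authors' code; the function names and parameters are assumptions), a separable Gaussian blur can be written as:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=None):
    # 1-D Gaussian kernel, normalized to sum to 1.
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(image, sigma=1.0):
    # Separable Gaussian blur: convolve rows, then columns.
    # This kind of low-pass preprocessing is what the paper reports
    # as a defense against adversarial attacks and common corruptions.
    k = gaussian_kernel_1d(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out
```

In the defense setting, `blur(x, sigma)` would simply be applied to each input image before it is passed to the classifier, attenuating the high-frequency components that adversarial perturbations typically exploit.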

Date: 2023
References: View complete reference list from CitEc

Downloads: (external link)
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1010932 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 10932&type=printable (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1010932

DOI: 10.1371/journal.pcbi.1010932

More articles in PLOS Computational Biology from Public Library of Science
Bibliographic data for series maintained by ploscompbiol.

 
Page updated 2025-05-03
Handle: RePEc:plo:pcbi00:1010932