
Towards Adversarially Superior Malware Detection Models: An Adversary Aware Proactive Approach using Adversarial Attacks and Defenses

Hemant Rathore, Adithya Samavedhi, Sanjay K. Sahay and Mohit Sewak
Additional contact information
Hemant Rathore: BITS Pilani, Department of CS & IS, Goa Campus
Adithya Samavedhi: BITS Pilani, Department of CS & IS, Goa Campus
Sanjay K. Sahay: BITS Pilani, Department of CS & IS, Goa Campus
Mohit Sewak: Security, Compliance Research, Microsoft R & D

Information Systems Frontiers, 2023, vol. 25, issue 2, No 9, 567-587

Abstract: The Android ecosystem (smartphones, tablets, etc.) has grown manifold in the last decade. However, the exponential surge of Android malware is threatening the ecosystem. Literature suggests that Android malware can be detected using machine and deep learning classifiers; however, these detection models might be vulnerable to adversarial attacks. This work investigates the adversarial robustness of twenty-four diverse malware detection models developed using two features and twelve learning algorithms across four categories (machine learning, bagging classifiers, boosting classifiers, and neural networks). We stepped into the adversary’s shoes and proposed two false-negative evasion attacks, namely GradAA and GreedAA, to expose vulnerabilities in the above detection models. The evasion attack agents transform malware applications into adversarial malware applications by adding minimum noise (maximum five perturbations) while maintaining the modified applications’ structural, syntactic, and behavioral integrity. These adversarial malware applications force misclassifications and are predicted as benign by the detection models. The evasion attacks achieved an average fooling rate of 83.34% (GradAA) and 99.21% (GreedAA), which reduced the average accuracy from 90.35% to 55.22% (GradAA) and 48.29% (GreedAA) across the twenty-four detection models. We also proposed two defense strategies, namely Adversarial Retraining and Correlation Distillation Retraining, as countermeasures to protect detection models from adversarial attacks. The defense strategies slightly improved the detection accuracy but drastically enhanced the adversarial robustness of detection models. Finally, investigating the robustness of malware detection models against adversarial attacks is an essential step before their real-world deployment and can help in developing adversarially superior detection models.
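To make the greedy evasion idea concrete, below is a minimal Python sketch of a GreedAA-style attack under stated assumptions: the random binary feature matrix, the random-forest detector, the feature names, and the restriction to feature additions (flipping 0 to 1 so that app functionality is preserved) are illustrative placeholders, not the authors' implementation.

# Minimal sketch of a greedy, feature-addition evasion attack in the spirit
# of GreedAA. All data, features, and the classifier are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical binary static features (e.g., Android permissions/intents).
X = rng.integers(0, 2, size=(1000, 50))
y = rng.integers(0, 2, size=1000)           # 1 = malware, 0 = benign
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def greedy_evasion(x, model, max_perturbations=5):
    """Greedily flip absent features (0 -> 1) that most lower the malware
    score; only additions are allowed so the app's behavior is preserved."""
    x_adv = x.copy()
    for _ in range(max_perturbations):
        if model.predict(x_adv.reshape(1, -1))[0] == 0:
            break                            # already classified as benign
        best_i = None
        best_score = model.predict_proba(x_adv.reshape(1, -1))[0, 1]
        for i in np.where(x_adv == 0)[0]:    # candidate features to add
            trial = x_adv.copy()
            trial[i] = 1
            score = model.predict_proba(trial.reshape(1, -1))[0, 1]
            if score < best_score:
                best_i, best_score = i, score
        if best_i is None:                   # no single flip helps further
            break
        x_adv[best_i] = 1
    return x_adv

# Example: try to evade detection for one malware sample.
malware = X[y == 1][0]
adv = greedy_evasion(malware, clf)
print("original:", clf.predict(malware.reshape(1, -1))[0],
      "adversarial:", clf.predict(adv.reshape(1, -1))[0])

An adversarial-retraining defense along the lines the paper proposes would then augment the training set with such adversarial samples, correctly labeled as malware, and refit the detector.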

Keywords: Adversarial Robustness; Malware Detection; Machine Learning; Static Analysis
Date: 2023

Downloads: http://link.springer.com/10.1007/s10796-022-10331-z (abstract, text/html)
Access to the full text of the articles in this series is restricted.



Persistent link: https://EconPapers.repec.org/RePEc:spr:infosf:v:25:y:2023:i:2:d:10.1007_s10796-022-10331-z

Ordering information: This journal article can be ordered from
http://www.springer.com/journal/10796

DOI: 10.1007/s10796-022-10331-z


Information Systems Frontiers is currently edited by Ram Ramesh and Raghav Rao


 
Handle: RePEc:spr:infosf:v:25:y:2023:i:2:d:10.1007_s10796-022-10331-z