On the Steganographic Capacity of Selected Learning Models
Rishit Agrawal,
Kelvin Jou,
Tanush Obili,
Daksh Parikh,
Samarth Prajapati,
Yash Seth,
Charan Sridhar,
Nathan Zhang and
Mark Stamp
Additional contact information: all authors are affiliated with San Jose State University.
A chapter in Machine Learning, Deep Learning and AI for Cybersecurity, Springer, 2025, pp. 457–491
Abstract:
Machine learning and deep learning models are potential vectors for various attack scenarios. For example, previous research has shown that malware can be hidden in deep learning models. Hiding information in a learning model can be viewed as a form of steganography. In this research, we consider the general question of the steganographic capacity of learning models. Specifically, for a wide range of models, we determine the number of low-order bits of the trained parameters that can be overwritten without adversely affecting model performance. For each model considered, we graph the accuracy as a function of the number of low-order bits that have been overwritten, and for selected models, we also analyze the steganographic capacity of individual layers. The models that we test include classic machine learning techniques, popular general deep learning models, pre-trained transfer learning-based models, and others. In all cases, we find that a majority of the bits of each trained parameter can be overwritten before the accuracy degrades. Of the models tested, the steganographic capacity ranges from 7.04 KB to 44.74 MB. We discuss the implications of our results and consider possible avenues for further research.
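The core idea described in the abstract — overwriting the low-order bits of trained floating-point parameters with hidden data — can be sketched as follows. This is an illustrative reconstruction, not the chapter's actual code; the function name, NumPy representation, and bit layout are our assumptions.

```python
import numpy as np

def overwrite_low_order_bits(weights, n_bits, payload_bits):
    """Illustrative sketch: embed payload_bits (a list of 0/1 values) into
    the n_bits low-order mantissa bits of each float32 weight.

    This is an assumed implementation of the general technique described
    in the abstract, not the authors' code."""
    # View the float32 weights as their raw uint32 bit patterns.
    raw = np.asarray(weights, dtype=np.float32).view(np.uint32).copy()
    # Mask that preserves the high-order bits and zeroes the low n_bits.
    mask = np.uint32((0xFFFFFFFF << n_bits) & 0xFFFFFFFF)
    for i in range(raw.size):
        # Pack the next n_bits of the payload into one chunk per weight.
        chunk = 0
        for b in range(n_bits):
            idx = i * n_bits + b
            if idx < len(payload_bits):
                chunk |= payload_bits[idx] << b
        raw.flat[i] = (raw.flat[i] & mask) | np.uint32(chunk)
    # Reinterpret the modified bit patterns as float32 weights again.
    return raw.view(np.float32)
```

Because the overwritten bits are the least significant mantissa bits, each weight changes only slightly, which is why accuracy can survive substantial embedding; the experiments in the chapter measure exactly how many such bits each model tolerates.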
Date: 2025
Persistent link: https://EconPapers.repec.org/RePEc:spr:sprchp:978-3-031-83157-7_16
Ordering information: this item can be ordered from http://www.springer.com/9783031831577
DOI: 10.1007/978-3-031-83157-7_16