Null Space Properties of Neural Networks with Applications to Image Steganography

Xiang Li and Kevin M. Short
Affiliations
Xiang Li: Department of Mathematics, The Ohio State University, Columbus, OH 43210, USA
Kevin M. Short: Integrated Applied Mathematics Program, Department of Mathematics and Statistics, University of New Hampshire, Durham, NH 03824, USA

Mathematics, 2025, vol. 13, issue 21, 1-21

Abstract: This paper advances beyond adversarial neural network methods by considering whether the underlying mathematics of neural networks contains inherent properties that can be exploited to fool them. In broad terms, the paper treats a neural network as a series of linear transformations between layers, interspersed with nonlinear stages that serve to compress outliers. The input layer of the network is typically extremely high-dimensional, yet the final classification lies in a space of much lower dimension. This dimensional reduction leads to the existence of a null space, and the paper explores how that can be exploited. Specifically, it examines the null space properties of neural networks by extending the null space definition from linear to nonlinear maps and discussing the presence of a null space in neural networks. The null space of a neural network characterizes the component of the input data that makes no contribution to the final prediction, so it can be exploited to trick the network. One application described here leads to a method of image steganography. Through experiments on image data sets such as MNIST, it is shown that the null space components can be used to force the neural network to choose a selected hidden image class, even though the overall image can be made to look like a completely different image. The paper concludes with comparisons between what a human viewer would see and the part of the image that the neural network actually uses to make predictions, showing that what the neural network “sees” is completely different from what we would expect.
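As a rough illustration of the linear case sketched in the abstract (a hypothetical example, not code from the paper): for a single linear layer y = Wx, any component of x lying in the null space of W can be altered freely without changing y, and the paper extends this idea from linear maps to full nonlinear networks. The NumPy sketch below uses a made-up 784-to-10 weight matrix W standing in for the dimension-reducing map.

```python
import numpy as np

# Hypothetical illustration (not the authors' code): for a linear map
# y = W x, adding any null-space vector of W to x leaves y unchanged.

rng = np.random.default_rng(0)

# Assumed dimensions: 784-dimensional input (e.g. a 28x28 image)
# mapped to 10 class scores, so the null space has dimension >= 774.
W = rng.standard_normal((10, 784))

# Orthonormal basis for the null space of W via the SVD.
_, s, Vt = np.linalg.svd(W)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:].T                 # shape (784, 784 - rank)

x = rng.standard_normal(784)             # stand-in "image"
n = null_basis @ rng.standard_normal(null_basis.shape[1])  # null-space component

# The visible input changes, but the layer's output does not.
print(np.allclose(W @ x, W @ (x + n)))   # True
```

In the paper's steganographic application, such null-space components are chosen so that the overall image looks like one class to a human viewer while the network's prediction is driven by a hidden class.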

Keywords: neural networks; null space; image steganography
JEL-codes: C
Date: 2025

Downloads: (external link)
https://www.mdpi.com/2227-7390/13/21/3394/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/21/3394/ (text/html)


Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:21:p:3394-:d:1778975


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Handle: RePEc:gam:jmathe:v:13:y:2025:i:21:p:3394-:d:1778975