Mitigating Membership Inference Attacks via Generative Denoising Mechanisms

Zhijie Yang, Xiaolong Yan, Guoguang Chen and Xiaoli Tian
Additional contact information
Zhijie Yang: College of Mechatronic Engineering, North University of China, No. 3 Xueyuan Road, Taiyuan 030051, China
Xiaolong Yan: College of Mechatronic Engineering, North University of China, No. 3 Xueyuan Road, Taiyuan 030051, China
Guoguang Chen: College of Mechatronic Engineering, North University of China, No. 3 Xueyuan Road, Taiyuan 030051, China
Xiaoli Tian: College of Mechatronic Engineering, North University of China, No. 3 Xueyuan Road, Taiyuan 030051, China

Mathematics, 2025, vol. 13, issue 19, 1-25

Abstract: Membership Inference Attacks (MIAs) pose a significant threat to privacy in modern machine learning systems, enabling adversaries to determine whether a specific data record was used during model training. Existing defense techniques often degrade model utility or rely on heuristic noise injection, which fails to provide a robust, mathematically grounded defense. In this paper, we propose Diffusion-Driven Data Preprocessing (D3P), a novel privacy-preserving framework that leverages generative diffusion models to transform sensitive training data before learning, thereby reducing the susceptibility of trained models to MIAs. Our method integrates a mathematically rigorous denoising process into a privacy-oriented diffusion pipeline, ensuring that the reconstructed data retains the semantic features essential for model utility while obfuscating the fine-grained patterns that MIAs exploit. We further introduce a privacy–utility optimization strategy grounded in formal probabilistic analysis, enabling adaptive control of the diffusion noise schedule to balance attack resilience and predictive performance. Experimental evaluations across multiple datasets and architectures demonstrate that D3P reduces MIA success rates by up to 42.3% compared to state-of-the-art defenses, with less than a 2.5% loss in accuracy. This work provides a theoretically principled and empirically validated pathway for integrating diffusion-based generative mechanisms into privacy-preserving AI pipelines, making it particularly suitable for deployment in cloud-based and blockchain-enabled machine learning environments.
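
The following is a minimal, illustrative sketch (Python/PyTorch) of the "diffuse-then-denoise" preprocessing idea described in the abstract: training records are perturbed by a forward diffusion step and then partially denoised before any model sees them, with a single noise-level parameter standing in for the paper's adaptive noise schedule. The names (forward_diffuse, toy_denoiser, d3p_preprocess, noise_level) and the closed-form shrinkage denoiser are illustrative assumptions, not the paper's actual pipeline, which would use a trained diffusion model.

# Sketch of a diffuse-then-denoise preprocessing step in the spirit of D3P.
# toy_denoiser is a hypothetical stand-in for a pretrained diffusion denoiser;
# noise_level plays the role of the adaptive noise-schedule knob that trades
# privacy (obfuscation of record-specific detail) against utility.
import torch

def forward_diffuse(x, noise_level):
    # Forward noising: x_t = sqrt(1 - s) * x + sqrt(s) * eps, with s = noise_level.
    eps = torch.randn_like(x)
    return (1.0 - noise_level) ** 0.5 * x + noise_level ** 0.5 * eps

def toy_denoiser(x_t, noise_level):
    # Crude shrinkage toward the signal; a real pipeline would run a
    # pretrained diffusion model's reverse (denoising) process here.
    return (1.0 - noise_level) ** 0.5 * x_t

def d3p_preprocess(batch, noise_level=0.3):
    # Obfuscate fine-grained, record-specific detail before the batch is used for training.
    x_t = forward_diffuse(batch, noise_level)
    return toy_denoiser(x_t, noise_level)

if __name__ == "__main__":
    images = torch.rand(8, 3, 32, 32)        # e.g., a CIFAR-like mini-batch
    sanitized = d3p_preprocess(images, 0.3)  # larger noise_level => more privacy, less utility
    print(sanitized.shape)                   # torch.Size([8, 3, 32, 32])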

Keywords: machine learning; deep learning; privacy protection; diffusion model
JEL-codes: C
Date: 2025

Downloads: (external link)
https://www.mdpi.com/2227-7390/13/19/3070/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/19/3070/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:19:p:3070-:d:1756967

Handle: RePEc:gam:jmathe:v:13:y:2025:i:19:p:3070-:d:1756967