SAFE-MED for Privacy-Preserving Federated Learning in IoMT via Adversarial Neural Cryptography
Mohammad Zubair Khan,
Waseem Abbass,
Nasim Abbas,
Muhammad Awais Javed,
Abdulrahman Alahmadi and
Uzma Majeed
Additional contact information
Mohammad Zubair Khan: Department of Computer Science and Information, Taibah University Madinah, Madinah 42353, Saudi Arabia
Waseem Abbass: Department of Electrical and Computer Engineering, Capital University of Science and Technology (CUST), Islamabad 45750, Pakistan
Nasim Abbas: Department of Computer Science, Muslim Youth University, Islamabad 45750, Pakistan
Muhammad Awais Javed: Department of Electrical and Computer Engineering, COMSATS University, Islamabad 45550, Pakistan
Abdulrahman Alahmadi: Department of Computer Science and Information, Taibah University Madinah, Madinah 42353, Saudi Arabia
Uzma Majeed: Department of Computer Science and Automation, Technische Universität Ilmenau, 98693 Ilmenau, Germany
Mathematics, 2025, vol. 13, issue 18, 1-49
Abstract:
Federated learning (FL) offers a promising paradigm for distributed model training in Internet of Medical Things (IoMT) systems, where patient data privacy and device heterogeneity are critical concerns. However, conventional FL remains vulnerable to gradient leakage, model poisoning, and adversarial inference, particularly in privacy-sensitive and resource-constrained medical environments. To address these challenges, we propose SAFE-MED, a secure and adversarially robust framework for privacy-preserving FL tailored for IoMT deployments. SAFE-MED integrates neural encryption, adversarial co-training, anomaly-aware gradient filtering, and trust-weighted aggregation into a unified learning pipeline. The encryption and decryption components are jointly optimized with a simulated adversary under a minimax objective, ensuring high reconstruction fidelity while suppressing inference risk. To enhance robustness, the system dynamically adjusts client influence based on behavioral trust metrics and detects malicious updates using entropy-based anomaly scores. Comprehensive experiments are conducted on three representative medical datasets: Cleveland Heart Disease (tabular), MIT-BIH Arrhythmia (ECG time series), and PhysioNet Respiratory Signals. SAFE-MED achieves near-baseline accuracy with less than 2% degradation, while reducing gradient leakage by up to 85% compared to vanilla FedAvg and over 66% compared to recent neural cryptographic FL baselines. The framework maintains over 90% model accuracy under 20% poisoning attacks and reduces communication cost by 42% relative to homomorphic encryption-based methods. SAFE-MED demonstrates strong scalability, reliable convergence, and practical runtime efficiency across heterogeneous network conditions. These findings validate its potential as a secure, efficient, and deployable FL solution for next-generation medical AI applications.
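The abstract's trust-weighted aggregation with entropy-based anomaly filtering can be sketched roughly as follows. This is a minimal illustration, not the paper's exact method: the Shannon-entropy score over a histogram of each client's flattened update, the z-score cutoff, and the function names are all illustrative assumptions.

```python
import numpy as np

def entropy_score(update, bins=32):
    """Shannon entropy (bits) of a client's flattened update.
    Low-entropy updates (e.g., near-constant poisoned gradients)
    stand out as anomalies under this illustrative score."""
    hist, _ = np.histogram(update, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def trust_weighted_aggregate(updates, trust, z_thresh=2.0):
    """Aggregate client updates: drop entropy-score outliers,
    then average the survivors weighted by their trust metrics."""
    scores = np.array([entropy_score(u) for u in updates])
    mu, sigma = scores.mean(), scores.std() + 1e-12
    keep = np.abs(scores - mu) / sigma < z_thresh   # anomaly filter
    w = np.asarray(trust, dtype=float) * keep        # zero out anomalies
    w = w / w.sum()                                  # renormalize trust weights
    return np.sum([wi * u for wi, u in zip(w, np.stack(updates))], axis=0)
```

With nine benign Gaussian updates and one constant (poisoned) update, the constant update's near-zero entropy makes it a z-score outlier, so it is excluded and the aggregate stays close to the benign mean.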
Keywords: federated learning; Internet of Medical Things (IoMT); privacy preservation; neural cryptography; adversarial robustness; trust-based aggregation
JEL-codes: C
Date: 2025
Downloads:
https://www.mdpi.com/2227-7390/13/18/2954/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/18/2954/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:18:p:2954-:d:1748145
Mathematics is currently edited by Ms. Emma He
Bibliographic data for series maintained by MDPI Indexing Manager.