Federated Adversarial Training Strategies for Achieving Privacy and Security in Sustainable Smart City Applications

Sapdo Utomo, Adarsh Rouniyar, Hsiu-Chun Hsu and Pao-Ann Hsiung
Additional contact information
Sapdo Utomo: Graduate Institute of Ambient Intelligence and Smart Systems, National Chung Cheng University, Chiayi 621301, Taiwan
Adarsh Rouniyar: Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 621301, Taiwan
Hsiu-Chun Hsu: Department of Information Management, National Chung Cheng University, Chiayi 621301, Taiwan
Pao-Ann Hsiung: Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 621301, Taiwan

Future Internet, 2023, vol. 15, issue 11, 1-25

Abstract: Smart city applications that request sensitive user information necessitate a comprehensive data privacy solution. Federated learning (FL), often characterized as privacy by design, is a new paradigm in machine learning (ML). However, FL models are susceptible to adversarial attacks, like other AI models. In this paper, we propose federated adversarial training (FAT) strategies to generate robust global models that are resistant to adversarial attacks. We apply two adversarial attack methods, projected gradient descent (PGD) and the fast gradient sign method (FGSM), to our air pollution dataset to generate adversarial samples. We then evaluate the effectiveness of our FAT strategies in defending against these attacks. Our experiments show that FGSM-based adversarial attacks have a negligible impact on the accuracy of global models, while PGD-based attacks are more effective. However, we also show that our FAT strategies can make global models robust enough to withstand even PGD-based attacks. For example, the accuracy of our FAT-PGD and FL-mixed-PGD models is 81.13% and 82.60%, respectively, compared to 91.34% for the baseline FL model. This represents an accuracy reduction of about 10 percentage points, which could potentially be mitigated by using a larger and more complex model. Our results demonstrate that FAT can enhance the security and privacy of sustainable smart city applications. We also show that it is possible to train robust global models from modest datasets per client, which challenges the conventional wisdom that adversarial training requires massive datasets.
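
The attack methods named in the abstract can be illustrated concretely. Below is a minimal, self-contained sketch (not the authors' implementation) of how FGSM and PGD adversarial samples are commonly generated in PyTorch; in a FAT setup along the lines described above, each client would craft such samples from its local data and train on adversarial or mixed clean/adversarial batches before the server aggregates the updated weights (e.g., via FedAvg). The function names and hyperparameter values (epsilon, step size, iteration count) are illustrative assumptions, not values taken from the paper.

    # Illustrative sketch only: FGSM and PGD adversarial-sample generation
    # in PyTorch. Names and hyperparameters (epsilon, alpha, steps) are
    # assumptions for illustration, not taken from the paper.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # One-step FGSM: move x by epsilon in the direction of the sign of
        # the loss gradient with respect to the input.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        return (x_adv + epsilon * grad.sign()).detach()

    def pgd_attack(model, x, y, epsilon=0.03, alpha=0.01, steps=10):
        # Multi-step PGD: repeat small FGSM-style steps of size alpha and
        # project the perturbation back into the epsilon-ball around x.
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x + torch.clamp(x_adv - x, -epsilon, epsilon)
        return x_adv.detach()

    # In adversarial training, each local client step would replace (or mix)
    # clean batches with pgd_attack(model, x, y) outputs before computing
    # the training loss; the server then averages the client weights.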

Keywords: sustainable smart cities; federated learning; adversarial attack; privacy protection; robust model (search for similar items in EconPapers)
JEL-codes: O3 (search for similar items in EconPapers)
Date: 2023
References: View complete reference list from CitEc

Downloads: (external link)
https://www.mdpi.com/1999-5903/15/11/371/pdf (application/pdf)
https://www.mdpi.com/1999-5903/15/11/371/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:15:y:2023:i:11:p:371-:d:1283795

Future Internet is currently edited by Ms. Grace You

More articles in Future Internet from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Handle: RePEc:gam:jftint:v:15:y:2023:i:11:p:371-:d:1283795