A Secure and Fair Federated Learning Framework Based on Consensus Incentive Mechanism
Feng Zhu,
Feng Hu,
Yanchao Zhao,
Bing Chen and
Xiaoyang Tan
Additional contact information
All authors: College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Mathematics, 2024, vol. 12, issue 19, 1-19
Abstract:
Federated learning facilitates collaborative computation among multiple participants while safeguarding user privacy. However, current federated learning algorithms assume that all participants are trustworthy and that their systems are secure. Real-world scenarios present several challenges: (1) Malicious clients disrupt federated learning through model poisoning and data poisoning attacks. Although some research has proposed secure aggregation methods to address this issue, many of these methods have inherent limitations. (2) Clients may refuse to participate, or participate only passively, out of self-interest, and may even interfere with training because of competitive relationships. To overcome these obstacles, we devise a reliable federated framework that ensures secure computing throughout the entire federated task process. First, we propose a method for detecting malicious models to safeguard the integrity of model aggregation. Second, we propose a fair contribution assessment method and award block-writing rights to the creator of the optimal model, ensuring that participants engage actively in both local training and model aggregation. Finally, we establish a computational framework grounded in blockchain and smart contracts to uphold the integrity and fairness of federated tasks. To assess the efficacy of our framework, we conduct simulations involving various types of client attacks and contribution assessment scenarios on multiple open-source datasets. The results demonstrate that our framework effectively ensures the credibility of federated tasks while achieving impartial evaluation of client contributions.
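The abstract does not specify the paper's detection or contribution rules, but the general idea it describes — filtering suspicious client updates before aggregation and scoring each client's contribution to the aggregate — can be sketched as follows. This is a minimal illustration under assumed design choices (cosine-similarity filtering against a coordinate-wise median reference, and alignment-based contribution scores); the function names and thresholds are hypothetical, not the authors' method.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two flattened model updates.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def filter_and_aggregate(updates, sim_threshold=0.0):
    """Hypothetical malicious-model detection: drop client updates whose
    cosine similarity to the coordinate-wise median update falls below
    sim_threshold, then average the surviving updates."""
    ref = np.median(np.stack(updates), axis=0)  # robust reference direction
    kept = [u for u in updates if cosine(u, ref) >= sim_threshold]
    return np.mean(np.stack(kept), axis=0), len(kept)

def contribution_scores(updates, aggregate):
    """Hypothetical contribution measure: each client's alignment with the
    aggregated update, clipped at zero and normalised to sum to 1."""
    raw = np.array([max(cosine(u, aggregate), 0.0) for u in updates])
    if raw.sum() == 0:
        return np.full(len(updates), 1.0 / len(updates))
    return raw / raw.sum()
```

For example, three honest clients pushing in roughly the same direction and one poisoned client pushing in the opposite direction: the poisoned update has negative similarity to the median reference, so it is excluded from aggregation and receives a contribution score of zero, while the honest clients split the credit.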
Keywords: federated learning; blockchain; malicious model detection; contribution evaluation
JEL-codes: C
Date: 2024
Downloads:
https://www.mdpi.com/2227-7390/12/19/3068/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/19/3068/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:19:p:3068-:d:1489437
Mathematics is currently edited by Ms. Emma He
Bibliographic data for series maintained by MDPI Indexing Manager.