EconPapers    

CVFL: A Chain-like and Verifiable Federated Learning Scheme with Computational Efficiency Based on Lagrange Interpolation Functions

Mengnan Wang, Chunjie Cao, Xiangyu Wang, Qi Zhang, Zhaoxing Jing, Haochen Li and Jingzhang Sun
Additional contact information
Mengnan Wang: School of Computer Science and Technology, Hainan University, Haikou 570228, China
Chunjie Cao: School of Cryptology, Hainan University, Haikou 570228, China
Xiangyu Wang: School of Network and Information Security, Xidian University, Xi’an 710126, China
Qi Zhang: Faculty of Data Science, City University of Macau, Macau SAR, China
Zhaoxing Jing: School of Cryptology, Hainan University, Haikou 570228, China
Haochen Li: School of Cryptology, Hainan University, Haikou 570228, China
Jingzhang Sun: School of Cryptology, Hainan University, Haikou 570228, China

Mathematics, 2023, vol. 11, issue 21, 1-20

Abstract: Data privacy and security concerns have attracted significant attention, leading to the frequent occurrence of data silos in deep learning. Federated learning (FL) has emerged to address this issue. However, simple federated learning frameworks still face two security risks during training. First, sharing local gradients instead of private datasets does not completely eliminate the possibility of data leakage. Second, a malicious server can return inaccurate aggregation parameters by forging or simplifying the aggregation process, ultimately causing model training to fail. To address these issues and train high-performance models, we design a verifiable federated learning scheme called CVFL, in which users are organized serially to resist inference attacks, and the privacy of user datasets is further protected through serial encryption. Secure model aggregation is ensured by a verification protocol based on Lagrange interpolation functions. The serial transmission of local gradients effectively reduces the communication burden on the cloud server, and the verification protocol avoids the computational overhead of a large number of encryption and decryption operations without sacrificing model accuracy. Experimental results on the MNIST dataset show that, after 10 epochs of training with 100 users, CVFL achieves a model accuracy of 90.63% for an MLP architecture under an IID data distribution and 87.47% under a non-IID distribution; for a CNN architecture, it achieves 96.72% under IID and 93.53% under non-IID. These evaluations corroborate the practical performance of the presented scheme, with high accuracy and efficiency.
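The paper's own protocol is not reproduced in this listing; as a rough illustration of the general idea behind Lagrange-interpolation-based verification of aggregation, the sketch below (all names, parameters, and the toy scalar "gradients" are illustrative assumptions, not taken from the paper) hides each user's value as the constant term of a random polynomial, lets a server aggregate the published evaluations pointwise, and recovers the aggregate by interpolating at x = 0 to check the server's claim:

```python
from fractions import Fraction
import random

def lagrange_eval(points, x):
    """Evaluate the Lagrange interpolation polynomial through `points` at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        li = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if i != j:
                li *= Fraction(x - xj, xi - xj)
        total += yi * li
    return total

def share_gradient(grad, xs, t=2):
    """Embed `grad` as f(0) of a random degree-t polynomial and
    publish its evaluations at the public points `xs`."""
    coeffs = [Fraction(grad)] + [Fraction(random.randint(1, 100)) for _ in range(t)]
    return [(x, sum(c * x**k for k, c in enumerate(coeffs))) for x in xs]

xs = [1, 2, 3]            # public evaluation points (t + 1 of them)
grads = [4, 7, -2]        # toy scalar "gradients" of three users
shares = [share_gradient(g, xs) for g in grads]

# The server aggregates the published evaluations pointwise; users then
# interpolate the aggregated points at x = 0 to recover the true sum and
# compare it against whatever aggregate the server claims.
agg_points = [(x, sum(s[k][1] for s in shares)) for k, x in enumerate(xs)]
recovered = lagrange_eval(agg_points, 0)
assert recovered == sum(grads)   # an honest aggregation passes the check
```

A server that forges or truncates the aggregation would produce points whose interpolation at x = 0 no longer matches the claimed sum, which is the kind of inconsistency such a verification step is designed to expose.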

Keywords: deep learning; federated learning; privacy protection; verifiable aggregation (search for similar items in EconPapers)
JEL-codes: C (search for similar items in EconPapers)
Date: 2023
References: View complete reference list from CitEc
Citations:

Downloads: (external link)
https://www.mdpi.com/2227-7390/11/21/4547/pdf (application/pdf)
https://www.mdpi.com/2227-7390/11/21/4547/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:11:y:2023:i:21:p:4547-:d:1274113

Access Statistics for this article

Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-03-19
Handle: RePEc:gam:jmathe:v:11:y:2023:i:21:p:4547-:d:1274113