Communication-Efficient Zeroth-Order Adaptive Optimization for Federated Learning

Ping Xie, Xiangrui Gao, Fan Li, Ling Xing, Yu Zhang and Hanxiao Sun
Additional contact information
All authors: School of Information Engineering, Henan University of Science and Technology, Luoyang 471023, China

Mathematics, 2024, vol. 12, issue 8, 1-21

Abstract: Federated learning (FL) has become a prevalent distributed training paradigm in which local devices collaboratively train learning models without exchanging local data. One of the most dominant FL frameworks is FedAvg, since it is efficient and simple to implement; in FedAvg, first-order gradient information is generally used to train the model parameters. In practice, however, gradient information may be unavailable or infeasible to obtain in some applications, such as federated black-box optimization problems. To address this issue, we propose an innovative zeroth-order adaptive federated learning algorithm that requires no gradient information, referred to as ZO-AdaFL, which integrates zeroth-order optimization into the adaptive gradient method. Moreover, we rigorously analyze the convergence behavior of ZO-AdaFL in the non-convex setting, showing that it converges to a region close to a stationary point at a speed of O(1/T), where T is the total number of iterations. Finally, to verify the performance of ZO-AdaFL, simulation experiments are performed on the MNIST and FMNIST datasets. Our experimental findings demonstrate that ZO-AdaFL outperforms other state-of-the-art zeroth-order FL approaches in terms of both effectiveness and efficiency.
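The abstract's core idea, estimating gradients from function evaluations alone and feeding those estimates into an adaptive update, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' exact ZO-AdaFL algorithm: the two-point Gaussian-direction estimator, the AdaGrad-style update, and all step sizes and function names here are hypothetical choices for demonstration.

```python
# Illustrative sketch of zeroth-order gradient estimation combined with
# an adaptive update (assumed details; not the paper's exact method).
import random

random.seed(0)  # fixed seed so the demonstration is reproducible

def zo_gradient(f, x, mu=1e-4):
    """Two-point zeroth-order gradient estimate of f at x.

    Samples a random unit direction u and uses
    d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u
    as a gradient surrogate, so only function values are needed.
    """
    d = len(x)
    u = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = sum(v * v for v in u) ** 0.5
    u = [v / norm for v in u]
    fp = f([xi + mu * ui for xi, ui in zip(x, u)])
    fm = f([xi - mu * ui for xi, ui in zip(x, u)])
    coeff = d * (fp - fm) / (2 * mu)
    return [coeff * ui for ui in u]

def adaptive_step(x, g, v, lr=0.5, eps=1e-8):
    """One AdaGrad-style step: scale each coordinate by accumulated
    squared (estimated) gradients, as adaptive gradient methods do."""
    v = [vi + gi * gi for vi, gi in zip(v, g)]
    x = [xi - lr * gi / (vi ** 0.5 + eps) for xi, gi, vi in zip(x, g, v)]
    return x, v

# Usage: minimize a simple quadratic without ever calling its gradient.
f = lambda x: sum(xi * xi for xi in x)
x, v = [1.0, -2.0], [0.0, 0.0]
for _ in range(200):
    g = zo_gradient(f, x)
    x, v = adaptive_step(x, g, v)
print(f(x))  # the loss shrinks using function values only
```

In a federated setting, each client would run such zeroth-order estimates on its local loss and the server would aggregate the resulting updates; the sketch above shows only the single-machine estimation-plus-adaptive-step building block.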

Keywords: black-box optimization; convergence rate; federated learning; gradient information; zeroth-order adaptive algorithm
JEL-codes: C
Date: 2024

Downloads: (external link)
https://www.mdpi.com/2227-7390/12/8/1148/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/8/1148/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:8:p:1148-:d:1373834


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-03-19
Handle: RePEc:gam:jmathe:v:12:y:2024:i:8:p:1148-:d:1373834