FedUB: Federated Learning Algorithm Based on Update Bias
Hesheng Zhang,
Ping Zhang,
Mingkai Hu,
Muhua Liu and
Jiechang Wang
Additional contact information
Hesheng Zhang: School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, China
Ping Zhang: School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, China
Mingkai Hu: School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, China
Muhua Liu: School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, China
Jiechang Wang: Sports Big Data Center, Department of Physical Education, Zhengzhou University, Zhengzhou 450001, China
Mathematics, 2024, vol. 12, issue 10, 1-26
Abstract:
Federated learning, as a distributed machine learning framework, aims to protect data privacy while addressing the issue of data silos by collaboratively training models across multiple clients. However, a significant challenge to federated learning arises from the non-independent and identically distributed (non-IID) nature of data across clients. Non-IID data can make the loss minimized by individual clients inconsistent with the global loss observed after the central server aggregates the local models, slowing convergence and weakening generalization. To address this challenge, we propose a novel federated learning algorithm based on update bias (FedUB). Unlike traditional approaches such as FedAvg and FedProx, which update model parameters independently on each client and then aggregate them directly into a global model, FedUB incorporates an update bias into the loss function of each local model, namely the difference between each round’s local model update and the global model update. This term reduces the discrepancy between local and global updates, aligning the parameters of the locally updated models more closely with those of the globally aggregated model and thereby mitigating the conflict between local and global optima. Additionally, during server-side aggregation, we introduce a bias metric that assesses the similarity between each client’s local model and the global model; this metric adaptively sets each client’s aggregation weight after every training round to obtain a better global model. Extensive experiments on multiple datasets confirm the effectiveness of FedUB. The results indicate that FedUB generally outperforms methods such as FedDC, FedDyn, and SCAFFOLD, especially under partial client participation and non-IID data distributions, demonstrating superior performance and faster convergence in tasks such as image classification.
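The abstract describes two mechanisms: an update-bias penalty added to each client’s local loss, and a similarity-based bias metric that sets per-client aggregation weights on the server. The following is a minimal PyTorch sketch of those two ideas as summarized above; the exact penalty form, the coefficient mu, the choice of cosine similarity for the bias metric, and all function names are assumptions for illustration, not the paper’s published formulas.

```python
# Minimal sketch of the two FedUB mechanisms described in the abstract.
# The squared-difference penalty, the coefficient `mu`, and cosine
# similarity as the "bias metric" are assumptions, not the paper's formulas.
import torch
import torch.nn.functional as F

def local_loss(model, batch, prev_local, prev_global, curr_global, mu=0.1):
    """Task loss plus an update-bias penalty: the squared gap between this
    client's update and the most recent global update (form assumed)."""
    x, y = batch
    task = F.cross_entropy(model(x), y)
    bias = 0.0
    for p, pl, pg, g in zip(model.parameters(), prev_local,
                            prev_global, curr_global):
        local_update = p - pl         # client's update since the round began
        global_update = g - pg        # server's update over the last round
        bias = bias + ((local_update - global_update) ** 2).sum()
    return task + mu * bias

def aggregate(global_params, client_params):
    """Server step: weight each client by its similarity to the current
    global model (the 'bias metric'), then form the new global model."""
    g = torch.cat([p.flatten() for p in global_params])
    sims = torch.stack([
        F.cosine_similarity(torch.cat([p.flatten() for p in cp]), g, dim=0)
        for cp in client_params
    ])
    w = torch.softmax(sims, dim=0)    # adaptive, similarity-based weights
    return [sum(w[i] * cp[j] for i, cp in enumerate(client_params))
            for j in range(len(global_params))]
```

In a full training loop, the client would snapshot its parameters and the received global parameters at the start of each round to supply prev_local and prev_global, and mu would trade off the task loss against the update-bias term.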
Keywords: federated learning; update bias; adaptive weights; data heterogeneity; loss function; secure aggregation
JEL-codes: C
Date: 2024
Downloads:
https://www.mdpi.com/2227-7390/12/10/1601/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/10/1601/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:10:p:1601-:d:1398118
Mathematics is currently edited by Ms. Emma He
More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.