FedNolowe: A normalized loss-based weighted aggregation strategy for robust federated learning in heterogeneous environments
Duy-Dong Le,
Tuong-Nguyen Huynh,
Anh-Khoa Tran,
Minh-Son Dao and
Pham The Bao
PLOS ONE, 2025, vol. 20, issue 8, 1-25
Abstract:
Federated Learning supports collaborative model training across distributed clients while keeping sensitive data decentralized. However, non-independent and identically distributed (non-IID) data poses challenges such as unstable convergence and client drift. We propose Federated Normalized Loss-based Weighted Aggregation (FedNolowe; code available at https://github.com/dongld-2020/fednolowe), a new method that weights client contributions by their normalized training losses, favoring clients with lower losses to improve global model stability. Unlike prior methods that weight by dataset size or rely on resource-heavy techniques, FedNolowe employs a two-stage L1 normalization, reducing computational complexity by 40% in floating-point operations while matching state-of-the-art performance. A detailed sensitivity analysis shows that the two-stage weighting maintains stability in heterogeneous settings by mitigating the impact of extreme losses, while remaining effective in independent and identically distributed (IID) scenarios.
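To make the aggregation rule concrete, the Python sketch below computes per-client weights from reported training losses via a two-stage L1 normalization (normalize, invert so that lower loss yields higher weight, normalize again) and forms the weighted average of client parameters. The exact inversion step FedNolowe uses is defined in the paper and the linked repository; the reciprocal form, the eps guard, and the function names here are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def fednolowe_weights(losses, eps=1e-12):
    """Two-stage L1 weighting from per-client training losses (sketch).

    Stage 1: L1-normalize the raw losses so they sum to 1.
    Stage 2: invert (lower loss -> higher weight) and L1-normalize again.
    The reciprocal inversion below is an assumption for illustration.
    """
    losses = np.asarray(losses, dtype=np.float64)
    stage1 = losses / (losses.sum() + eps)   # first L1 normalization
    inverted = 1.0 / (stage1 + eps)          # favor lower-loss clients
    return inverted / inverted.sum()         # second L1 normalization

def aggregate(client_states, weights):
    """Weighted average of client parameters (dicts of numpy arrays)."""
    return {
        name: sum(w * state[name] for w, state in zip(weights, client_states))
        for name in client_states[0]
    }

# Example: three clients report training losses after a local round.
w = fednolowe_weights([0.8, 0.3, 1.5])
print(w)  # the client with loss 0.3 receives the largest weight
```

Normalizing twice, rather than inverting raw losses directly, bounds the influence of a single client with an extreme loss, which is the stability property the sensitivity analysis above examines.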
Date: 2025
Downloads:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0322766 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 22766&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0322766
DOI: 10.1371/journal.pone.0322766