Distributed Collaborative Learning with Representative Knowledge Sharing
Joseph Casey,
Qianjiao Chen,
Mengchen Fan,
Baocheng Geng,
Roman Shterenberg,
Zhong Chen and
Keren Li
Additional contact information
Joseph Casey: Department of Mathematics, University of Alabama at Birmingham, Birmingham, AL 35294, USA
Qianjiao Chen: Department of Mathematics, University of Alabama at Birmingham, Birmingham, AL 35294, USA
Mengchen Fan: Department of Computer Science, University of Alabama at Birmingham, Birmingham, AL 35294, USA
Baocheng Geng: Department of Computer Science, University of Alabama at Birmingham, Birmingham, AL 35294, USA
Roman Shterenberg: Department of Mathematics, University of Alabama at Birmingham, Birmingham, AL 35294, USA
Zhong Chen: School of Computing, Southern Illinois University, Carbondale, IL 62901, USA
Keren Li: Department of Mathematics, University of Alabama at Birmingham, Birmingham, AL 35294, USA
Mathematics, 2025, vol. 13, issue 6, 1-20
Abstract:
Distributed Collaborative Learning (DCL) addresses critical challenges in privacy-aware machine learning by enabling indirect knowledge transfer across nodes with heterogeneous feature distributions. Unlike conventional federated learning approaches, DCL assumes non-IID data and prediction tasks that extend beyond each node's local training data, requiring selective collaboration to achieve generalization. In this work, we propose a novel collaborative transfer learning (CTL) framework that uses representative datasets and adaptive distillation weights to enable efficient and privacy-preserving collaboration. By leveraging Energy Coefficients to quantify node similarity, CTL dynamically selects optimal collaborators and refines local models through knowledge distillation on shared representative datasets. Simulations demonstrate the efficacy of CTL in improving prediction accuracy across diverse tasks while balancing trade-offs between local and global performance. Furthermore, we explore the impact of data spread and dispersion on collaboration, highlighting the importance of tailored node alignment. This framework provides a scalable foundation for cross-domain generalization in distributed machine learning.
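The abstract describes the mechanism only at a high level, so the following Python (PyTorch) sketch is one plausible reading of a single node's update, not the paper's actual algorithm. The function energy_coefficients below is a hypothetical stand-in that scores peer similarity via a softmax over negative prediction distances on the shared representative dataset; the paper's Energy Coefficient may be defined differently. The names rep_x, peer_logits_list, alpha, and temperature, and the way local and distillation losses are combined, are illustrative assumptions.

import torch
import torch.nn.functional as F

def energy_coefficients(local_logits, peer_logits_list, temperature=1.0):
    # Stand-in similarity score: softmax over negative mean squared distance
    # between soft predictions on the shared representative dataset.
    local_p = F.softmax(local_logits / temperature, dim=1)
    dists = torch.stack([
        ((local_p - F.softmax(pl / temperature, dim=1)) ** 2).mean()
        for pl in peer_logits_list
    ])
    return F.softmax(-dists, dim=0)  # larger weight for more similar peers

def distillation_step(model, optimizer, rep_x, peer_logits_list, coeffs,
                      local_x, local_y, temperature=2.0, alpha=0.5):
    # One local refinement step: supervised loss on the node's own data plus
    # a distillation loss toward selected collaborators' soft predictions on
    # the shared representative dataset, weighted by the similarity scores.
    model.train()
    optimizer.zero_grad()
    local_loss = F.cross_entropy(model(local_x), local_y)
    student_logp = F.log_softmax(model(rep_x) / temperature, dim=1)
    distill_loss = sum(
        w * F.kl_div(student_logp,
                     F.softmax(pl / temperature, dim=1),
                     reduction="batchmean")
        for w, pl in zip(coeffs, peer_logits_list)
    ) * temperature ** 2
    loss = alpha * local_loss + (1 - alpha) * distill_loss
    loss.backward()
    optimizer.step()
    return loss.item()

In this reading, only predictions on the representative dataset are exchanged between nodes, which is consistent with the privacy-preserving, indirect knowledge transfer the abstract emphasizes.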
Keywords: collaborative transfer learning; knowledge distillation; contrastive learning; federated learning
JEL-codes: C
Date: 2025
Downloads:
https://www.mdpi.com/2227-7390/13/6/1004/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/6/1004/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:6:p:1004-:d:1615998