Tailored knowledge distillation with automated loss function learning
Sheng Ran, Tao Huang and Wuyue Yang
PLOS ONE, 2025, vol. 20, issue 6, 1-16
Abstract:
Knowledge distillation (KD) is one of the most effective and widely used methods for compressing large models, and much of its success stems from meticulously designed distillation losses. However, most state-of-the-art KD losses are manually crafted and task-specific, which raises questions about how much they contribute to distillation efficacy. This paper introduces Learnable Knowledge Distillation (LKD), a novel approach that autonomously learns adaptive, performance-driven distillation losses. LKD reframes KD as a bi-level optimization: an iterative procedure differentiably learns distillation losses that align with the student’s validation loss. Building on our proposed generic loss networks for logits and intermediate features, we derive a dynamic optimization strategy that adapts the losses to the student model’s changing state, improving performance and adaptability. To make the learned loss more robust, we additionally sample uniformly from diverse previously trained student models, so the loss is trained on predictions at varied stages of convergence. With LKD’s more universally adaptable distillation framework, we conduct experiments on datasets including CIFAR and ImageNet and demonstrate superior performance without task-specific adjustments. For example, LKD achieves 73.62% accuracy with a MobileNet student on ImageNet, surpassing our KD baseline by 2.94 percentage points.
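The bi-level scheme the abstract describes lends itself to a compact illustration. The following is a minimal PyTorch sketch, not the authors' released code: it assumes a small MLP "loss network" that scores teacher/student logit pairs (the names LogitLossNet, lkd_outer_step, and inner_lr are hypothetical), and it differentiates through one unrolled SGD step on the student so the loss network can be updated against the student's validation loss.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

class LogitLossNet(nn.Module):
    """Learnable distillation loss over concatenated teacher/student logits
    (an assumed form of the paper's generic logit loss network)."""
    def __init__(self, num_classes: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s_logits, t_logits):
        # Softplus keeps the learned loss non-negative.
        return F.softplus(self.net(torch.cat([s_logits, t_logits], dim=-1))).mean()

def lkd_outer_step(student, teacher, loss_net, meta_opt,
                   x_train, y_train, x_val, y_val, inner_lr=0.1):
    """One bi-level update: unroll a single student SGD step through the
    learned loss, then adjust the loss network to reduce the student's
    validation cross-entropy."""
    params = dict(student.named_parameters())
    t_logits = teacher(x_train).detach()
    s_logits = functional_call(student, params, (x_train,))
    distill_loss = loss_net(s_logits, t_logits)

    # Inner step: differentiable student update. create_graph=True keeps
    # the dependence on loss_net's parameters so the outer step can
    # backpropagate through this update (second-order gradients).
    grads = torch.autograd.grad(distill_loss, list(params.values()),
                                create_graph=True)
    updated = {k: p - inner_lr * g
               for (k, p), g in zip(params.items(), grads)}

    # Outer step: validation loss of the updated student drives the
    # loss-network update. (Gradients also land on the student's leaf
    # parameters; a real training loop would consume or zero them.)
    val_loss = F.cross_entropy(functional_call(student, updated, (x_val,)),
                               y_val)
    meta_opt.zero_grad()
    val_loss.backward()
    meta_opt.step()
    return val_loss.item()

Here meta_opt is an optimizer over loss_net.parameters(). The full method as abstracted above also learns losses over intermediate features, adjusts them dynamically as the student evolves, and samples previously trained student checkpoints at varied convergence rates; this sketch covers only the single-step logit path.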
Date: 2025
Downloads:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0325599 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 25599&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0325599
DOI: 10.1371/journal.pone.0325599