Deep Reinforcement Learning-Based RMSA Policy Distillation for Elastic Optical Networks

Bixia Tang, Yue-Cai Huang, Yun Xue and Weixing Zhou
Additional contact information
Bixia Tang: School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou 510006, China
Yue-Cai Huang: School of Electronics and Information Engineering, South China Normal University, Foshan 528200, China
Yun Xue: School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou 510006, China
Weixing Zhou: School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou 510006, China

Mathematics, 2022, vol. 10, issue 18, 1-19

Abstract: Reinforcement learning-based routing, modulation, and spectrum assignment (RMSA) has been regarded as an emerging paradigm for resource allocation in elastic optical networks. One limitation is that the learned policy depends heavily on the training environment, such as the traffic pattern or the network topology. Re-training is therefore required whenever the topology or traffic pattern changes, which consumes a great amount of computation power and time. To ease this re-training requirement, we propose a policy distillation scheme that distills knowledge from a well-trained teacher model and transfers it to a to-be-trained student model, so that the training of the latter can be accelerated. Specifically, the teacher model is trained in one environment (e.g., one topology and traffic pattern) and the student model in another. Simulation results indicate that the proposed method effectively speeds up the training of the student model, and it even achieves a lower blocking probability compared with training the student model without knowledge distillation.
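
For a concrete picture of the distillation step described in the abstract, the sketch below shows one common way to implement teacher-to-student policy distillation: the student's action distribution is pulled toward the teacher's temperature-softened distribution via a KL-divergence loss. This is a minimal illustration in PyTorch under assumed state/action dimensions, network shapes, and temperature; the paper's actual DRL agent, RMSA state encoding, and training loop are not reproduced here.

    # Minimal policy-distillation sketch (PyTorch). The state dimension,
    # action count, network size, and temperature below are illustrative
    # assumptions, not the authors' exact setup.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PolicyNet(nn.Module):
        """Small MLP mapping an EON state vector to RMSA action logits."""
        def __init__(self, state_dim, num_actions, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, num_actions),
            )

        def forward(self, x):
            return self.net(x)

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        """KL divergence between softened teacher and student policies."""
        teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
        student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
        return F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean") * temperature ** 2

    # The teacher is assumed pre-trained on one topology/traffic pattern;
    # the student distills from it on states drawn from its own environment.
    state_dim, num_actions = 64, 10              # assumed dimensions
    teacher = PolicyNet(state_dim, num_actions)  # stands in for the trained teacher
    student = PolicyNet(state_dim, num_actions)
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

    states = torch.randn(32, state_dim)          # placeholder batch of EON states
    with torch.no_grad():
        teacher_logits = teacher(states)
    loss = distillation_loss(student(states), teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In a setting like the paper's, this distillation term would typically be combined with the student's own reinforcement-learning objective, so the student can still adapt to its new topology and traffic pattern rather than merely imitate the teacher.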

Keywords: routing, modulation and spectrum assignment; elastic optical networks; deep reinforcement learning; knowledge distillation
JEL-codes: C
Date: 2022
Citations: 1

Downloads: (external link)
https://www.mdpi.com/2227-7390/10/18/3293/pdf (application/pdf)
https://www.mdpi.com/2227-7390/10/18/3293/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:10:y:2022:i:18:p:3293-:d:912158

Mathematics is currently edited by Ms. Emma He

Handle: RePEc:gam:jmathe:v:10:y:2022:i:18:p:3293-:d:912158