ACC-RL: Adaptive Congestion Control Based on Reinforcement Learning in Power Distribution Networks with Data Centers
Tairan Huang,
Xiaojuan Lu,
Dian Zhang,
Haoran Cheng,
Pingping Dong and
Lianming Zhang
Additional contact information
All authors: College of Information Science and Engineering, Hunan Normal University, Changsha 410081, China
Energies, 2023, vol. 16, issue 14, 1-14
Abstract:
Modern data center power distribution networks place high demands on the stability and reliability of the power supply. Growing computing demands and complex network environments can cause network congestion, which in turn overloads network traffic and power supply equipment. Network congestion is therefore one of the most important problems facing data center power distribution networks. In this paper, we propose ACC-RL, a reinforcement learning (RL)-based approach that effectively avoids network congestion and improves energy performance. ACC-RL models the congestion control task as a Partially Observable Markov Decision Process (POMDP); it does not depend on an estimated value function and supports deterministic policies. Its reward function is built from real-time network information such as the transmission rate, the RTT, and the switch queue length, with the target transmission rate as the equilibrium point. ACC-RL is highly general: it can be trained on data collected in different network environments and generates a robust congestion control policy. Experimental results show that ACC-RL solves the congestion problem in different network environments without any predefined scenarios, controlling network traffic well and thus ensuring a stable and reliable power supply in the distribution network. We conduct network simulation experiments in NS-3, setting up different scenarios for experiments and data analysis in many-to-one, all-to-all, and long–short network environments. Compared with popular rule-based congestion control algorithms such as TIMELY, DCQCN, and HPCC, ACC-RL shows energy performance advantages of varying degrees in network metrics such as fairness, link utilization, and throughput.
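The abstract describes a reward built from real-time observations (transmission rate, RTT, switch queue length) with the target transmission rate as the equilibrium point. A minimal sketch of such a reward, assuming an illustrative linear-penalty form — the function name, coefficients, and exact terms are assumptions for illustration, not the paper's actual formulation:

```python
def acc_rl_reward(rate, target_rate, rtt, base_rtt, queue_len,
                  alpha=1.0, beta=0.5, gamma=0.1):
    """Hypothetical scalar reward from one observation of the network state.

    The reward peaks when the sending rate sits at the target equilibrium
    rate, the RTT stays near its baseline, and the switch queue is empty.
    alpha, beta, gamma are illustrative weights, not values from the paper.
    """
    # Penalize deviation of the sending rate from the target equilibrium rate.
    rate_term = -alpha * abs(rate - target_rate) / target_rate
    # Penalize RTT inflation above the baseline (congestion signal).
    rtt_term = -beta * max(rtt - base_rtt, 0.0) / base_rtt
    # Penalize standing queue build-up at the switch.
    queue_term = -gamma * queue_len
    return rate_term + rtt_term + queue_term
```

Under this form, the reward is maximal (zero) exactly at the target equilibrium point, so a policy maximizing it is driven toward the target transmission rate, as the abstract describes.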
Keywords: power distribution network; data centers; congestion control; reinforcement learning; energy performance
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2023
Downloads:
https://www.mdpi.com/1996-1073/16/14/5385/pdf (application/pdf)
https://www.mdpi.com/1996-1073/16/14/5385/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:16:y:2023:i:14:p:5385-:d:1194389
Energies is currently edited by Ms. Agatha Cao