ERA-MADDPG: An Elastic Routing Algorithm Based on Multi-Agent Deep Deterministic Policy Gradient in SDN

Wanwei Huang, Hongchang Liu, Yingying Li and Linlin Ma
Additional contact information
Wanwei Huang: College of Software Engineering, Zhengzhou University of Light Industry, Zhengzhou 450007, China
Hongchang Liu: College of Software Engineering, Zhengzhou University of Light Industry, Zhengzhou 450007, China
Yingying Li: College of Electronics & Communication Engineering, Shenzhen Polytechnic University, Shenzhen 518005, China
Linlin Ma: College of Information Technology, Zhengzhou Vocational College of Finance and Taxation, Zhengzhou 450048, China

Future Internet, 2025, vol. 17, issue 7, 1-20

Abstract: To address the impact of network topology changes on routing performance, this paper proposes an Elastic Routing Algorithm based on Multi-Agent Deep Deterministic Policy Gradient (ERA-MADDPG), implemented within the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) framework of deep reinforcement learning. The algorithm first builds a three-layer architecture based on Software-Defined Networking (SDN); from top to bottom, the layers are the multi-agent layer, the controller layer, and the data layer. The architecture's processing flow, including real-time collection of data-layer information and dynamic policy generation, gives ERA-MADDPG strong elasticity: routing decisions are quickly adjusted in response to topology changes. Implementing the routing algorithm with an actor-critic framework combined with Convolutional Neural Networks (CNNs) improves training efficiency, enhances learning stability, facilitates collaboration among agents, and improves generalization and applicability. Finally, simulation experiments demonstrate that ERA-MADDPG converges faster than the Multi-Agent Deep Q-Network (MADQN) algorithm and the Smart Routing based on Deep Reinforcement Learning (SR-DRL) algorithm, with initial-phase training speed improved by approximately 20.9% and 39.1% over MADQN and SR-DRL, respectively. Elasticity is quantified by re-convergence speed: under 5-15% topology node/link changes, ERA-MADDPG re-converges more than 25% faster than MADQN and SR-DRL, demonstrating superior capability to maintain routing efficiency in dynamic environments.
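The actor-critic structure the abstract describes, a CNN extracting features from network-state observations, per-agent deterministic actors, and a centralized critic enabling cooperation, can be illustrated with a minimal sketch. The PyTorch code below is an assumption-laden illustration of such a MADDPG-style setup, not the authors' implementation: the class names, layer sizes, and the N x N traffic-matrix observation format are all hypothetical.

# Minimal MADDPG-style actor-critic sketch with a CNN encoder.
# All names, shapes, and sizes are illustrative assumptions, not
# details taken from the ERA-MADDPG paper.
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    """Encodes an N x N network-state matrix into a feature vector."""
    def __init__(self, n_nodes: int, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(32 * n_nodes * n_nodes, feat_dim)

    def forward(self, x):                     # x: (batch, 1, N, N)
        h = self.conv(x).flatten(start_dim=1)
        return torch.relu(self.fc(h))

class Actor(nn.Module):
    """Deterministic policy: one agent's observation -> link-weight action."""
    def __init__(self, n_nodes: int, act_dim: int):
        super().__init__()
        self.enc = CNNEncoder(n_nodes)
        self.head = nn.Linear(64, act_dim)

    def forward(self, obs):
        return torch.tanh(self.head(self.enc(obs)))   # actions in [-1, 1]

class CentralCritic(nn.Module):
    """Centralized critic: all agents' observations and actions -> Q value."""
    def __init__(self, n_agents: int, n_nodes: int, act_dim: int):
        super().__init__()
        self.enc = CNNEncoder(n_nodes)
        self.q = nn.Sequential(
            nn.Linear(n_agents * (64 + act_dim), 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, all_obs, all_acts):     # lists of per-agent tensors
        feats = [self.enc(o) for o in all_obs]
        return self.q(torch.cat(feats + all_acts, dim=1))

def soft_update(target: nn.Module, source: nn.Module, tau: float = 0.01):
    """Polyak averaging of target-network parameters, as in DDPG/MADDPG."""
    for t, s in zip(target.parameters(), source.parameters()):
        t.data.mul_(1.0 - tau).add_(tau * s.data)

In the standard MADDPG decomposition, each agent trains its own actor against the centralized critic, while only the decentralized actors run at execution time; this centralized-training, decentralized-execution pattern is what would let per-agent routing policies be regenerated quickly when the topology changes.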

Keywords: DDPG; multi-agent; network topology; routing algorithm; SDN; actor-critic; DRL
JEL-codes: O3
Date: 2025

Downloads: (external link)
https://www.mdpi.com/1999-5903/17/7/291/pdf (application/pdf)
https://www.mdpi.com/1999-5903/17/7/291/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:17:y:2025:i:7:p:291-:d:1690451

Handle: RePEc:gam:jftint:v:17:y:2025:i:7:p:291-:d:1690451