EconPapers
A dynamic clustering technique based on deep reinforcement learning for Internet of vehicles

Abida Sharif (), Jian Ping Li (), Muhammad Asim Saleem, Gunasekaran Manogran, Seifedine Kadry (), Abdul Basit and Muhammad Attique Khan ()
Additional contact information
Abida Sharif: University of Electronic Science and Technology
Jian Ping Li: University of Electronic Science and Technology
Muhammad Asim Saleem: University of Electronic Science and Technology
Gunasekaran Manogran: University of California
Seifedine Kadry: Beirut Arab University
Abdul Basit: University of Engineering and Technology
Muhammad Attique Khan: HITEC University Taxila

Journal of Intelligent Manufacturing, 2021, vol. 32, issue 3, No 8, 757-768

Abstract: The Internet of Vehicles (IoV) is a communication paradigm that connects vehicles to the Internet to transfer information between networks. A key challenge in IoV is managing the massive traffic generated by a large number of connected IoT-based vehicles. Network clustering strategies have been proposed to address traffic management in IoV networks, and traditional optimization approaches have been applied to manage network resources efficiently. However, the next-generation IoV environment is highly dynamic, and existing optimization techniques cannot precisely model its dynamic characteristics. Reinforcement learning is a model-free technique in which an agent learns optimal policies from its environment. We propose an experience-driven approach based on an actor-critic deep reinforcement learning framework (AC-DRL) for efficiently selecting the cluster head (CH) to manage network resources under the noisy conditions of the IoV environment. The agent in the proposed AC-DRL efficiently approximates and learns the actor's policy function and the critic's state-action value function for selecting the CH under dynamic network conditions. The experimental results show improvements of 28% and 15% in satisfying the SLA requirement, and of 35% and 14% in throughput, compared to the static and DQN approaches respectively.
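The abstract describes selecting a cluster head with an actor-critic method: the actor proposes an action (a CH choice) and the critic's temporal-difference error drives both updates. The sketch below is a minimal tabular illustration of that general actor-critic idea, not the authors' AC-DRL implementation: the state space, candidate-CH actions, reward table, and all names here are hypothetical assumptions, and the paper's framework uses deep function approximation rather than tables.

```python
import numpy as np

# Toy cluster-head (CH) selection sketch (hypothetical setup).
# State: a discretized network condition; action: which candidate CH to pick.
# Actor: softmax policy over logits theta; critic: state-value table V.

rng = np.random.default_rng(0)

N_STATES, N_CH = 4, 3  # network conditions x candidate cluster heads
# Hypothetical expected throughput of each candidate CH per condition.
REWARD = rng.uniform(0.0, 1.0, size=(N_STATES, N_CH))

theta = np.zeros((N_STATES, N_CH))  # actor parameters (policy logits)
V = np.zeros(N_STATES)              # critic parameters (state values)
alpha_actor, alpha_critic, gamma = 0.1, 0.2, 0.9

def policy(s):
    """Softmax over the actor's logits for state s."""
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def step(s, a):
    """Noisy reward models the stochastic IoV channel; next condition is random."""
    r = REWARD[s, a] + rng.normal(0.0, 0.05)
    return r, int(rng.integers(N_STATES))

s = 0
for _ in range(5000):
    p = policy(s)
    a = rng.choice(N_CH, p=p)
    r, s_next = step(s, a)
    td_error = r + gamma * V[s_next] - V[s]  # critic's TD error
    V[s] += alpha_critic * td_error          # critic update
    grad_log = -p
    grad_log[a] += 1.0                       # gradient of log softmax
    theta[s] += alpha_actor * td_error * grad_log  # actor update
    s = s_next

# Greedy CH choice learned for each network condition.
learned = np.array([int(policy(s).argmax()) for s in range(N_STATES)])
```

After training, the actor should prefer high-reward cluster heads, so the expected reward of its greedy choices should beat picking a CH uniformly at random.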

Keywords: Deep reinforcement learning; Internet of vehicles; Clustering; Reinforcement learning; Optimization (search for similar items in EconPapers)
Date: 2021
Citations: (2)

Downloads: (external link)
http://link.springer.com/10.1007/s10845-020-01722-7 Abstract (text/html)
Access to the full text of the articles in this series is restricted.

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:spr:joinma:v:32:y:2021:i:3:d:10.1007_s10845-020-01722-7

Ordering information: This journal article can be ordered from
http://www.springer.com/journal/10845

DOI: 10.1007/s10845-020-01722-7


Journal of Intelligent Manufacturing is currently edited by Andrew Kusiak

More articles in Journal of Intelligent Manufacturing from Springer
Bibliographic data for series maintained by Sonal Shukla () and Springer Nature Abstracting and Indexing ().

 
Page updated 2025-03-20
Handle: RePEc:spr:joinma:v:32:y:2021:i:3:d:10.1007_s10845-020-01722-7