A dynamic spectrum access algorithm based on deep reinforcement learning with novel multi-vehicle reward functions in cognitive vehicular networks
Lingling Chen,
Ziwei Wang,
Xiaohui Zhao,
Xuan Shen and
Wei He
Additional contact information
Lingling Chen: Jilin Institute of Chemical Technology
Ziwei Wang: Jilin Institute of Chemical Technology
Xiaohui Zhao: Jilin University
Xuan Shen: Jilin Institute of Chemical Technology
Wei He: Jilin Institute of Chemical Technology
Telecommunication Systems: Modelling, Analysis, Design and Management, 2024, vol. 87, issue 2, No 6, 359-383
Abstract:
As transportation undergoes a revolution, the communication demands of vehicles are increasing, and improving the success rate of vehicle spectrum access has therefore become a major problem to be solved. Previous research on dynamic spectrum access in cognitive vehicular networks (CVNs) considered only the case of a single vehicle accessing a channel, so spectrum resources could not be fully utilized. To make full use of spectrum resources, a model for spectrum sharing among multiple secondary vehicles (SVs) and a primary vehicle (PV) is proposed, in which multiple SVs share spectrum to maximize the average quality of service (QoS) of the vehicles, under the condition that the total interference generated by vehicles accessing the same channel stays below an interference threshold. In this paper, a deep Q-network method with modified reward functions (IDQN) is proposed to maximize the average QoS of PVs and SVs and to improve spectrum utilization; the reward functions are designed differently according to the QoS of PVs and SVs in different situations. Finally, the proposed algorithm is compared with the deep Q-network (DQN) and Q-learning algorithms on a Python simulation platform. The average access success rate of SVs under the proposed IDQN algorithm reaches 98%, an improvement of 18% over the Q-learning algorithm, and its convergence is 62.5% faster than that of the DQN algorithm. At the same time, the average QoS of PVs and the average QoS of SVs under the IDQN algorithm both reach 2.4, improvements of 50% and 33% over the DQN algorithm and of 60% and 140% over the Q-learning algorithm.
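The abstract's core idea, rewarding vehicles by their QoS only while the aggregate interference on a shared channel stays below a threshold, can be illustrated with a short sketch. The paper's exact reward functions are not given here, so the snippet below is a minimal, hypothetical reward shaping in Python (the simulation language named in the abstract); the noise power, interference threshold, penalty value, scaling, and the QoS-as-Shannon-capacity approximation are all assumptions made for illustration, not the authors' formulation.

```python
import numpy as np

# Illustrative constants (assumed values, not from the paper).
NOISE_POWER = 1e-9             # assumed receiver noise power (W)
INTERFERENCE_THRESHOLD = 1e-6  # assumed tolerable interference at the PV (W)
PENALTY = -1.0                 # assumed penalty when the threshold is violated

def qos(signal_power, interference_power, bandwidth_hz=1e6):
    """Approximate QoS as Shannon capacity (bit/s) under interference."""
    sinr = signal_power / (NOISE_POWER + interference_power)
    return bandwidth_hz * np.log2(1.0 + sinr)

def shaped_reward(pv_signal, sv_signals, cross_interference):
    """Reward for one channel shared by a PV and several SVs.

    pv_signal          : received power of the PV's own link (W)
    sv_signals         : received powers of the SV links (W)
    cross_interference : total interference the SVs cause at the PV (W)

    Simplification: the same interference level is applied to every link.
    """
    if cross_interference > INTERFERENCE_THRESHOLD:
        # The accessing SVs harm the PV: return a penalty instead of QoS.
        return PENALTY
    pv_qos = qos(pv_signal, cross_interference)
    sv_qos = [qos(p, cross_interference) for p in sv_signals]
    # Average QoS of the PV and SVs on this channel, scaled to Mbit/s
    # (arbitrary scaling so the reward stays in a small numeric range).
    return (pv_qos + sum(sv_qos)) / (1 + len(sv_signals)) / 1e6

# Example: two SVs access the PV's channel without exceeding the threshold.
print(shaped_reward(pv_signal=1e-6,
                    sv_signals=[5e-7, 4e-7],
                    cross_interference=8e-7))
```

In a DQN or Q-learning agent, a reward of this shape would encourage SVs to pick channels where additional access keeps the interference at the PV below the threshold while raising the average QoS.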
Keywords: Spectrum access; Cognitive vehicular networks; Deep reinforcement learning (DRL); Quality of service (QoS)
Date: 2024
Downloads: (external link)
http://link.springer.com/10.1007/s11235-024-01188-5 Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:telsys:v:87:y:2024:i:2:d:10.1007_s11235-024-01188-5
Ordering information: This journal article can be ordered from
http://www.springer.com/journal/11235
DOI: 10.1007/s11235-024-01188-5
Telecommunication Systems: Modelling, Analysis, Design and Management is currently edited by Muhammad Khan