Improving performance of WSNs in IoT applications by transmission power control and adaptive learning rates in reinforcement learning
Arunita Chaukiyal (University of Delhi)
Telecommunication Systems: Modelling, Analysis, Design and Management, 2024, vol. 87, issue 3, No 3, 575-591
Abstract:
The paper investigates the effect of controlling the transmission power used to communicate data packets at the physical layer, so as to prolong network lifetime, together with adaptive learning rates in a reinforcement-learning algorithm operating at the network layer for dynamic, rapid decision making. A routing protocol is proposed for data communication that works in tandem with the physical layer to improve the performance of Wireless Sensor Networks used in IoT applications. The proposed methodology employs Q-learning, a form of reinforcement learning, at the network layer. An agent at each sensor node uses the Q-learning algorithm to select a neighboring agent as the packet forwarder, which also helps mitigate the energy-hole problem. In parallel, the transmission power control method conserves agents' battery energy by determining the appropriate power level for each packet transmission, thereby also reducing overhearing among neighboring agents. An agent derives its learning rate from its environment, comprising its neighboring agents: each agent determines its own learning rate from the hop distance to the sink and the residual energy (RE) of its neighboring agents. The method starts with a higher learning rate, which is gradually decreased as agents' energy levels fall over time. The proposed protocol is simulated in high-traffic scenarios with multiple source-sink pairs, a common feature of IoT applications in the monitoring and surveillance domain. NS3 simulation results show that the proposed strategy significantly improves network performance compared with other Q-learning-based routing protocols.
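The abstract describes a standard Q-learning update in which the learning rate, instead of being fixed, is derived from a neighbor's residual energy and hop distance to the sink. The sketch below illustrates that idea; the learning-rate formula, reward, and all constants are illustrative assumptions, not the paper's actual equations.

```python
def adaptive_learning_rate(residual_energy, initial_energy, hop_distance):
    """Illustrative learning-rate schedule: starts high and decays as
    residual energy drops; larger hop distance to the sink also lowers it.
    Both the functional form and the 0.05 floor are assumptions."""
    energy_ratio = residual_energy / initial_energy  # in (0, 1]
    return max(0.05, energy_ratio / (1 + hop_distance))

def q_update(q, node, neighbor, reward, neighbor_best_q,
             residual_energy, initial_energy, hop_distance, gamma=0.9):
    """Standard Q-learning update,
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)),
    with alpha supplied by the adaptive schedule above.
    q maps (node, neighbor) pairs to Q-values."""
    alpha = adaptive_learning_rate(residual_energy, initial_energy, hop_distance)
    old = q.get((node, neighbor), 0.0)
    q[(node, neighbor)] = old + alpha * (reward + gamma * neighbor_best_q - old)
    return q[(node, neighbor)]

# A forwarding node would call q_update once per acknowledged packet and
# then pick the neighbor with the highest Q-value as the next hop.
```

The floor on the learning rate keeps nearly depleted agents from freezing their estimates entirely, matching the abstract's description of a rate that decreases, but never vanishes, as energy levels fall.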
Keywords: Reinforcement learning; Wireless sensor networks; Routing; Transmission power control; Adaptive learning rates
Date: 2024
Downloads: http://link.springer.com/10.1007/s11235-024-01191-w (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:telsys:v:87:y:2024:i:3:d:10.1007_s11235-024-01191-w
Ordering information: This journal article can be ordered from
http://www.springer.com/journal/11235
DOI: 10.1007/s11235-024-01191-w
Telecommunication Systems: Modelling, Analysis, Design and Management is currently edited by Muhammad Khan