EconPapers    
 

Deep Reinforcement Learning for UAV-Based SDWSN Data Collection

Pejman A. Karegar, Duaa Zuhair Al-Hamid and Peter Han Joo Chong
Additional contact information
Pejman A. Karegar: Department of Electrical and Electronic Engineering, Auckland University of Technology (AUT), Auckland 1010, New Zealand
Duaa Zuhair Al-Hamid: Department of Electrical and Electronic Engineering, Auckland University of Technology (AUT), Auckland 1010, New Zealand
Peter Han Joo Chong: Department of Electrical and Electronic Engineering, Auckland University of Technology (AUT), Auckland 1010, New Zealand

Future Internet, 2024, vol. 16, issue 11, 1-14

Abstract: Recent advancements in Unmanned Aerial Vehicle (UAV) technology have made UAVs effective platforms for data capture in applications such as environmental monitoring. Acting as mobile data ferries, UAVs can significantly improve ground network performance by involving ground network representatives in data collection; these representatives communicate opportunistically with accessible UAVs. Emerging technologies such as Software-Defined Wireless Sensor Networks (SDWSN), in which the role of sensor nodes is defined via software, offer flexible operation for UAV data-gathering approaches. In this paper, we introduce the “UAV Fuzzy Travel Path”, a novel approach that applies Deep Reinforcement Learning (DRL), a subfield of machine learning, to optimal UAV trajectory planning. The approach also integrates the UAV with the SDWSN: nodes acting as gateways (GWs) receive data from flexibly formulated group members via software definition, and a UAV is then dispatched to capture data from the GWs along a planned trajectory within a fuzzy span. Our dual objectives are to minimize the total energy consumption of the UAV system during each data-collection round and to enhance the communication bit rate of the UAV-to-ground connectivity. We formulate this problem as a constrained combinatorial optimization problem, jointly planning the UAV path while improving communication performance. To tackle the NP-hard nature of this problem, we propose a novel DRL technique based on Deep Q-Learning. By learning from UAV path policy experiences, our approach efficiently reduces energy consumption while maximizing packet delivery.
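The core idea in the abstract — learning a minimum-energy order in which the UAV visits a set of gateways — can be illustrated with a simplified tabular Q-learning analogue. This is a sketch only: the paper uses a deep network as the Q-function approximator, and the gateway coordinates, distance-based energy proxy, and hyperparameters below are illustrative assumptions, not taken from the paper.

```python
import math
import random

# Hypothetical gateway (GW) coordinates; the paper derives GWs from SDWSN
# cluster formation, which is not reproduced here.
GWS = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]

def dist(a, b):
    """Euclidean distance, used as a crude stand-in for UAV flight energy."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def q_learning_tour(gws, episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Learn a low-energy visiting order over the GWs, starting from GW 0.
    State = (current GW, frozenset of visited GWs); reward = -flight distance."""
    rng = random.Random(seed)
    n = len(gws)
    Q = {}  # maps ((state), action) -> value
    for _ in range(episodes):
        cur, visited = 0, frozenset([0])
        while len(visited) < n:
            actions = [a for a in range(n) if a not in visited]
            s = (cur, visited)
            # Epsilon-greedy action selection over unvisited GWs.
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: Q.get((s, x), 0.0))
            r = -dist(gws[cur], gws[a])
            nxt_visited = visited | {a}
            nxt_actions = [x for x in range(n) if x not in nxt_visited]
            best_next = max(
                (Q.get(((a, nxt_visited), x), 0.0) for x in nxt_actions),
                default=0.0,  # terminal state: all GWs visited
            )
            # Standard Q-learning update.
            Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * (r + gamma * best_next)
            cur, visited = a, nxt_visited
    # Greedy rollout of the learned policy.
    tour, cur, visited = [0], 0, frozenset([0])
    while len(visited) < n:
        actions = [a for a in range(n) if a not in visited]
        a = max(actions, key=lambda x: Q.get(((cur, visited), x), 0.0))
        tour.append(a)
        cur, visited = a, visited | {a}
    energy = sum(dist(gws[u], gws[v]) for u, v in zip(tour, tour[1:]))
    return tour, energy

tour, energy_proxy = q_learning_tour(GWS)
```

The deep variant replaces the `Q` dictionary with a neural network so that the policy generalizes beyond the exact states seen in training, which is what makes the approach tractable for the NP-hard full problem.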

Keywords: unmanned aerial vehicle (UAV); software-defined wireless sensor networks (SDWSN); fuzzy UAV route; deep reinforcement learning (DRL)
JEL-codes: O3
Date: 2024

Downloads: (external link)
https://www.mdpi.com/1999-5903/16/11/398/pdf (application/pdf)
https://www.mdpi.com/1999-5903/16/11/398/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:16:y:2024:i:11:p:398-:d:1510133


Future Internet is currently edited by Ms. Grace You

More articles in Future Internet from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-03-19
Handle: RePEc:gam:jftint:v:16:y:2024:i:11:p:398-:d:1510133