
An Adaptive Multi-Level Quantization-Based Reinforcement Learning Model for Enhancing UAV Landing on Moving Targets

Najmaddin Abo Mosali, Syariful Syafiq Shamsudin, Salama A. Mostafa, Omar Alfandi, Rosli Omar, Najib Al-Fadhali, Mazin Abed Mohammed, R. Q. Malik, Mustafa Musa Jaber and Abdu Saif
Additional contact information
Najmaddin Abo Mosali: Research Center for Unmanned Vehicles, Faculty of Mechanical and Manufacturing Engineering, Universiti Tun Hussein Onn Malaysia, Parit Raja, Batu Pahat 86400, Johor, Malaysia
Syariful Syafiq Shamsudin: Research Center for Unmanned Vehicles, Faculty of Mechanical and Manufacturing Engineering, Universiti Tun Hussein Onn Malaysia, Parit Raja, Batu Pahat 86400, Johor, Malaysia
Salama A. Mostafa: Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, Parit Raja, Batu Pahat 86400, Johor, Malaysia
Omar Alfandi: College of Technological Innovation, Zayed University, Abu Dhabi 4783, United Arab Emirates
Rosli Omar: Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Parit Raja, Batu Pahat 86400, Johor, Malaysia
Najib Al-Fadhali: Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Parit Raja, Batu Pahat 86400, Johor, Malaysia
Mazin Abed Mohammed: College of Computer Science and Information Technology, University of Anbar, Ramadi 31001, Iraq
R. Q. Malik: Department of Medical Instrumentation Techniques Engineering, Al-Mustaqbal University College, Hillah 51001, Iraq
Mustafa Musa Jaber: Department of Medical Instruments Engineering Techniques, Dijlah University College, Baghdad 10021, Iraq
Abdu Saif: Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia

Sustainability, 2022, vol. 14, issue 14, 1-17

Abstract: The autonomous landing of an unmanned aerial vehicle (UAV) on a moving platform is an essential functionality in various UAV-based applications. It can be added to a teleoperated UAV system or form part of an autonomous UAV control system. Various robust and predictive control systems based on traditional control theory are used to operate a UAV. Recently, several attempts have been made to land a UAV on a moving target using reinforcement learning (RL). Vision is the typical means of sensing and detecting the moving target. Most related works deploy a deep neural network (DNN) for RL, which takes the image as input and outputs the optimal navigation action. However, the inference delay of the multi-layer topology of the DNN affects the real-time performance of such control. This paper proposes an adaptive multi-level quantization-based reinforcement learning (AMLQ) model. The AMLQ model quantizes the continuous actions and states so that simple Q-learning can be applied directly, resolving the delay issue. This makes training faster and enables a simple knowledge representation without the need for a DNN. For evaluation, the AMLQ model was compared with state-of-the-art approaches and found to be superior in terms of root mean square error (RMSE), achieving an RMSE of 8.7052 compared with 10.0592 for a proportional–integral–derivative (PID) controller.
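The core idea described in the abstract, quantizing continuous states and actions so that a plain Q-table can replace a DNN policy, can be illustrated with a short sketch. This is a minimal, hypothetical example: the state variables (relative x and y error to the platform), the number of quantization levels, the action set, the hyperparameters, and the reward are assumptions made for illustration, not the design used in the paper.

```python
# Illustrative sketch only: a uniform multi-level quantizer feeding tabular
# Q-learning. Level counts, ranges, and reward shaping are assumptions made
# for this example, not the parameters used in the AMLQ paper.
import numpy as np

def quantize(value, low, high, levels):
    """Map a continuous value in [low, high] to one of `levels` discrete bins."""
    value = np.clip(value, low, high)
    bin_width = (high - low) / levels
    return min(int((value - low) / bin_width), levels - 1)

# Hypothetical state: relative (x, y) error between UAV and moving platform,
# each quantized to 11 levels; hypothetical action: one of 5 velocity commands.
STATE_LEVELS = 11
N_ACTIONS = 5
q_table = np.zeros((STATE_LEVELS, STATE_LEVELS, N_ACTIONS))

alpha, gamma, epsilon = 0.1, 0.95, 0.1  # assumed learning hyperparameters

def select_action(state, rng):
    """Epsilon-greedy action selection over the quantized state."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_table[state]))

def update(state, action, reward, next_state):
    """Standard tabular Q-learning update."""
    best_next = np.max(q_table[next_state])
    q_table[state][action] += alpha * (reward + gamma * best_next - q_table[state][action])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy interaction loop with a made-up reward (negative distance to target).
    x_err, y_err = 4.0, -3.0
    for _ in range(10):
        state = (quantize(x_err, -5, 5, STATE_LEVELS),
                 quantize(y_err, -5, 5, STATE_LEVELS))
        action = select_action(state, rng)
        # Hypothetical dynamics: each step shrinks the tracking error.
        x_err *= 0.9
        y_err *= 0.9
        reward = -(abs(x_err) + abs(y_err))
        next_state = (quantize(x_err, -5, 5, STATE_LEVELS),
                      quantize(y_err, -5, 5, STATE_LEVELS))
        update(state, action, reward, next_state)
```

Because the state and action spaces are reduced to a small discrete grid, the lookup and update above run in constant time, which is the latency argument the abstract makes against a multi-layer DNN policy.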

Keywords: unmanned aerial vehicle (UAV); autonomous landing; deep neural network; reinforcement learning; multi-level quantization; Q-learning
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56
Date: 2022

Downloads:
https://www.mdpi.com/2071-1050/14/14/8825/pdf (application/pdf)
https://www.mdpi.com/2071-1050/14/14/8825/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:14:y:2022:i:14:p:8825-:d:866301

Sustainability is currently edited by Ms. Alexandra Wu

More articles in Sustainability from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Handle: RePEc:gam:jsusta:v:14:y:2022:i:14:p:8825-:d:866301