Attention-based advantage actor-critic algorithm with prioritized experience replay for complex 2-D robotic motion planning

Chengmin Zhou, Bingding Huang, Haseeb Hassan and Pasi Fränti
Additional contact information
Chengmin Zhou: University of Eastern Finland
Bingding Huang: Shenzhen Technology University
Haseeb Hassan: Shenzhen Technology University
Pasi Fränti: University of Eastern Finland

Journal of Intelligent Manufacturing, 2023, vol. 34, issue 1, No 7, 180 pages

Abstract: Robotic motion planning in dense and dynamic indoor scenarios constantly challenges researchers because of the unpredictable motion of obstacles. Recent progress in reinforcement learning enables robots to cope better with dense and unpredictable obstacles by encoding the complex features of the robot and obstacles with encoders such as the long short-term memory (LSTM) network. These features are then learned by the robot using reinforcement learning algorithms such as the deep Q network and the asynchronous advantage actor-critic algorithm. However, existing methods depend heavily on expert experience to enhance the convergence speed of the networks, initializing them via imitation learning. Moreover, LSTM-based encodings of the obstacle features are not always efficient and robust, sometimes causing the network to overfit during training. This paper focuses on the advantage actor-critic algorithm and introduces an attention-based actor-critic algorithm with prioritized experience replay to improve on existing algorithms in two ways. First, the LSTM encoder is replaced by a more robust attention-weight encoder to better interpret the complex features of the robot and obstacles. Second, the robot learns from its own prioritized past experiences to initialize the networks of the advantage actor-critic algorithm. This is achieved with the prioritized experience replay method, which makes the best of useful past experiences to improve the convergence speed. As a result, the network based on our algorithm needs only about 15% and 30% of the experiences to get through the early training stage without expert experience in cases with five and ten obstacles, respectively. It then converges faster to a better reward with fewer experiences (about 45% and 65% of the experiences in cases with ten and five obstacles, respectively) than the baseline LSTM-based advantage actor-critic algorithm. Our source code is freely available on GitHub ( https://github.com/CHUENGMINCHOU/AW-PER-A2C ).
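The abstract names two algorithmic changes: an attention-weight encoder in place of the LSTM, and prioritized experience replay used to initialize the actor-critic networks. The authors' actual implementation is in the GitHub repository linked above; the sketch below is only an illustrative reconstruction of those two ideas, and all function names, weight matrices, shapes, and hyperparameters here are assumptions rather than details taken from the paper.

```python
import numpy as np

# Minimal, illustrative sketch of the two components described in the
# abstract. Nothing here is the authors' code; see
# https://github.com/CHUENGMINCHOU/AW-PER-A2C for the real implementation.

def attention_encode(robot_state, obstacle_feats, W_score, W_embed):
    """Attention-weight encoder replacing an LSTM over obstacles.

    robot_state:    (d_r,) vector describing the robot (assumed shape).
    obstacle_feats: (n, d_o) matrix, one row per observed obstacle.
    W_score:        (d_r + d_o,) assumed scoring weights.
    W_embed:        (d_o, d_e) assumed embedding matrix.
    Returns a fixed-size (d_e,) encoding regardless of obstacle count n.
    """
    n = len(obstacle_feats)
    joint = np.hstack([np.tile(robot_state, (n, 1)), obstacle_feats])
    scores = joint @ W_score                     # one scalar per obstacle
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax attention weights
    return weights @ (obstacle_feats @ W_embed)  # weighted sum of embeddings

class PrioritizedReplay:
    """Proportional prioritized experience replay (Schaul et al. style)."""

    def __init__(self, capacity, alpha=0.6, eps=1e-5):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.buffer) >= self.capacity:    # drop the oldest experience
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size):
        probs = np.array(self.priorities)
        probs /= probs.sum()                     # sample proportional to priority
        idx = np.random.choice(len(self.buffer), size=batch_size, p=probs)
        return [self.buffer[i] for i in idx]
```

In the pipeline the abstract describes, transitions replayed from such a buffer would pre-train the advantage actor-critic networks in place of imitation-learning initialization; the buffer is shown in isolation here, and how the TD errors are computed and refreshed is left out of this sketch.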

Keywords: Motion planning; Path planning; Reinforcement learning; Intelligent robot; Deep learning
Date: 2023

Downloads:
http://link.springer.com/10.1007/s10845-022-01988-z Abstract (text/html)
Access to the full text of the articles in this series is restricted.

Persistent link: https://EconPapers.repec.org/RePEc:spr:joinma:v:34:y:2023:i:1:d:10.1007_s10845-022-01988-z

Ordering information: This journal article can be ordered from
http://www.springer.com/journal/10845

DOI: 10.1007/s10845-022-01988-z

Journal of Intelligent Manufacturing is currently edited by Andrew Kusiak

Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:spr:joinma:v:34:y:2023:i:1:d:10.1007_s10845-022-01988-z