Deep Reinforcement Learning-Based Robotic Grasping in Clutter and Occlusion
Marwan Qaid Mohammed,
Lee Chung Kwek,
Shing Chyi Chua,
Abdulaziz Salamah Aljaloud,
Arafat Al-Dhaqm,
Zeyad Ghaleb Al-Mekhlafi and
Badiea Abdulkarem Mohammed
Additional contact information
Marwan Qaid Mohammed: Faculty of Engineering and Technology, Multimedia University (MMU), Ayer Keroh 75450, Melaka, Malaysia
Lee Chung Kwek: Faculty of Engineering and Technology, Multimedia University (MMU), Ayer Keroh 75450, Melaka, Malaysia
Shing Chyi Chua: Faculty of Engineering and Technology, Multimedia University (MMU), Ayer Keroh 75450, Melaka, Malaysia
Abdulaziz Salamah Aljaloud: College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
Arafat Al-Dhaqm: School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia (UTM), Skudai 81310, Johor, Malaysia
Zeyad Ghaleb Al-Mekhlafi: College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
Badiea Abdulkarem Mohammed: College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
Sustainability, 2021, vol. 13, issue 24, 1-27
Abstract:
In robotic manipulation, object grasping is a basic yet challenging task. Dexterous grasping requires intelligent visual observation of the target objects, with spatial equivariance playing an important role in learning the grasping policy. This paper addresses two significant challenges associated with robotic grasping in cluttered and occluded scenes. The first challenge is the coordination of push and grasp actions: in a well-ordered object scenario, the robot may occasionally fail to disrupt the arrangement of the objects, whereas in a randomly cluttered scenario, pushing may be less efficient because many objects are likely to be pushed out of the workspace. The second challenge is the avoidance of occlusion, which occurs when the camera itself is entirely or partially occluded during a grasping action. This paper proposes a multi-view change observation-based approach (MV-COBA) to overcome these two problems. The approach consists of two parts: (1) multiple cameras are used to set up multiple views, addressing the occlusion issue; and (2) visual change observation, based on the pixel depth difference, is used to coordinate the push and grasp actions. In simulation experiments, the proposed approach achieved average grasp success rates of 83.6%, 86.3%, and 97.8% in the cluttered, well-ordered object, and occlusion scenarios, respectively.
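The change-observation idea in part (2) can be illustrated with a minimal sketch, not the authors' implementation: a per-pixel depth difference between consecutive depth heightmaps is thresholded, and the fraction of changed pixels indicates whether a push meaningfully rearranged the scene before a grasp is attempted. The function names and threshold values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def depth_change_ratio(depth_before, depth_after, diff_threshold=0.01):
    """Fraction of heightmap pixels whose depth changed by more than
    diff_threshold (metres) between two consecutive depth images."""
    diff = np.abs(depth_after - depth_before)
    return float((diff > diff_threshold).mean())

def push_was_effective(depth_before, depth_after,
                       diff_threshold=0.01, change_ratio_threshold=0.02):
    """Treat a push as effective only if it altered a meaningful share of
    the workspace; otherwise the policy should prefer a grasp or a push
    elsewhere. Thresholds here are illustrative, not from the paper."""
    ratio = depth_change_ratio(depth_before, depth_after, diff_threshold)
    return ratio > change_ratio_threshold

# Synthetic 224x224 depth heightmaps (values in metres), for illustration only.
before = np.zeros((224, 224))
after = before.copy()
after[80:130, 90:160] += 0.03  # a pushed object shifted, changing local depth
print(push_was_effective(before, after))  # True for this synthetic change
```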
Keywords: depth difference; multi-view; change observation; synergizing two actions; deep-RL; robotic grasping; cluttered scene
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56
Date: 2021
Downloads: (external link)
https://www.mdpi.com/2071-1050/13/24/13686/pdf (application/pdf)
https://www.mdpi.com/2071-1050/13/24/13686/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:13:y:2021:i:24:p:13686-:d:699920
Sustainability is currently edited by Ms. Alexandra Wu
More articles in Sustainability from MDPI