
A Dynamic Scene Vision SLAM Method Incorporating Object Detection and Object Characterization

Hongliang Guan, Chengyuan Qian, Tingsong Wu, Xiaoming Hu, Fuzhou Duan and Xinyi Ye
Additional contact information
Hongliang Guan: Engineering Research Center of Spatial Information Technology, MOE, Capital Normal University, 105 West Third Ring North Road, Haidian District, Beijing 100048, China
Chengyuan Qian: Engineering Research Center of Spatial Information Technology, MOE, Capital Normal University, 105 West Third Ring North Road, Haidian District, Beijing 100048, China
Tingsong Wu: Engineering Research Center of Spatial Information Technology, MOE, Capital Normal University, 105 West Third Ring North Road, Haidian District, Beijing 100048, China
Xiaoming Hu: Beijing Jumper Science Co., Ltd., Beijing 100083, China
Fuzhou Duan: Engineering Research Center of Spatial Information Technology, MOE, Capital Normal University, 105 West Third Ring North Road, Haidian District, Beijing 100048, China
Xinyi Ye: Engineering Research Center of Spatial Information Technology, MOE, Capital Normal University, 105 West Third Ring North Road, Haidian District, Beijing 100048, China

Sustainability, 2023, vol. 15, issue 4, 1-13

Abstract: Simultaneous localization and mapping (SLAM) based on RGB-D cameras has been widely used for robot localization and navigation in unknown environments. Most current SLAM methods are constrained by the static-environment assumption and perform poorly in real-world dynamic scenarios. To improve the robustness and performance of SLAM systems in dynamic environments, this paper proposes a new RGB-D SLAM method for indoor dynamic scenes based on object detection. The method builds on the ORB-SLAM3 framework. First, we designed an object detection module based on YOLO v5 and used it to improve the tracking module of ORB-SLAM3 and its localization accuracy in dynamic environments. A dense point cloud mapping module was also included; it excludes dynamic objects from the environment map to produce a static point cloud map with high readability and reusability. Full comparison experiments with the original ORB-SLAM3 and two representative semantic SLAM methods on the TUM RGB-D dataset show that the proposed method runs at more than 30 fps, improves localization accuracy over ORB-SLAM3 on all four image sequences, and improves absolute trajectory accuracy by up to 91.10%. Its localization accuracy is comparable to that of DS-SLAM, DynaSLAM, and two recent object detection-based SLAM algorithms, but it runs faster. The proposed RGB-D SLAM method, which combines a state-of-the-art object detection method with a visual SLAM framework, outperforms the other methods in terms of localization accuracy and map construction in dynamic indoor environments and has reference value for navigation, localization, and 3D reconstruction.
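The abstract describes removing feature points that fall on detected dynamic objects before pose estimation and mapping. As a minimal illustrative sketch only (not the authors' implementation), the following Python snippet shows one plausible way to discard keypoints that lie inside YOLO v5 bounding boxes of dynamic classes such as "person"; the function name, the detection tuple format, and the class list are assumptions.

    # Minimal sketch (assumed interfaces, not the paper's code): drop keypoints
    # that fall inside bounding boxes of detected dynamic objects before tracking.

    # Classes treated as dynamic; the paper's exact list is not given in the abstract.
    DYNAMIC_CLASSES = {"person", "cat", "dog"}

    def filter_dynamic_keypoints(keypoints, detections):
        """Keep only keypoints outside dynamic-object boxes.

        keypoints  -- iterable of (x, y) pixel coordinates (e.g., ORB keypoints).
        detections -- iterable of (x1, y1, x2, y2, class_name) boxes, e.g. parsed
                      from YOLO v5 output (hypothetical format).
        """
        dynamic_boxes = [(x1, y1, x2, y2)
                         for (x1, y1, x2, y2, cls) in detections
                         if cls in DYNAMIC_CLASSES]

        def is_static(pt):
            x, y = pt
            return not any(x1 <= x <= x2 and y1 <= y <= y2
                           for (x1, y1, x2, y2) in dynamic_boxes)

        return [pt for pt in keypoints if is_static(pt)]

    # Example usage with toy data:
    if __name__ == "__main__":
        kps = [(50, 60), (200, 220), (400, 100)]
        dets = [(180, 200, 260, 300, "person")]     # one detected person
        print(filter_dynamic_keypoints(kps, dets))  # -> [(50, 60), (400, 100)]

Only the static keypoints surviving this filter would be passed to the tracking and dense-mapping stages; how the actual system handles points near box boundaries or partially occluded objects is not specified in the abstract.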

Keywords: visual SLAM; ORB-SLAM3; object detection; dynamic environments; dense point cloud map (search for similar items in EconPapers)
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56 (search for similar items in EconPapers)
Date: 2023
References: View complete reference list from CitEc
Citations:

Downloads: (external link)
https://www.mdpi.com/2071-1050/15/4/3048/pdf (application/pdf)
https://www.mdpi.com/2071-1050/15/4/3048/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:15:y:2023:i:4:p:3048-:d:1061181

Access Statistics for this article

Sustainability is currently edited by Ms. Alexandra Wu

More articles in Sustainability from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-03-19
Handle: RePEc:gam:jsusta:v:15:y:2023:i:4:p:3048-:d:1061181