EconPapers    

Visual Semantic Navigation Based on Deep Learning for Indoor Mobile Robots

Li Wang, Lijun Zhao, Guanglei Huo, Ruifeng Li, Zhenghua Hou, Pan Luo, Zhenye Sun, Ke Wang and Chenguang Yang

Complexity, 2018, vol. 2018, 1-12

Abstract:

To improve the environmental perception ability of mobile robots during semantic navigation, a three-layer perception framework based on transfer learning is proposed, comprising a place recognition model, a rotation region recognition model, and a “side” recognition model. The first model recognizes different regions in rooms and corridors, the second determines where the robot should rotate, and the third decides which side of a corridor or aisle the robot should walk along. The “side” recognition model also corrects the robot’s motion in real time, ensuring accurate arrival at the specified target. Semantic navigation is accomplished using only one sensor (a camera). Several experiments conducted in a real indoor environment demonstrate the effectiveness and robustness of the proposed perception framework.
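The abstract describes three recognition models whose outputs jointly drive navigation: a place label, a rotation decision, and a walking-side estimate used for real-time motion correction. A minimal sketch of how such a pipeline could be wired together is below; the class name, the steering-correction rule, and the stub models are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a three-layer perception pipeline for camera-only
# semantic navigation. Each layer is an arbitrary callable (e.g. a
# transfer-learned CNN classifier in the paper); here they are stubs.
from dataclasses import dataclass
from typing import Callable, List

Image = List[float]  # placeholder for a camera frame


@dataclass
class PerceptionFramework:
    place_model: Callable[[Image], str]       # region label (room/corridor area)
    rotation_model: Callable[[Image], bool]   # True if the robot should rotate here
    side_model: Callable[[Image], str]        # "left" / "center" / "right"

    def step(self, frame: Image) -> dict:
        """Run all three models on one camera frame and derive a command."""
        place = self.place_model(frame)
        rotate = self.rotation_model(frame)
        side = self.side_model(frame)
        # Assumed correction rule: the side estimate yields a small steering
        # offset that keeps the robot on the desired side of the corridor.
        correction = {"left": 0.1, "center": 0.0, "right": -0.1}[side]
        return {"place": place, "rotate": rotate, "steer": correction}


if __name__ == "__main__":
    pf = PerceptionFramework(
        place_model=lambda f: "corridor",
        rotation_model=lambda f: False,
        side_model=lambda f: "left",
    )
    print(pf.step([0.0]))  # one perception-to-command cycle on a dummy frame
```

Because each layer is just a callable, the trained models can be dropped in without changing the control loop, which matches the framework's separation of place, rotation, and side recognition.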

Date: 2018

Downloads: (external link)
http://downloads.hindawi.com/journals/8503/2018/1627185.pdf (application/pdf)
http://downloads.hindawi.com/journals/8503/2018/1627185.xml (text/xml)



Persistent link: https://EconPapers.repec.org/RePEc:hin:complx:1627185

DOI: 10.1155/2018/1627185


More articles in Complexity from Hindawi
Bibliographic data for series maintained by Mohamed Abdelhakeem ().

 
Page updated 2025-03-19
Handle: RePEc:hin:complx:1627185