The Model and Training Algorithm of Compact Drone Autonomous Visual Navigation System
Viacheslav Moskalenko,
Alona Moskalenko,
Artem Korobov and
Viktor Semashko
Additional contact information
Viacheslav Moskalenko: Department of Computer Science, Sumy State University, 40007 Sumy, Ukraine
Alona Moskalenko: Department of Computer Science, Sumy State University, 40007 Sumy, Ukraine
Artem Korobov: Department of Computer Science, Sumy State University, 40007 Sumy, Ukraine
Viktor Semashko: Department of Computer Science, Sumy State University, 40007 Sumy, Ukraine
Data, 2018, vol. 4, issue 1, 1-14
Abstract:
Trainable visual navigation systems based on deep learning show potential for robustness to variations in onboard camera parameters and to challenging environments. However, a deep model requires substantial computational resources and a large labelled training set for successful training. Implementing autonomous navigation, with training-based fast adaptation to new environments, on a compact drone is therefore a complicated task. This article describes an original model and training algorithms adapted to a limited volume of labelled training data and constrained computational resources. The model consists of a convolutional neural network for visual feature extraction, an extreme learning machine for estimating position displacement, and a boosted information-extreme classifier for obstacle prediction. Unsupervised training of the convolutional filters with a growing sparse-coding neural gas algorithm is proposed, together with supervised learning algorithms for constructing the decision rules, where a simulated annealing search algorithm is used for fine-tuning. The use of a complex criterion for optimizing the parameters of the feature extractor model is also considered. The resulting approach reconstructs trajectories more accurately than the well-known ORB-SLAM: for sequence 7 of the KITTI dataset, the translation error is reduced by nearly 65.6% at a frame rate of 10 frames per second. Moreover, testing on an independent TUM sequence shot outdoors yields a translation error not exceeding 6% and a rotation error not exceeding 3.68 degrees per 100 m. Testing was carried out on a Raspberry Pi 3+ single-board computer.
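Two of the trainable components named in the abstract have standard mechanics that can be sketched compactly. Below is a minimal, self-contained Python/NumPy illustration of (a) an extreme learning machine regressor of the general kind used for displacement estimation (a fixed random hidden layer followed by a closed-form ridge-regression solve for the output weights) and (b) a generic simulated annealing search of the kind used for fine-tuning. All names, dimensions, and the toy objective are illustrative assumptions, not details taken from the paper.

import math
import numpy as np

rng = np.random.default_rng(0)

# --- (a) Extreme learning machine: random hidden projection, then a
# closed-form ridge-regression solve for the output weights. The feature
# dimension (128), hidden width (256), and 6-DoF displacement targets
# below are assumptions for illustration only.

def train_elm(X, Y, n_hidden=256, ridge=1e-3):
    d = X.shape[1]
    W = rng.normal(size=(d, n_hidden))      # fixed random input weights
    b = rng.normal(size=n_hidden)           # fixed random biases
    H = np.tanh(X @ W + b)                  # hidden-layer activations
    # beta = (H^T H + ridge*I)^{-1} H^T Y   (regularized least squares)
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.normal(size=(500, 128))             # stand-in CNN feature vectors
Y = rng.normal(size=(500, 6))               # stand-in displacement targets
W, b, beta = train_elm(X, Y)
print(predict_elm(X[:3], W, b, beta).shape) # -> (3, 6)

# --- (b) Simulated annealing over a real-valued parameter vector; the
# objective passed in below is a placeholder, not the paper's criterion.

def anneal(score, x0, steps=500, t0=1.0, cooling=0.99, sigma=0.1):
    x = np.array(x0, dtype=float)
    fx = score(x)
    best, fbest, t = x.copy(), fx, t0
    for _ in range(steps):
        cand = x + rng.normal(0.0, sigma, size=x.shape)
        fc = score(cand)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp((fc - fx) / t).
        if fc > fx or rng.random() < math.exp((fc - fx) / t):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = x.copy(), fx
        t *= cooling
    return best, fbest

best, fbest = anneal(lambda v: -np.sum((v - 0.5) ** 2), np.zeros(3))
print(best.round(2), round(fbest, 4))

The closed-form solve in the ELM stage avoids iterative backpropagation through the regressor, which is consistent with the paper's emphasis on training under the constrained computational budget of hardware such as the Raspberry Pi mentioned above.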
Keywords: navigation; visual odometry; convolutional neural network; neural gas; information criterion; extreme learning
JEL-codes: C8 C80 C81 C82 C83
Date: 2018
Downloads:
https://www.mdpi.com/2306-5729/4/1/4/pdf (application/pdf)
https://www.mdpi.com/2306-5729/4/1/4/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jdataj:v:4:y:2018:i:1:p:4-:d:193685