Video-Assisted Global Positioning in Terrain Navigation with Known Landmarks
Guna Seetharaman and Ha V. Le
Affiliations:
Guna Seetharaman: Electrical and Computer Engineering Division, Air Force Institute of Technology, Wright-Patterson AFB, Ohio
Ha V. Le: Department of Electrical and Computer Engineering, College of Technology, Vietnam National University, Cau Giay, Hanoi, Vietnam
International Journal of Distributed Sensor Networks, 2006, vol. 2, issue 2, 103-119
Abstract:
We present a rigorous geometric analysis of computing the global positions of an airborne video camera and ground-based objects from aerial images of known landmarks, a task also known as the perspective-n-point (PnP) problem. A robust Hough-transform-like method, facilitated by a class of CORDIC-structured computations, is developed to find the camera position, followed by a method for computing the position of a ground object from images of that object and three known landmarks. The results enable fast and effective visual terrain navigation for aerial surveillance systems when global positioning and inertial navigation sensors become faulty, inaccurate, or dysfunctional. These hardware-implementable algorithms can also be used with MEMS-based INS sensors through a multisensor fusion process.
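The abstract's Hough-transform-like search for the camera position can be illustrated with a toy accumulator. This is only a sketch, not the paper's CORDIC-based algorithm: candidate camera positions on a coarse grid are scored by how well the bearings they imply to three known landmarks agree with the measured bearing vectors, and the best-scoring cell is taken as the position estimate. All coordinates and names here are made up for illustration.

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def bearing(p, q):
    """Unit vector pointing from p toward q."""
    return normalize(tuple(qi - pi for pi, qi in zip(p, q)))

# Three known landmarks on the ground plane (hypothetical world coordinates).
landmarks = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]

# Simulated measurements: bearings from the (unknown) true camera position.
true_cam = (2.0, 3.0, 10.0)
measured = [bearing(true_cam, L) for L in landmarks]

# Hough-style accumulation: score each grid cell by the summed agreement
# (dot product) between predicted and measured landmark bearings.
best_score, best_pos = -1.0, None
for x in range(13):
    for y in range(13):
        for z in range(1, 13):          # z > 0 keeps the camera above ground
            cand = (float(x), float(y), float(z))
            score = sum(
                sum(a * b for a, b in zip(bearing(cand, L), m))
                for L, m in zip(landmarks, measured))
            if score > best_score:
                best_score, best_pos = score, cand

print(best_pos)  # the grid cell at the true camera position wins
```

For a generic (non-degenerate) landmark configuration, only the true camera position reproduces all three bearings exactly, so the accumulator has a unique peak; the paper's contribution lies in making such a search fast and hardware-friendly via CORDIC-structured arithmetic.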
Keywords: Visual Navigation; GPS Fault Tolerance; Unmanned Air Vehicles; Low-altitude Aerial Imagery
Date: 2006
Downloads: https://journals.sagepub.com/doi/10.1080/15501320500201235 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:sae:intdis:v:2:y:2006:i:2:p:103-119
DOI: 10.1080/15501320500201235