An Efficient Approach to Monocular Depth Estimation for Autonomous Vehicle Perception Systems
Mehrnaz Farokhnejad Afshar,
Zahra Shirmohammadi,
Seyyed Amir Ali Ghafourian Ghahramani,
Azadeh Noorparvar and
Ali Mohammad Afshin Hemmatyar
Additional contact information
Mehrnaz Farokhnejad Afshar: Department of Computer Science and Engineering, Sharif University of Technology, Tehran 14588-89694, Iran
Zahra Shirmohammadi: Department of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran 16788-15811, Iran
Seyyed Amir Ali Ghafourian Ghahramani: Department of Computer Science and Engineering, Sharif University of Technology, Tehran 14588-89694, Iran
Azadeh Noorparvar: Department of Computer Science and Engineering, Sharif University of Technology, Tehran 14588-89694, Iran
Ali Mohammad Afshin Hemmatyar: Department of Computer Science and Engineering, Sharif University of Technology, Tehran 14588-89694, Iran
Sustainability, 2023, vol. 15, issue 11, 1-21
Abstract:
Depth estimation is critical for autonomous vehicles (AVs) to perceive their surrounding environment. However, most current approaches rely on costly sensors, which hinders wide-scale deployment and integration with present-day transportation; this limitation makes the camera the most affordable and readily available sensor for AVs. This paper therefore uses monocular depth estimation as a low-cost, data-driven strategy for approximating depth from a single RGB image. To keep complexity low, we approximate the distance of vehicles in the frontal view in two stages: first, the YOLOv7 algorithm detects vehicles and their front and rear lights; second, a nonlinear model maps these detections to the corresponding radial depth. We also demonstrate how an attention mechanism can enhance detection precision. Our simulation results show an excellent blend of accuracy and speed, with the mean squared error converging to 0.1. On the defined distance metrics over the KITTI dataset, our approach is highly competitive with existing models and outperforms current state-of-the-art approaches that determine depth from the detected vehicle’s height alone.
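The two-stage pipeline described in the abstract can be sketched in miniature. Note the hedging: the detector stage (YOLOv7 in the paper) is stubbed out, and the pinhole-inspired form `depth = a / h + b`, the function name `box_to_depth`, and all constants below are illustrative assumptions, not the authors' fitted model.

```python
# Hypothetical sketch of the paper's two-stage idea:
#   (1) a detector (YOLOv7 in the paper) yields vehicle bounding boxes;
#   (2) a nonlinear model maps box geometry to radial depth.
# The form depth = a / h + b and the constants are illustrative only.

def box_to_depth(box_height_px: float, a: float = 1050.0, b: float = 0.0) -> float:
    """Map a detected vehicle's bounding-box height (pixels) to depth (m).

    Under a pinhole camera, a ~ focal_length_px * real_vehicle_height_m
    (e.g. 700 px * 1.5 m = 1050) and b absorbs residual bias; both would
    be fitted on ground-truth data such as KITTI, not hard-coded.
    """
    if box_height_px <= 0:
        raise ValueError("box height must be positive")
    return a / box_height_px + b

# A nearer vehicle appears taller in the image, so estimated depth
# falls as the detected box height grows.
near = box_to_depth(150.0)  # tall box  -> small depth (7 m here)
far = box_to_depth(30.0)    # short box -> large depth (35 m here)
```

In practice the nonlinear stage would be regressed against ground-truth radial distances rather than derived purely from camera geometry, which is what lets the approach outperform height-only baselines.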
Keywords: depth estimation; autonomous vehicle; YOLOv7; object detection; perception systems
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56
Date: 2023
Downloads:
https://www.mdpi.com/2071-1050/15/11/8897/pdf (application/pdf)
https://www.mdpi.com/2071-1050/15/11/8897/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:15:y:2023:i:11:p:8897-:d:1161021
Sustainability is currently edited by Ms. Alexandra Wu