EconPapers

An Autonomous Framework for Real-Time Wrong-Way Driving Vehicle Detection from Closed-Circuit Televisions

Pintusorn Suttiponpisarn, Chalermpol Charnsripinyo, Sasiporn Usanavasin and Hiro Nakahara
Additional contact information
Pintusorn Suttiponpisarn: TAIST Tokyo Tech, ICTES Program, Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani 12120, Thailand
Chalermpol Charnsripinyo: National Electronics and Computer Technology Center, National Science and Technology Development Agency, Pathum Thani 12120, Thailand
Sasiporn Usanavasin: School of Information, Computer and Communication Technology, Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani 12120, Thailand
Hiro Nakahara: Department of Information and Communications Engineering, Tokyo Institute of Technology, Tokyo 152-8550, Japan

Sustainability, 2022, vol. 14, issue 16, 1-32

Abstract: Around 1.3 million people worldwide die each year in road traffic crashes. Accidents have many causes, and driving in the wrong direction is one of them. In our research, we developed an autonomous framework called WrongWay-LVDC that detects wrong-way driving vehicles in closed-circuit television (CCTV) videos. The proposed WrongWay-LVDC provides several useful features: lane detection, correct-direction validation, wrong-way driving vehicle detection, and image capturing. This work makes three main contributions. First, we propose an improved algorithm for road lane boundary detection on CCTV (improved RLB-CCTV) based on image processing techniques. Second, we introduce the Distance-Based Direction Detection (DBDD) algorithm, which uses deep learning to validate driving directions and detect wrong-way driving vehicles. Third, the Inside Boundary Image (IBI) capturing algorithm captures the clearest shot of each wrong-way driving vehicle. As a result, the framework can run continuously and output reports on vehicles' driving behavior in each area. The framework achieved 95.23% accuracy in our tests on several CCTV videos. Moreover, it can run on edge devices at real-time speed, enabling practical deployment and detection in various areas.
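The abstract names two mechanisms that are easy to sketch: Hough-transform lane boundary detection and a distance-based direction check on tracked vehicles. The Python sketch below is illustrative only and is not the authors' released code; the function names, thresholds, and OpenCV-based pipeline are assumptions, and the paper's actual system pairs these steps with YOLOv4-Tiny detection and FastMOT tracking.

    import cv2
    import numpy as np

    def detect_lane_boundaries(frame):
        # Candidate lane boundary segments via Canny edges + probabilistic Hough.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
        # Thresholds are illustrative and would need per-camera tuning.
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                                minLineLength=100, maxLineGap=20)
        # Each segment is returned as (x1, y1, x2, y2).
        return [] if lines is None else [tuple(l[0]) for l in lines]

    def is_wrong_way(centroids, expected_direction, min_displacement=30.0):
        # Distance-based direction check: flag a tracked vehicle only after it
        # has moved far enough for its net displacement to give a reliable
        # direction estimate, then compare against the lane's correct direction.
        start = np.asarray(centroids[0], float)
        end = np.asarray(centroids[-1], float)
        displacement = end - start
        if np.linalg.norm(displacement) < min_displacement:
            return False  # not enough motion yet to decide
        # A negative dot product means motion opposes the correct travel direction.
        return float(np.dot(displacement, np.asarray(expected_direction, float))) < 0

    # Example: a tracked vehicle drifting down-screen in a lane whose correct
    # direction is up-screen is flagged as wrong-way.
    print(is_wrong_way([(320, 100), (322, 160), (321, 230)], (0, -1)))  # True

In a deployment, the centroid tracks would come from a multi-object tracker (such as FastMOT over YOLOv4-Tiny detections), and the lane's expected direction could be derived from the detected boundaries during a calibration phase; requiring a minimum displacement before deciding is what makes the check robust to small tracking jitter.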

Keywords: image processing; deep learning; computer vision; YOLOv4-Tiny; FastMOT; wrong-way driving; lane detection; Hough transform
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56
Date: 2022

Downloads: (external link)
https://www.mdpi.com/2071-1050/14/16/10232/pdf (application/pdf)
https://www.mdpi.com/2071-1050/14/16/10232/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:14:y:2022:i:16:p:10232-:d:890713

Sustainability is currently edited by Ms. Alexandra Wu

More articles in Sustainability from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Handle: RePEc:gam:jsusta:v:14:y:2022:i:16:p:10232-:d:890713