EconPapers
Economics at your fingertips

A Nonintrusive and Real-Time Classification Method for Driver’s Gaze Region Using an RGB Camera

Huili Shi, Longfei Chen, Xiaoyuan Wang, Gang Wang and Quanzheng Wang
Additional contact information
Huili Shi: College of Electromechanical Engineering, Qingdao University of Science & Technology, Qingdao 266101, China
Longfei Chen: College of Electromechanical Engineering, Qingdao University of Science & Technology, Qingdao 266101, China
Xiaoyuan Wang: College of Electromechanical Engineering, Qingdao University of Science & Technology, Qingdao 266101, China
Gang Wang: College of Electromechanical Engineering, Qingdao University of Science & Technology, Qingdao 266101, China
Quanzheng Wang: College of Electromechanical Engineering, Qingdao University of Science & Technology, Qingdao 266101, China

Sustainability, 2022, vol. 14, issue 1, 1-16

Abstract: Driver distraction has become a leading cause of traffic crashes. Among the various forms of driver distraction, visual distraction has the most direct impact on driving safety: if the driver’s line of sight deviates from the road ahead, visual distraction is highly likely. A nonintrusive, real-time classification method for the driver’s gaze region is proposed. A Multi-Task Convolutional Neural Network (MTCNN) face detector locates the driver’s face in the camera image, and the driver’s gaze direction is then estimated with a full-face, appearance-based gaze estimation method. The driver’s gaze region is classified by models trained with machine learning algorithms such as Support Vector Machines (SVM), Random Forest (RF), and K-Nearest Neighbors (KNN). A simulator experiment and a real-vehicle experiment were conducted to test the method. The results show that it performs well at gaze region classification and is robust to complex environments. The models in this paper are all lightweight networks that meet the accuracy and speed requirements of the task. The method can support further exploration of visual distraction levels and inform research on driving behavior.
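The final stage of the pipeline described above maps an estimated gaze direction to a discrete gaze region. The following is a minimal, stdlib-only sketch of that idea using a K-Nearest Neighbors rule, one of the three classifiers the abstract names. All labels, angles, and training samples here are hypothetical illustrations, not data from the paper; the authors train their models on full-face appearance-based gaze estimates rather than hand-picked (yaw, pitch) angles.

```python
import math

# Hypothetical training samples: gaze direction (yaw, pitch, in degrees)
# paired with an illustrative in-vehicle gaze region label.
TRAIN = [
    ((0.0, 0.0), "road_ahead"),
    ((2.0, -1.0), "road_ahead"),
    ((-35.0, -10.0), "left_mirror"),
    ((-38.0, -8.0), "left_mirror"),
    ((35.0, -10.0), "right_mirror"),
    ((33.0, -12.0), "right_mirror"),
    ((5.0, -30.0), "instrument_panel"),
    ((3.0, -28.0), "instrument_panel"),
]

def classify_gaze(yaw, pitch, k=3):
    """Return the majority region label among the k nearest training samples."""
    # Euclidean distance in (yaw, pitch) space to every training sample.
    dists = sorted(
        (math.hypot(yaw - y, pitch - p), label) for (y, p), label in TRAIN
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

print(classify_gaze(1.0, -2.0))    # → road_ahead
print(classify_gaze(-36.0, -9.0))  # → left_mirror
```

In practice one would replace the toy table with labeled gaze estimates collected in the vehicle, and swap the hand-rolled KNN for a library classifier (SVM, RF, or KNN, as in the paper) trained on that data.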

Keywords: driver distraction; visual distraction; gaze estimation; traffic safety; machine learning (search for similar items in EconPapers)
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56 (search for similar items in EconPapers)
Date: 2022
References: View complete reference list from CitEc
Citations: View citations in EconPapers (1)

Downloads: (external link)
https://www.mdpi.com/2071-1050/14/1/508/pdf (application/pdf)
https://www.mdpi.com/2071-1050/14/1/508/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text

Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:14:y:2022:i:1:p:508-:d:717188

Access Statistics for this article

Sustainability is currently edited by Ms. Alexandra Wu

More articles in Sustainability from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager ().

Page updated 2025-03-19
Handle: RePEc:gam:jsusta:v:14:y:2022:i:1:p:508-:d:717188