Deep Neural Network Approach for Pose, Illumination, and Occlusion Invariant Driver Emotion Detection

Susrutha Babu Sukhavasi, Suparshya Babu Sukhavasi, Khaled Elleithy, Ahmed El-Sayed and Abdelrahman Elleithy
Additional contact information
Susrutha Babu Sukhavasi: Department of Computer Science and Engineering, University of Bridgeport, Bridgeport, CT 06604, USA
Suparshya Babu Sukhavasi: Department of Computer Science and Engineering, University of Bridgeport, Bridgeport, CT 06604, USA
Khaled Elleithy: Department of Computer Science and Engineering, University of Bridgeport, Bridgeport, CT 06604, USA
Ahmed El-Sayed: Department of Computer Science and Engineering, University of Bridgeport, Bridgeport, CT 06604, USA
Abdelrahman Elleithy: Department of Computer Science, William Paterson University, Wayne, NJ 07470, USA

IJERPH, 2022, vol. 19, issue 4, 1-23

Abstract: Monitoring drivers’ emotions is a key aspect of designing advanced driver assistance systems (ADAS) for intelligent vehicles. To improve safety and reduce the likelihood of road accidents, emotional monitoring plays a central role in assessing the driver’s mental state while the vehicle is in motion. However, pose variations, illumination conditions, and occlusions hinder the accurate detection of driver emotions. To overcome these challenges, two novel approaches based on machine learning methods and deep neural networks are proposed to monitor drivers’ expressions under varying poses, illuminations, and occlusions. The first approach achieves accuracies of 93.41%, 83.68%, 98.47%, and 98.18% on the CK+, FER 2013, KDEF, and KMU-FED datasets, respectively; the second approach improves these to 96.15%, 84.58%, 99.18%, and 99.09%, respectively, compared with existing state-of-the-art methods.
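The abstract describes a face-detection-plus-classification pipeline; the keywords below mention MTCNN for face detection and a deep network ("DeepNet") for expression recognition. The following is a minimal sketch of such a pipeline in Python, assuming a PyTorch environment with the facenet-pytorch package. The small CNN is a placeholder, not the authors' DeepNet architecture; the emotion labels, image size, and input file name are assumptions for illustration only.

# Illustrative sketch only: MTCNN face cropping followed by a small CNN
# emotion classifier. The network, labels, and hyperparameters below are
# assumptions, not the paper's actual DeepNet model.
import torch
import torch.nn as nn
from facenet_pytorch import MTCNN
from PIL import Image

# FER-2013-style expression labels (assumed ordering)
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

class EmotionCNN(nn.Module):
    """Placeholder classifier standing in for the paper's deep network."""
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 160 -> 80
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 80 -> 40
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def predict_emotion(image_path: str) -> str:
    """Detect the most prominent face with MTCNN and classify its expression."""
    mtcnn = MTCNN(image_size=160, margin=20)   # crops and resizes the detected face
    model = EmotionCNN().eval()                # untrained placeholder weights
    face = mtcnn(Image.open(image_path).convert("RGB"))
    if face is None:
        return "no face detected"
    with torch.no_grad():
        logits = model(face.unsqueeze(0))      # (1, 3, 160, 160) -> (1, 7)
    return EMOTIONS[logits.argmax(dim=1).item()]

if __name__ == "__main__":
    print(predict_emotion("driver_frame.jpg"))  # hypothetical driver-camera frame

In the paper's setting, the classifier would be trained on the listed datasets and fed tracked face regions frame by frame (the keywords also mention K.L.T. tracking); that training and tracking loop is not shown here.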

Keywords: deep neural networks; advanced driver assistance systems (ADAS); face detection; K.L.T.; MTCNN; facial expression recognition; driver emotion detection; DeepNet; machine learning
JEL-codes: I I1 I3 Q Q5
Date: 2022

Downloads: (external link)
https://www.mdpi.com/1660-4601/19/4/2352/pdf (application/pdf)
https://www.mdpi.com/1660-4601/19/4/2352/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jijerp:v:19:y:2022:i:4:p:2352-:d:752545


IJERPH is currently edited by Ms. Jenna Liu

More articles in IJERPH from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager ().

 
Handle: RePEc:gam:jijerp:v:19:y:2022:i:4:p:2352-:d:752545