EconPapers
Automated Gesture Recognition Using Applied Linguistics with Data-Driven Deep Learning for Arabic Speech Translation

Saad Alahmari, Badriyya B. Al-Onazi, Nouf J. Aljohani, Khadija Abdullah Alzahrani, Faiz Abdullah Alotaibi, Manar Almanea, Mrim M. Alnfiai and Hany Mahgoub
Additional contact information
Saad Alahmari: Department of Computer Science, Applied College, Northern Border University, Arar, Saudi Arabia
Badriyya B. Al-Onazi: Department of Arabic Language and Literature, College of Humanities and Social Sciences, Princess Nourah bint Abdulrahman University, P. O. Box 84428, Riyadh 11671, Saudi Arabia
Nouf J. Aljohani: Department of Language and Translation, University of Jeddah, Jeddah, Saudi Arabia
Khadija Abdullah Alzahrani: Saudi Arabia Ministry of Education, Riyadh, Saudi Arabia
Faiz Abdullah Alotaibi: Department of Information Science, College of Humanities and Social Sciences, King Saud University, P. O. Box 28095, Riyadh 11437, Saudi Arabia
Manar Almanea: Department of English, College of Languages and Translation, Imam Mohammad Ibn Saud Islamic University, Riyadh 11432, Saudi Arabia
Mrim M. Alnfiai: Department of Information Technology, College of Computers and Information Technology, Taif University, P. O. Box 11099, Taif 21944, Saudi Arabia
Hany Mahgoub: Department of Computer Science, Applied College at Mahayil, King Khalid University, Abha, Asir, Saudi Arabia; Computer Science Department, Faculty of Computers and Information, Menoufia University, Menoufia, Egypt

FRACTALS (fractals), 2024, vol. 32, issue 09n10, 1-12

Abstract: Gesture recognition for Arabic speech translation involves developing advanced technologies that accurately translate the body and hand movements of Arabic sign language (ArSL) into spoken Arabic. These systems leverage machine learning and computer vision techniques within complex systems simulation platforms to scrutinize the gestures used in ArSL, detecting subtle differences in facial expressions, hand shapes, and movements. Sign Language Recognition (SLR) is paramount in supporting communication for the Deaf and Hard-of-Hearing communities, drawing on both vision-based methods and Surface Electromyography (sEMG) signals. The sEMG signal is crucial for recognizing hand gestures because it captures the underlying muscular activity of signing. Researchers have extensively demonstrated the capability of EMG signals to capture fine-grained detail, particularly for classifying hand gestures. This progress is promising for the interpretation and recognition of sign languages and for investigating the phonology of signed language. Leveraging machine learning algorithms and signal processing techniques in complex systems simulation platforms, researchers aim to extract relevant features from the sEMG signals that correspond to different ArSL gestures. This study introduces an Enhanced Dwarf Mongoose Algorithm with Deep Learning-Driven Arabic Sign Language Detection (EDMODL-ASLD) technique operating on sEMG data. In the initial phase, the presented EDMODL-ASLD model applies data preprocessing to transform the input sEMG data into a suitable format. In the next stage, feature extraction based on fractal theory gathers relevant, nonredundant information from each EMG window to construct a feature vector. Five time-domain features are extracted per EMG window: absolute envelope (AE), energy (E), root-mean square (RMS), standard deviation (STD), and mean absolute value (MAV). Meanwhile, the dilated convolutional long short-term memory (ConvLSTM) technique is used to identify distinct sign languages. To improve the results of the dilated ConvLSTM model, hyperparameter selection is performed using the EDMO model. To demonstrate the significance of the EDMODL-ASLD technique, experimental validation is conducted on the Arabic SLR database; the technique achieved a superior accuracy of 96.47% over recent DL approaches.
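The five time-domain features named in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; in particular, the absolute envelope (AE) is approximated here by the peak of the rectified window, whereas the paper may compute it differently (e.g. via a Hilbert transform or rectification plus low-pass filtering).

```python
import numpy as np

def emg_window_features(window):
    """Compute five time-domain features for one sEMG window:
    absolute envelope (AE, approximated as the peak rectified value),
    energy (E), root-mean square (RMS), standard deviation (STD),
    and mean absolute value (MAV)."""
    x = np.asarray(window, dtype=float)
    mav = np.mean(np.abs(x))        # MAV: average rectified amplitude
    rms = np.sqrt(np.mean(x ** 2))  # RMS: amplitude/power proxy
    std = np.std(x)                 # STD: spread around the mean
    energy = np.sum(x ** 2)         # E: total squared amplitude
    ae = np.max(np.abs(x))          # AE: crude envelope estimate (assumption)
    return np.array([ae, energy, rms, std, mav])

# Example: one synthetic 200-sample sEMG window
rng = np.random.default_rng(0)
features = emg_window_features(rng.normal(0.0, 0.5, 200))
```

In a full pipeline, one such five-element vector per sliding window would be stacked into the feature matrix fed to the classifier.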

Keywords: Speech Translation; Sign Language Recognition; Deep Learning; Dwarf Mongoose Algorithm; Applied Linguistics; Fractal; Long Short-Term Memory; Complex systems
Date: 2024

Downloads:
http://www.worldscientific.com/doi/abs/10.1142/S0218348X25400456
Access to full text is restricted to subscribers



Persistent link: https://EconPapers.repec.org/RePEc:wsi:fracta:v:32:y:2024:i:09n10:n:s0218348x25400456


DOI: 10.1142/S0218348X25400456


FRACTALS (fractals) is currently edited by Tara Taylor

More articles in FRACTALS (fractals) from World Scientific Publishing Co. Pte. Ltd.
Bibliographic data for series maintained by Tai Tone Lim.

 
Page updated 2025-03-20
Handle: RePEc:wsi:fracta:v:32:y:2024:i:09n10:n:s0218348x25400456