EconPapers    

Automated Gesture Recognition Using Artificial Rabbits Optimization with Deep Learning for Assisting Visually Challenged People

Radwa Marzouk, Ghadah Aldehim, Mohammed Abdullah Al-Hagery, Anwer Mustafa Hilal and Amani A. Alneil
Additional contact information
Radwa Marzouk: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P. O. Box 84428, Riyadh 11671, Saudi Arabia; Department of Mathematics, Faculty of Science, Cairo University, Giza 12613, Egypt
Ghadah Aldehim: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P. O. Box 84428, Riyadh 11671, Saudi Arabia
Mohammed Abdullah Al-Hagery: Department of Computer Science, College of Computer, Qassim University, Saudi Arabia
Anwer Mustafa Hilal: Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, AlKharj, Saudi Arabia
Amani A. Alneil: Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, AlKharj, Saudi Arabia

FRACTALS (fractals), 2025, vol. 33, issue 03, 1-12

Abstract: Gesture recognition technology has become a transformative solution for enhancing accessibility for people with vision impairments. It interprets body and hand movements, converting them into meaningful information or commands through advanced computer vision sensors and algorithms. The technology serves as an intuitive interface for the visually impaired, enabling them to access information seamlessly, navigate digital devices, and interact with their surroundings, fostering greater independence and inclusivity in day-to-day activities. Gesture recognition solutions based on deep learning (DL) leverage neural networks (NN) to understand intricate patterns in human gestures; trained on extensive datasets, DL algorithms can identify and classify different hand and body movements accurately. Therefore, this study develops an automated gesture recognition using artificial rabbits optimization with deep learning (AGR-ARODL) technique for assisting visually challenged persons. The AGR-ARODL technique mainly intends to assist visually challenged people in recognizing various kinds of hand gestures. To accomplish this, the AGR-ARODL technique first pre-processes the input images using a median filtering (MF) approach. Next, it employs the SE-ResNet-50 model to derive feature patterns, with hyperparameter selection carried out by the artificial rabbits optimization (ARO) algorithm. Finally, the AGR-ARODL technique applies a deep belief network (DBN) model to detect the various hand gestures. The AGR-ARODL method is evaluated on a benchmark gesture recognition dataset, and extensive experimental analysis underscores its improvement over recent DL models.
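The pre-processing stage described in the abstract (median filtering to suppress impulse noise before feature extraction) can be sketched as below. This is an illustrative implementation only, not the authors' code; the function name and the 3x3 window size are assumptions for the sketch.

```python
import numpy as np

def median_filter(image, k=3):
    """Apply a k x k median filter to a 2-D grayscale image.

    The image is edge-padded so the output has the same shape.
    Median filtering removes salt-and-pepper noise while
    preserving edges better than mean filtering.
    """
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # Replace each pixel with the median of its k x k neighborhood.
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single noisy (salt) pixel in an otherwise dark patch is removed.
noisy = np.array([[0, 0, 0],
                  [0, 255, 0],
                  [0, 0, 0]], dtype=np.uint8)
clean = median_filter(noisy)
```

In a full pipeline along the lines the abstract describes, the filtered image would then be passed to a pretrained feature extractor (here, SE-ResNet-50) whose outputs feed the downstream classifier.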

Keywords: Gesture Recognition; Deep Learning; Visually Challenged People; Artificial Rabbits Optimization; Human–Computer Interaction
Date: 2025

Downloads:
http://www.worldscientific.com/doi/abs/10.1142/S0218348X24501317
Access to full text is restricted to subscribers



Persistent link: https://EconPapers.repec.org/RePEc:wsi:fracta:v:33:y:2025:i:03:n:s0218348x24501317


DOI: 10.1142/S0218348X24501317


FRACTALS (fractals) is currently edited by Tara Taylor

More articles in FRACTALS (fractals) from World Scientific Publishing Co. Pte. Ltd.
Bibliographic data for series maintained by Tai Tone Lim.

 
Page updated 2025-05-03
Handle: RePEc:wsi:fracta:v:33:y:2025:i:03:n:s0218348x24501317