Graph data science-driven framework to aid auditory and speech impaired individuals by accelerating sign image analysis and knowledge relegation through deep learning technique
R. Akhila Thejaswi,
Bellipady Shamantha Rai and
Permanki Guthu Rithesh Pakkala
Additional contact information
R. Akhila Thejaswi: Sahyadri College of Engineering & Management, Mangaluru, affiliated to Visvesvaraya Technological University
Bellipady Shamantha Rai: Sahyadri College of Engineering & Management, Mangaluru, affiliated to Visvesvaraya Technological University
Permanki Guthu Rithesh Pakkala: Sahyadri College of Engineering & Management, Mangaluru, affiliated to Visvesvaraya Technological University
International Journal of System Assurance Engineering and Management, 2025, vol. 16, issue 1, No 11, 175-198
Abstract:
In India, the prevalence of speech and hearing impairments is a major public health concern because of the adverse consequences for a large number of people. Much prior research has overlooked the linguistic features of sign language and treated spoken and sign language as directly equivalent. To facilitate real-time communication between sign language users and non-sign language users, an end-to-end system for sign language recognition and translation is therefore needed. The proposed system combines sign language recognition (SLR) and sign language translation (SLT) approaches to achieve precise real-time recognition of sign gestures. A Convolutional Neural Network serves as the deep learning model, trained on a large dataset of hand gestures, with image analysis performed using the MediaPipe library for landmark estimation and identification. The experiments are based on a thorough analysis of a dataset of 52,000 sign images. Using graph data science and NetworkX, the SLT module builds a directed weighted graph of Indian Sign Language gestures and their corresponding English translations, which it uses to convert gesture sequences into meaningful English phrases. Through knowledge relegation, the study compares the proposed model with existing models; the results show that it is a dependable communication tool that enables people with speech and hearing impairments to hold natural conversations with non-sign language users. The model achieves an accuracy of 98.64%, highlighting the potential of combining SLR and SLT techniques.
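The abstract gives no implementation details for the landmark-estimation step it mentions. As a rough, non-authoritative illustration of how MediaPipe is typically used for this task, the following Python sketch extracts 21 hand landmarks per detected hand from a sign image; the file path and the flat (x, y, z) feature layout are assumptions, not taken from the paper.

```python
# Illustrative sketch only: extract hand landmarks from a sign image with
# MediaPipe, producing a flat feature list such as a recognition model
# might consume. The image path and feature layout are assumed.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def extract_hand_landmarks(image_path):
    """Return a list of (x, y, z) landmark coordinates, normalized to [0, 1]."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
        results = hands.process(rgb)

    features = []
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            # Each detected hand yields 21 landmarks with normalized x, y, z.
            features.extend((lm.x, lm.y, lm.z) for lm in hand.landmark)
    return features

# Example: landmarks = extract_hand_landmarks("sign_A.jpg")
```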
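The SLT module's graph construction is likewise described only at a high level. A minimal NetworkX sketch of the stated idea, with invented gesture labels, English words, and weights, could look like this: recognized gestures and candidate English words are nodes of a directed weighted graph, and each gesture in a sequence is rendered as its lowest-cost translation.

```python
# Minimal sketch of the graph-based translation idea: a directed weighted
# graph links ISL gesture labels to candidate English words, and a gesture
# sequence is translated by following the lowest-weight edge from each
# gesture. All gestures, words, and weights here are invented examples.
import networkx as nx

G = nx.DiGraph()
# Edge weight = translation cost (lower = preferred rendering).
G.add_edge("GESTURE:I", "i", weight=1.0)
G.add_edge("GESTURE:GO", "go", weight=1.0)
G.add_edge("GESTURE:GO", "going", weight=2.0)
G.add_edge("GESTURE:SCHOOL", "school", weight=1.0)

def translate(gesture_sequence):
    """Map each recognized gesture to its cheapest English translation."""
    words = []
    for gesture in gesture_sequence:
        candidates = G[gesture]  # successors of the gesture node, with edge data
        best = min(candidates, key=lambda w: candidates[w]["weight"])
        words.append(best)
    return " ".join(words)

print(translate(["GESTURE:I", "GESTURE:GO", "GESTURE:SCHOOL"]))
# -> "i go school" (gloss-level output; the paper's module produces phrases)
```

This toy version only picks per-gesture words; the paper's module presumably exploits the graph structure further (e.g., weighting transitions between words) to produce grammatical English phrases.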
Keywords: Sign language recognition; Convolutional neural network; Sign language translation; Image analysis; Graph data science; Deep learning
Date: 2025
Downloads:
http://link.springer.com/10.1007/s13198-024-02598-z Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:ijsaem:v:16:y:2025:i:1:d:10.1007_s13198-024-02598-z
Ordering information: This journal article can be ordered from
http://www.springer.com/engineering/journal/13198
DOI: 10.1007/s13198-024-02598-z
International Journal of System Assurance Engineering and Management is currently edited by P.K. Kapur, A.K. Verma and U. Kumar
More articles in International Journal of System Assurance Engineering and Management from Springer, The Society for Reliability, Engineering Quality and Operations Management (SREQOM), India, and Division of Operation and Maintenance, Luleå University of Technology, Sweden
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.