
SIGNIFY: Leveraging Machine Learning and Gesture Recognition for Sign Language Teaching Through a Serious Game

Luca Ulrich, Giulio Carmassi, Paolo Garelli, Gianluca Lo Presti, Gioele Ramondetti, Giorgia Marullo, Chiara Innocente and Enrico Vezzetti
Additional contact information
Luca Ulrich: Management and Production Engineering, Politecnico di Torino, C.so Duca degli Abruzzi, 24, 10129 Torino, Italy
Giulio Carmassi: Biomedical Engineering, Politecnico di Torino, C.so Duca degli Abruzzi, 24, 10129 Torino, Italy
Paolo Garelli: Biomedical Engineering, Politecnico di Torino, C.so Duca degli Abruzzi, 24, 10129 Torino, Italy
Gianluca Lo Presti: Biomedical Engineering, Politecnico di Torino, C.so Duca degli Abruzzi, 24, 10129 Torino, Italy
Gioele Ramondetti: Computer Engineering, Politecnico di Torino, C.so Duca degli Abruzzi, 24, 10129 Torino, Italy
Giorgia Marullo: Management and Production Engineering, Politecnico di Torino, C.so Duca degli Abruzzi, 24, 10129 Torino, Italy
Chiara Innocente: Management and Production Engineering, Politecnico di Torino, C.so Duca degli Abruzzi, 24, 10129 Torino, Italy
Enrico Vezzetti: Management and Production Engineering, Politecnico di Torino, C.so Duca degli Abruzzi, 24, 10129 Torino, Italy

Future Internet, 2024, vol. 16, issue 12, 1-19

Abstract: Italian Sign Language (LIS) is the primary form of communication for many members of the Italian deaf community. Despite being recognized as a fully fledged language with its own grammar and syntax, LIS still faces challenges in gaining widespread recognition and integration into public services, education, and media. In recent years, advances in technology, including artificial intelligence and machine learning, have opened up new opportunities to bridge communication gaps between the deaf and hearing communities. This paper presents SIGNIFY, a machine-learning-based interactive serious game designed to teach LIS. The game incorporates a tutorial section that guides users through learning the sign alphabet, and a classic hangman game that reinforces learning through practice. The system employs advanced hand gesture recognition techniques for learning and perfecting sign language gestures: it detects and overlays 21 hand landmarks and a bounding box on the live camera feed, using an open-source framework to provide real-time visual feedback. Moreover, the study compares the effectiveness of two camera systems: the Azure Kinect, which provides RGB-D information, and a standard RGB laptop camera. The results highlight the feasibility and educational potential of both systems, showcasing their respective advantages and limitations. Evaluations with primary school children demonstrate the tool’s ability to make sign language education more accessible and engaging. The article emphasizes the work’s contribution to inclusive education, highlighting the integration of technology to enhance learning experiences for deaf and hard-of-hearing individuals.
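
Note: the abstract does not name the open-source framework used for hand tracking. As an illustration only, the sketch below shows how a comparable pipeline could be built with MediaPipe Hands (which exposes exactly 21 landmarks per hand) and OpenCV; the camera index, confidence threshold, and the landmark-derived bounding box are assumptions, not the authors’ implementation.

    # Illustrative sketch: detect 21 hand landmarks on a live camera feed,
    # draw them, and derive a bounding box (assumed MediaPipe Hands + OpenCV).
    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands
    mp_drawing = mp.solutions.drawing_utils

    cap = cv2.VideoCapture(0)  # assumed: default laptop RGB camera
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV delivers BGR frames
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                for hand in results.multi_hand_landmarks:
                    # overlay the 21 landmarks and their connections
                    mp_drawing.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
                    # bounding box from the landmark extremes (pixel coordinates)
                    h, w = frame.shape[:2]
                    xs = [int(lm.x * w) for lm in hand.landmark]
                    ys = [int(lm.y * h) for lm in hand.landmark]
                    cv2.rectangle(frame, (min(xs), min(ys)), (max(xs), max(ys)), (0, 255, 0), 2)
            cv2.imshow("hand tracking demo", frame)
            if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
                break
    cap.release()
    cv2.destroyAllWindows()

In such a setup, the per-frame landmark coordinates (21 points per hand) would feed the gesture classifier that recognizes the sign alphabet, while the Azure Kinect variant described in the paper would additionally supply a depth channel.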

Keywords: hand sign alphabet; social inclusion; machine learning; gesture recognition; gamification; serious game
JEL-codes: O3
Date: 2024

Downloads: (external link)
https://www.mdpi.com/1999-5903/16/12/447/pdf (application/pdf)
https://www.mdpi.com/1999-5903/16/12/447/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:16:y:2024:i:12:p:447-:d:1534238


Future Internet is currently edited by Ms. Grace You

More articles in Future Internet from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Handle: RePEc:gam:jftint:v:16:y:2024:i:12:p:447-:d:1534238