Indoor Signs Detection for Visually Impaired People: Navigation Assistance Based on a Lightweight Anchor-Free Object Detector

Yahia Said, Mohamed Atri, Marwan Ali Albahar, Ahmed Ben Atitallah and Yazan Ahmad Alsariera
Additional contact information
Yahia Said: Remote Sensing Unit, College of Engineering, Northern Border University, Arar 91431, Saudi Arabia
Mohamed Atri: College of Computer Sciences, King Khalid University, Abha 62529, Saudi Arabia
Marwan Ali Albahar: School of Computer Science, Umm Al-Qura University, Mecca 24382, Saudi Arabia
Ahmed Ben Atitallah: Department of Electrical Engineering, College of Engineering, Jouf University, Sakaka 72388, Saudi Arabia
Yazan Ahmad Alsariera: College of Science, Northern Border University, Arar 91431, Saudi Arabia

IJERPH, 2023, vol. 20, issue 6, 1-15

Abstract: Facilitating the navigation of visually impaired people in indoor environments requires detecting indicating signs and informing the users about them. In this paper, we propose an indoor sign detection approach based on a lightweight anchor-free object detection model called FAM-CenterNet. The baseline of this work is CenterNet, an anchor-free object detection model with high performance and low computational complexity. A Foreground Attention Module (FAM) is introduced to extract target objects in real scenes with complex backgrounds. This module segments the foreground to extract relevant features of the target object using midground proposals and box-induced segmentation. In addition, the foreground module provides scale information to improve regression performance. Extensive experiments on two datasets demonstrate the efficiency of the proposed model for detecting both general objects and custom indoor signs: the Pascal VOC dataset was used to evaluate detection of general objects, and a custom dataset was used to evaluate detection of indoor signs. The reported results confirm that the proposed FAM improves the performance of the baseline model.
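The abstract describes a Foreground Attention Module that re-weights backbone features with a predicted foreground map before the CenterNet detection heads. The sketch below is a hypothetical, minimal PyTorch illustration of that idea; the module layout, channel sizes, residual re-weighting, and class count are assumptions for illustration, not the paper's implementation.

import torch
import torch.nn as nn

class ForegroundAttentionModule(nn.Module):
    """Illustrative foreground-attention block (not the paper's FAM).

    Predicts a per-pixel foreground probability from backbone features and
    uses it to re-weight those features before the detection heads, so
    background clutter contributes less to heatmap and size regression.
    """

    def __init__(self, in_channels: int = 64):
        super().__init__()
        # 1-channel foreground mask head; Sigmoid gives per-pixel probability.
        self.mask_head = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor):
        fg_mask = self.mask_head(feats)      # (B, 1, H, W) foreground probability
        attended = feats * fg_mask + feats   # residual re-weighting of features
        return attended, fg_mask


class CenterNetWithFAM(nn.Module):
    """Toy CenterNet-style head stack with the attention block in front."""

    def __init__(self, num_classes: int, in_channels: int = 64):
        super().__init__()
        self.fam = ForegroundAttentionModule(in_channels)
        # Standard CenterNet outputs: class heatmap, box size, center offset.
        self.heatmap = nn.Conv2d(in_channels, num_classes, 1)
        self.size = nn.Conv2d(in_channels, 2, 1)
        self.offset = nn.Conv2d(in_channels, 2, 1)

    def forward(self, feats: torch.Tensor):
        feats, fg_mask = self.fam(feats)
        return {
            "heatmap": torch.sigmoid(self.heatmap(feats)),
            "size": self.size(feats),
            "offset": self.offset(feats),
            "foreground": fg_mask,  # could be supervised with box-derived masks
        }


if __name__ == "__main__":
    # Features as produced by an upsampling backbone (e.g. stride-4 maps).
    dummy = torch.randn(1, 64, 128, 128)
    model = CenterNetWithFAM(num_classes=6)  # e.g. six indoor-sign classes (assumed)
    out = model(dummy)
    print({k: tuple(v.shape) for k, v in out.items()})

In such a design, the foreground branch would typically be supervised with masks derived from the ground-truth boxes (in the spirit of the abstract's box-induced segmentation), while the heatmap, size, and offset heads keep the usual CenterNet-style losses.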

Keywords: navigation assistance; visually impaired; disabilities; deep learning; object detection; indoor signs
JEL-codes: I I1 I3 Q Q5
Date: 2023

Downloads: (external link)
https://www.mdpi.com/1660-4601/20/6/5011/pdf (application/pdf)
https://www.mdpi.com/1660-4601/20/6/5011/ (text/html)


Persistent link: https://EconPapers.repec.org/RePEc:gam:jijerp:v:20:y:2023:i:6:p:5011-:d:1095097


IJERPH is currently edited by Ms. Jenna Liu

More articles in IJERPH from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Handle: RePEc:gam:jijerp:v:20:y:2023:i:6:p:5011-:d:1095097