EconPapers    

COVID-19 Detection Systems Using Deep-Learning Algorithms Based on Speech and Image Data

Ali Bou Nassif, Ismail Shahin, Mohamed Bader, Abdelfatah Hassan and Naoufel Werghi
Additional contact information
Ali Bou Nassif: Centre for Data Analytics and Cybersecurity, Department of Computer Engineering, University of Sharjah, Sharjah 27272, United Arab Emirates
Ismail Shahin: Centre for Data Analytics and Cybersecurity, Department of Electrical Engineering, University of Sharjah, Sharjah 27272, United Arab Emirates
Mohamed Bader: Centre for Data Analytics and Cybersecurity, Department of Electrical Engineering, University of Sharjah, Sharjah 27272, United Arab Emirates
Abdelfatah Hassan: Center for Cyber-Physical Systems, Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi 127788, United Arab Emirates
Naoufel Werghi: Center for Cyber-Physical Systems, Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi 127788, United Arab Emirates

Mathematics, 2022, vol. 10, issue 4, 1-24

Abstract: The global epidemic caused by COVID-19 has had a severe impact on human health. Since its declaration as a worldwide pandemic, the virus has wreaked havoc and affected a growing number of countries around the world. Recently, a substantial amount of work has been done by doctors, scientists, and many others working on the frontlines to battle the effects of the spreading virus. The integration of artificial intelligence, specifically deep- and machine-learning applications, in the health sector has contributed substantially to the fight against COVID-19 by providing a modern, innovative approach to detecting, diagnosing, treating, and preventing the virus. In this work, we focus mainly on the role of speech signals and/or image processing in detecting the presence of COVID-19. Three types of experiments were conducted, utilizing speech-based, image-based, and combined speech–image-based models. Long short-term memory (LSTM) was utilized for speech classification of the patient's cough, voice, and breathing, obtaining an accuracy that exceeds 98%. Moreover, the CNN models VGG16, VGG19, DenseNet201, ResNet50, InceptionV3, InceptionResNetV2, and Xception were benchmarked for the classification of chest X-ray images. The VGG16 model outperformed all other CNN models, achieving an accuracy of 85.25% without fine-tuning and 89.64% after fine-tuning. Furthermore, the combined speech–image-based model was evaluated using the same seven models, with the InceptionResNetV2 model attaining an accuracy of 82.22%. Accordingly, employing the combined speech–image-based model for diagnosis is unnecessary, since the speech-based and image-based models each achieved higher accuracy than the combined model.
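The speech pipeline the abstract describes (MFCC-style acoustic frames fed through an LSTM, with a binary read-out) can be illustrated with a minimal sketch. This is not the authors' implementation: the gate parameterization, toy weights, dimensions, and function names below are illustrative assumptions only.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(w, v):
    return sum(wi * vi for wi, vi in zip(w, v))

def lstm_step(x, h_prev, c_prev, p):
    """One LSTM time step for a single feature frame x.

    p maps a gate name to (W, U, b), where W weights the input frame,
    U weights the previous hidden state, and b is the bias. Gates:
    i (input), f (forget), g (candidate cell), o (output).
    """
    H = len(h_prev)
    def gate(name, act):
        W, U, b = p[name]
        return [act(dot(W[j], x) + dot(U[j], h_prev) + b[j]) for j in range(H)]
    i = gate("i", sigmoid)
    f = gate("f", sigmoid)
    g = gate("g", math.tanh)
    o = gate("o", sigmoid)
    c = [f[j] * c_prev[j] + i[j] * g[j] for j in range(H)]
    h = [o[j] * math.tanh(c[j]) for j in range(H)]
    return h, c

def classify_frames(frames, p, w_out, b_out):
    """Run the LSTM over a sequence of MFCC-like frames and apply a
    sigmoid read-out on the final hidden state (1 = COVID-positive)."""
    H = len(w_out)
    h, c = [0.0] * H, [0.0] * H
    for x in frames:
        h, c = lstm_step(x, h, c, p)
    return sigmoid(dot(w_out, h) + b_out)

# Toy example: input dim 3 (stand-in for MFCC coefficients), hidden size 2.
p = {g: ([[0.1] * 3] * 2, [[0.05] * 2] * 2, [0.0, 0.0]) for g in "ifgo"}
frames = [[0.2, -0.1, 0.4], [0.0, 0.3, -0.2]]
prob = classify_frames(frames, p, [1.0, -1.0], 0.0)
# Here prob is exactly 0.5: the symmetric toy weights make both hidden
# units identical, so the [1, -1] read-out cancels.
```

In practice the frames would come from an MFCC extractor over cough, voice, or breathing recordings, and the weights would be learned; the sketch only shows the data flow of the recurrent classifier.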

Keywords: convolution neural network; COVID-19; deep learning; long short-term memory; Mel-frequency cepstral coefficients; X-ray image
JEL-codes: C
Date: 2022

Downloads: (external link)
https://www.mdpi.com/2227-7390/10/4/564/pdf (application/pdf)
https://www.mdpi.com/2227-7390/10/4/564/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:10:y:2022:i:4:p:564-:d:747405


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-03-19
Handle: RePEc:gam:jmathe:v:10:y:2022:i:4:p:564-:d:747405