EXPLAINABILITY OF NEURAL NETWORK CLUSTERING IN INTERPRETING THE COVID-19 EMERGENCY DATA
Zhenhua Yu,
Ayesha Sohail,
Taher A. Nofal and
João Manuel R. S. Tavares
Additional contact information
Zhenhua Yu: Institute of Systems Security and Control, College of Computer Science and Technology, Xi’an University of Science and Technology, Xi’an 710054, P. R. China
Ayesha Sohail: Department of Mathematics, COMSATS University Islamabad, Lahore Campus, Lahore, Pakistan
Taher A. Nofal: Department of Mathematics and Statistics, Faculty of Science, Taif University, Taif, Saudi Arabia
João Manuel R. S. Tavares: Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Departamento de Engenharia Mecânica, Faculdade de Engenharia, Universidade do Porto, Porto, Portugal
FRACTALS (fractals), 2022, vol. 30, issue 05, 1-12
Abstract:
Among hospitalization causes and cases, clinical emergencies are critical, and the data reported for such patients are often biased and poorly managed because of the chaotic circumstances in which they are collected. Over the past year, the frequent waves of COVID-19 and the emergencies they triggered have produced exactly this kind of chaos. The data banks linked to clinical emergencies therefore require rigorous quantitative and qualitative analysis to derive interpretable conclusions, to guide future emergency measures, and to support the development of explainable artificial intelligence tools. This requires a clear understanding of the data's patterns and topology, which is a major challenge for multidimensional data sets. Mathematically, topological mapping can address this problem by projecting higher-dimensional data onto a two-dimensional representation based on the overall associations among variables. Proper data mining and pattern recognition can help speed up patient admission, deliver medical resources on time, and improve patient administration. In this paper, the importance of self-organizing maps for interpreting hospital data, particularly during the COVID-19 epidemic, is discussed in detail, and important variables are identified with the aid of networks and mappings.
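As a rough illustration of the topological mapping described in the abstract, the sketch below trains a minimal self-organizing map in Python/NumPy that projects high-dimensional records onto a 2D grid of nodes. The synthetic data, grid size, and training schedule are assumptions made for illustration only; this is not the authors' implementation.

```python
# Minimal self-organizing map (SOM) sketch: maps high-dimensional records
# onto a 2D grid so that similar records land on nearby nodes.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a multidimensional emergency-admission dataset:
# 500 patient records with 8 normalized clinical variables each.
data = rng.random((500, 8))

grid_h, grid_w, dim = 10, 10, data.shape[1]
weights = rng.random((grid_h, grid_w, dim))       # one codebook vector per node

n_iters, lr0, sigma0 = 2000, 0.5, max(grid_h, grid_w) / 2
grid_y, grid_x = np.mgrid[0:grid_h, 0:grid_w]     # node coordinates on the map

for t in range(n_iters):
    x = data[rng.integers(len(data))]             # pick a random record
    # Best-matching unit (BMU): node whose weight vector is closest to the record.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu_y, bmu_x = np.unravel_index(np.argmin(dists), dists.shape)

    # Decay the learning rate and neighborhood radius over time.
    lr = lr0 * np.exp(-t / n_iters)
    sigma = sigma0 * np.exp(-t / n_iters)

    # Gaussian neighborhood around the BMU pulls nearby nodes toward the record.
    grid_dist2 = (grid_y - bmu_y) ** 2 + (grid_x - bmu_x) ** 2
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

# Project each record onto its best-matching node to read off the 2D map.
bmus = [np.unravel_index(np.argmin(np.linalg.norm(weights - rec, axis=2)),
                         (grid_h, grid_w))
        for rec in data]
```

Inspecting which records share (or neighbor) a best-matching node gives the kind of two-dimensional, association-based view of a high-dimensional data set that the paper uses to identify important variables.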
Keywords: Explainable Artificial Intelligence; Self-Organizing Maps; Accuracy and Precision; Time Series Modeling
Date: 2022
Citations: View citations in EconPapers (1)
Downloads: (external link)
http://www.worldscientific.com/doi/abs/10.1142/S0218348X22401223
Access to full text is restricted to subscribers
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:wsi:fracta:v:30:y:2022:i:05:n:s0218348x22401223
Ordering information: This journal article can be ordered from
DOI: 10.1142/S0218348X22401223
FRACTALS (fractals) is currently edited by Tara Taylor
More articles in FRACTALS (fractals) from World Scientific Publishing Co. Pte. Ltd.
Bibliographic data for series maintained by Tai Tone Lim.