A survey of explainable AI techniques for detection of fake news and hate speech on social media platforms
Vaishali U. Gongane,
Mousami V. Munot and
Alwin D. Anuse
Additional contact information
Vaishali U. Gongane: SCTR’s Pune Institute of Computer Technology, SPPU
Mousami V. Munot: SCTR’s Pune Institute of Computer Technology, SPPU
Alwin D. Anuse: Dr Vishwanath Karad MIT-WPU
Journal of Computational Social Science, 2024, vol. 7, issue 1, No 23, 587-623
Abstract:
Artificial intelligence (AI) is a computing field that has played a pivotal role in delivering technological revolutions in sectors such as business, healthcare, finance, social networking, entertainment, and news. With its ability to process and analyze any form of data (image, text, audio, and video) on high-performance computing machines, AI is considered an integral part of Industry 4.0. Social media and the internet are another advance in digital communication that has had a tremendous impact on society. Social networking sites such as Facebook, Twitter, YouTube, and Instagram provide a platform for people to freely express their thoughts and views. The past decade has, however, witnessed an ugly side of social media in the dissemination of online fake news and hate speech. Social networking sites make use of AI tools to tackle the growing volume of hate speech and fake news content. Natural language processing (NLP), a field of AI, comprises techniques that process vast amounts of online content, combined with machine learning (ML) and deep learning (DL) algorithms that learn representations of the data for detection, classification, and prediction tasks. AI algorithms are often regarded as "black boxes" whose decisions are sometimes biased and lack transparency. Many state-of-the-art AI algorithms also show low recall and low F1-scores on diverse forms of hate speech and fake news. The inadequacy of explanations for the decisions AI makes in classification and prediction tasks is therefore a crucial challenge. Explainable AI (XAI) is an emerging research field that adds a new dimension to AI: explainability. XAI offers the ability to interpret and explain the decisions made by ML models, and this capability has been deployed in applications such as autonomous vehicles and medical diagnostics. In the context of social media content, XAI plays an important role in interpreting the diverse forms of hate speech and fake news. The literature reports various XAI methods, such as SHAP (SHapley Additive exPlanations) and Local Interpretable Model-agnostic Explanations (LIME), for the detection of hate speech and fake news content. This paper surveys XAI models for the detection and classification of hate speech and fake news on social media platforms as reported in the research literature, reviews the evaluation metrics used to quantify XAI techniques in these tasks, and discusses the technical and ethical challenges involved when XAI models are used to handle the nuances of online text published on social media platforms.
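To make concrete the kind of post-hoc explanation discussed in the abstract, the sketch below shows how LIME can attribute a hate-speech prediction to individual tokens of a post. It is an illustrative sketch rather than the method of the surveyed paper: the toy corpus, class names, and TF-IDF plus logistic-regression classifier are hypothetical placeholders, and it assumes the scikit-learn and lime Python packages are available.

    # Minimal sketch: explaining a toy hate-speech classifier with LIME.
    # The corpus, labels, and class names are hypothetical placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from lime.lime_text import LimeTextExplainer

    texts = [
        "have a wonderful day everyone",
        "thanks for sharing this helpful article",
        "those people are vermin and should disappear",
        "I hate that group, they ruin everything",
    ]
    labels = [0, 0, 1, 1]  # 0 = neutral, 1 = hateful (toy labels)

    # Any text classifier exposing predict_proba on raw strings would do here.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    explainer = LimeTextExplainer(class_names=["neutral", "hateful"])
    explanation = explainer.explain_instance(
        "those people ruin everything",  # post to be explained
        model.predict_proba,             # black-box prediction function
        num_features=5,                  # top tokens to report
    )
    # (token, weight) pairs: positive weights push the prediction towards "hateful".
    print(explanation.as_list())

SHAP can be applied in the same spirit, replacing LIME's local surrogate model with additive Shapley-value attributions computed over the same predict_proba function.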
Keywords: Explainable AI; LIME; SHAP; Hate speech; Fake news
Date: 2024
Downloads: http://link.springer.com/10.1007/s42001-024-00248-9 (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:jcsosc:v:7:y:2024:i:1:d:10.1007_s42001-024-00248-9
Ordering information: This journal article can be ordered from
http://www.springer. ... iences/journal/42001
DOI: 10.1007/s42001-024-00248-9
Journal of Computational Social Science is currently edited by Takashi Kamihigashi