Explainable Deep Learning for False Information Identification: An Argumentation Theory Approach

Kyuhan Lee and Sudha Ram
Additional contact information
Kyuhan Lee: Management Information Systems, Korea University Business School, Seoul 02841, South Korea
Sudha Ram: Management Information Systems, University of Arizona, Tucson, Arizona 85721

Information Systems Research, 2024, vol. 35, issue 2, 890-907

Abstract: In today’s world, where online information is proliferating in an unprecedented way, a significant challenge is whether to believe the information we encounter. Ironically, this flood of information provides us with an opportunity to combat false claims by understanding their nature. That is, with the help of machine learning, it is now possible to effectively capture the characteristics of false information by analyzing massive amounts of false claims published online. These methods, however, have neglected the nature of human argumentation, delegating the process of inferring truth to the black box of neural networks. This has created several challenges, namely latent text representations that entangle syntactic and semantic information, irrelevant parts of text being considered when abstracting text into a latent vector, and counterintuitive model explanations. To resolve these issues, based on Toulmin’s model of argumentation, we propose a computational framework that helps machine learning for false information identification (FII) understand the connection between a claim (whose veracity needs to be verified) and evidence (which contains information to support or refute the claim). Specifically, we first build a word network of a claim and evidence reflecting their syntax and convert it into a signed word network using their semantics. The structural balance of this signed word network is then calculated as a proxy metric for the consistency between the claim and the evidence. The consistency level is fed into machine learning as input, providing information for verifying claim veracity and for explaining the model’s decision making. Two experiments testing model performance and explainability reveal that our framework outperforms cutting-edge methods and has positive effects on human task performance, trust in algorithms, and confidence in decision making. Our results shed new light on the growing field of automated FII.
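
Illustrative sketch (not from the article): the following minimal Python example shows one way the structural-balance idea described above could be approximated. The tokenize and word_similarity helpers and the toy claim/evidence strings are hypothetical stand-ins for the authors' syntactic and semantic components; the sketch builds a signed word co-occurrence network and uses the fraction of balanced triangles as a consistency feature that a downstream classifier could consume.

import itertools
import networkx as nx

def tokenize(text):
    # Naive whitespace tokenizer; the paper relies on proper syntactic analysis.
    return [w.strip(".,").lower() for w in text.split()]

def word_similarity(w1, w2):
    # Placeholder semantic score in [-1, 1]; in practice this would come from
    # word embeddings (e.g., cosine similarity of pretrained vectors).
    return 1.0 if w1 == w2 else -0.2

def signed_word_network(claim, evidence, window=2):
    # Edges link words co-occurring within a small window (a rough syntax proxy);
    # edge signs are derived from semantic similarity.
    g = nx.Graph()
    for text in (claim, evidence):
        tokens = tokenize(text)
        for i, w in enumerate(tokens):
            for j in range(i + 1, min(i + 1 + window, len(tokens))):
                sign = 1 if word_similarity(w, tokens[j]) >= 0 else -1
                g.add_edge(w, tokens[j], sign=sign)
    return g

def balance_ratio(g):
    # Fraction of balanced triangles (product of edge signs is positive),
    # used here as a proxy for claim-evidence consistency.
    balanced = total = 0
    for a, b, c in itertools.combinations(g.nodes, 3):
        if g.has_edge(a, b) and g.has_edge(b, c) and g.has_edge(a, c):
            product = g[a][b]["sign"] * g[b][c]["sign"] * g[a][c]["sign"]
            total += 1
            balanced += product > 0
    return balanced / total if total else 0.0

claim = "The vaccine causes severe illness"
evidence = "Clinical trials show the vaccine does not cause severe illness"
consistency = balance_ratio(signed_word_network(claim, evidence))
print(f"consistency proxy: {consistency:.2f}")  # would be fed to a classifier as input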

Keywords: false information; fake news; explainable deep learning; machine learning; natural language processing; argumentation theory; structural balance theory
Date: 2024

Downloads: http://dx.doi.org/10.1287/isre.2020.0097 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:orisre:v:35:y:2024:i:2:p:890-907



 
Handle: RePEc:inm:orisre:v:35:y:2024:i:2:p:890-907