Detecting misinformation through framing theory: the frame element-based model
Guan Wang,
Rebecca Frederick,
Jinglong Duan,
William B. L. Wong,
Verica Rupar,
Weihua Li and
Quan Bai
Additional contact information
Guan Wang: Auckland University of Technology
Rebecca Frederick: Auckland University of Technology
Jinglong Duan: Auckland University of Technology
William B. L. Wong: Auckland University of Technology
Verica Rupar: Auckland University of Technology
Weihua Li: Auckland University of Technology
Quan Bai: University of Tasmania
Journal of Computational Social Science, 2025, vol. 8, issue 3, No 18, 25 pages
Abstract:
In this paper, we delve into the rapidly evolving challenge of misinformation detection, specifically focusing on the nuanced manipulation of narrative frames, an under-explored area within the Artificial Intelligence (AI) community. The potential for generative AI models to produce misleading narratives underscores the urgency of addressing this issue. Drawing from communication and framing theories, we posit that the presentation or ‘framing’ of accurate information can dramatically alter its interpretation, potentially leading to misinformation. In particular, the intricate user interactions on social networks play an important role in this process, as these platforms provide an unsupervised environment for disseminating misinformation among individuals. We highlight this issue through real-world examples, demonstrating how shifts in narrative frames can transmute fact-based information into misinformation. To tackle this challenge, we propose an approach that leverages pre-trained large language models and deep neural networks to detect misinformation originating from accurate facts that are portrayed under different frames. These AI techniques offer strong capabilities for identifying complex patterns within unstructured data, which is critical for examining the subtleties of narrative frames. The objective of this paper is to bridge a significant research gap in the AI domain, providing insights and methodologies for tackling framing-induced misinformation and thus contributing to the advancement of responsible and trustworthy AI technologies. Several experiments are conducted, and the results demonstrate the distinct impacts of individual framing-theory elements, supporting the rationale for applying framing theory to improve misinformation detection performance.
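The listing does not detail the frame element-based architecture, so the following is only a minimal sketch of the general idea described in the abstract: combining a pre-trained language model encoding of a text with frame-element features before a neural classification head. All names (FrameAwareClassifier, n_frame_features, the choice of bert-base-uncased, the two-class output) are illustrative assumptions, not the authors' model.

```python
# Illustrative sketch only: assumes a Hugging Face-style encoder and a small
# feed-forward head over pooled text embeddings plus frame-element features.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class FrameAwareClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", n_frame_features=8):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(encoder_name)
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Classify the concatenation [text embedding ; frame-element features].
        self.head = nn.Sequential(
            nn.Linear(hidden + n_frame_features, 128),
            nn.ReLU(),
            nn.Linear(128, 2),  # hypothetical classes: reliably framed vs. misleadingly framed
        )

    def forward(self, texts, frame_features):
        batch = self.tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        # Mean-pool the last hidden states as a simple sentence representation.
        pooled = self.encoder(**batch).last_hidden_state.mean(dim=1)
        return self.head(torch.cat([pooled, frame_features], dim=-1))

# Usage with dummy frame-element scores (e.g., problem definition, causal
# attribution, moral evaluation, treatment recommendation, ...).
model = FrameAwareClassifier()
logits = model(["The figures are accurate but selectively framed."], torch.rand(1, 8))
print(logits.softmax(dim=-1))
```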
Keywords: Misinformation detection; Framing analysis; Framing extraction; Human-centric social good
Date: 2025
Downloads: http://link.springer.com/10.1007/s42001-025-00403-w (abstract, text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:jcsosc:v:8:y:2025:i:3:d:10.1007_s42001-025-00403-w
Ordering information: This journal article can be ordered from
http://www.springer. ... iences/journal/42001
DOI: 10.1007/s42001-025-00403-w
Journal of Computational Social Science is currently edited by Takashi Kamihigashi
More articles in Journal of Computational Social Science from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.