
A Nudge to Credible Information as a Countermeasure to Misinformation: Evidence from Twitter

Elina H. Hwang and Stephanie Lee
Additional contact information
Elina H. Hwang: Michael G. Foster School of Business, University of Washington, Seattle, Washington 98195
Stephanie Lee: Michael G. Foster School of Business, University of Washington, Seattle, Washington 98195

Information Systems Research, 2025, vol. 36, issue 1, 621-636

Abstract: Fueled by social media, health misinformation is spreading rapidly across online platforms. Myths, rumors, and false information about vaccines are flourishing, and the aftermath can be disastrous. An even more concerning trend is that people increasingly rely on social media for healthcare information and tend to believe what they read there. Given the serious consequences of misinformation, this study explores the efficacy of a potential cure for the infodemic we face. Specifically, we focus on a countermeasure that Twitter used: nudging users toward credible information when they search for topics for which erroneous information is rampant. Twitter's policy is unique in that the intervention relies not on censorship but on redirecting users away from false information and toward facts. Our analysis uses 1,468 news articles that contain misinformation about health topics such as measles, vaccines, and cancer. The analysis reveals that Twitter's nudging policy reduces misinformation diffusion. After the policy's introduction, a news article that contains misinformation is less likely to start a diffusion process on Twitter. In addition, tweets that contain a link to a misinformation article are less likely to be retweeted, quoted, or replied to, which leads to a significant reduction in the aggregate number of tweets each misinformation article attracts. We further uncover that the observed reduction is driven by decreases both in original tweet posts (those that first introduce misinformation news articles to the Twitter platform) and in posts resharing the misinformation, although the reduction is larger for resharing posts. Last, we find that the effect is driven primarily by a decrease in activity by human-like accounts that share links to unverified claims, not by a decrease in activity by bot-like accounts. Our findings suggest that a misinformation policy that relies on a nudge toward a credible source rather than on censorship can suppress misinformation diffusion.

Keywords: misinformation; diffusion; online platform; countermeasure
Date: 2025

Downloads: (external link)
http://dx.doi.org/10.1287/isre.2021.0491 (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:inm:orisre:v:36:y:2025:i:1:p:621-636


More articles in Information Systems Research from INFORMS. Contact information at EDIRC.
Bibliographic data for series maintained by Chris Asher.

 
Handle: RePEc:inm:orisre:v:36:y:2025:i:1:p:621-636