Mechanisms of mistrust: A Bayesian account of misinformation learning
Lion Schulz, Yannick Streicher, Eric Schulz, Rahul Bhui and Peter Dayan
PLOS Computational Biology, 2025, vol. 21, issue 5, 1-26
Abstract:
From the intimate realm of personal interactions to the sprawling arena of political discourse, discerning the trustworthy from the dubious is crucial. Here, we present a novel behavioral task and accompanying Bayesian models that allow us to study key aspects of this learning process in a tightly controlled setting. In our task, participants are confronted with several different types of (mis-)information sources, ranging from ones that lie to ones with biased reporting, and have to learn these attributes under varying degrees of feedback. We formalize inference in this setting as a doubly Bayesian learning process where agents simultaneously learn about the ground truth as well as the qualities of an information source reporting on this ground truth. Our model and detailed analyses reveal how participants can generally follow Bayesian learning dynamics, highlighting a basic human ability to learn about diverse information sources. This learning is also reflected in explicit trust reports about the sources. We additionally show how participants approached the inference problem with priors that held sources to be helpful. Finally, when outside feedback was noisier, participants still learned along Bayesian lines but struggled to pick up on biases in information. Our work pins down computationally the generally impressive human ability to learn the trustworthiness of information sources while revealing minor fault lines when it comes to noisier environments and news sources with a slant.Author summary: We are bombarded with information. But how do we learn whom to believe and whom to mistrust? For instance, how do we come to trust one news source’s report, while believing that another is biased, produces only useless noise, or might even be lying? And how do we incorporate such possibilities when updating our beliefs? Our work offers a computational and empirical perspective on this learning process. 
We developed a novel and well-controlled task that allows us to characterize human learning about a host of information sources. We show that people can sometimes be remarkably adept at discerning lying and helpful sources, even when receiving only uncertain outside feedback. We also show that participants need clear feedback to learn about a news provider’s slant.
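The "doubly Bayesian" idea described above can be sketched computationally: an agent maintains a joint belief over a latent ground truth and the type of source reporting on it, updating both from each report, and updating source beliefs alone when outside feedback reveals the truth. The sketch below is illustrative only, not the authors' actual model; the source types, their reporting probabilities, and the binary world state are all assumptions chosen for clarity.

```python
# Illustrative sketch (not the paper's model) of doubly Bayesian inference:
# jointly learning a binary ground truth and a source's type.
# The four source types and their probabilities are invented for this example.

SOURCE_TYPES = {
    # type: (P(report=1 | truth=1), P(report=1 | truth=0))
    "helpful": (0.9, 0.1),  # mostly tells the truth
    "random":  (0.5, 0.5),  # useless noise
    "lying":   (0.1, 0.9),  # systematically inverts the truth
    "biased":  (0.9, 0.6),  # slanted: over-reports "1" regardless of truth
}

def _likelihood(source, truth, report):
    """P(report | truth, source type)."""
    p1_if_true, p1_if_false = SOURCE_TYPES[source]
    p_report_1 = p1_if_true if truth == 1 else p1_if_false
    return p_report_1 if report == 1 else 1.0 - p_report_1

def joint_update(prior_truth, prior_source, report):
    """One Bayesian step over the joint (truth, source type) space."""
    joint = {
        (t, s): prior_truth[t] * prior_source[s] * _likelihood(s, t, report)
        for s in SOURCE_TYPES for t in (0, 1)
    }
    z = sum(joint.values())
    joint = {k: v / z for k, v in joint.items()}
    # Marginalize to recover updated beliefs about each variable separately.
    post_truth = {t: sum(v for (tt, _), v in joint.items() if tt == t)
                  for t in (0, 1)}
    post_source = {s: sum(v for (_, ss), v in joint.items() if ss == s)
                   for s in SOURCE_TYPES}
    return post_truth, post_source

def feedback_update(prior_source, report, true_state):
    """When feedback reveals the truth, update only the source beliefs."""
    post = {s: prior_source[s] * _likelihood(s, true_state, report)
            for s in SOURCE_TYPES}
    z = sum(post.values())
    return {s: v / z for s, v in post.items()}
```

Two properties of this toy setup echo the abstract: a prior that holds the source to be helpful lets a single report shift beliefs about the truth, and explicit feedback sharpens beliefs about the source (e.g., repeated reports contradicting revealed truths shift mass toward the "lying" type). With a uniform prior over these symmetric types, a report alone is uninformative, which is why the prior and feedback matter.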
Date: 2025
Downloads:
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1012814 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 12814&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1012814
DOI: 10.1371/journal.pcbi.1012814