The value of confidence: Confidence prediction errors drive value-based learning in the absence of external feedback
Lena Esther Ptasczynski,
Isa Steinecker,
Philipp Sterzer and
Matthias Guggenmos
PLOS Computational Biology, 2022, vol. 18, issue 10, 1-25
Abstract:
Reinforcement learning algorithms have a long-standing success story in explaining the dynamics of instrumental conditioning in humans and other species. While normative reinforcement learning models are critically dependent on external feedback, recent findings in the field of perceptual learning point to a crucial role of internally generated reinforcement signals based on subjective confidence when external feedback is not available. Here, we investigated the existence of such confidence-based learning signals in a key domain of reinforcement-based learning: instrumental conditioning. We conducted a value-based decision-making experiment which included phases with and without external feedback and in which participants reported their confidence in addition to choices. Behaviorally, we found signatures of self-reinforcement in phases without feedback, reflected in an increase of subjective confidence and choice consistency. To clarify the mechanistic role of confidence in value-based learning, we compared a family of confidence-based learning models with more standard models predicting either no change in value estimates or a devaluation over time when no external reward is provided. We found that confidence-based models indeed outperformed these reference models, whereby the learning signal of the winning model was based on the prediction error between current confidence and a stimulus-unspecific average of previous confidence levels. Interestingly, individuals with more volatile reward-based value updates in the presence of feedback also showed more volatile confidence-based value updates when feedback was not available. Together, our results provide evidence that confidence-based learning signals affect instrumentally learned subjective values in the absence of external feedback.
Author summary: Reinforcement learning models successfully simulate value-based learning processes (e.g., “How worthwhile is it to choose the same option again?”) when external reward feedback is provided (e.g., drops of sweet liquids or money). But does learning stagnate if such feedback is no longer provided? Recently, a number of studies have shown that subjective confidence can likewise act as an internal reward signal when external feedback is not available. These results are in line with the intuitive experience that being confident about choices and actions comes with a satisfying feeling of accomplishment. To better understand the role of confidence in value-based learning, we designed a study in which participants had to learn the value of choice options in phases with and without external feedback. Behaviorally, we found signatures of self-reinforcement, such as increased confidence and choice consistency, in phases without feedback. To examine the underlying mechanisms, we compared computational models, in which learning was guided by confidence signals, with more standard reinforcement learning models. A statistical comparison of these models showed that a confidence-based model in which generic confidence prediction errors (e.g., “Am I as confident as expected?”) guide learning indeed outperformed the standard models.
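The winning model described above updates values from a confidence prediction error, i.e., the difference between current confidence and a stimulus-unspecific expectation of confidence. Below is a minimal, hedged sketch of what such a delta-rule update could look like; the function name, learning rates, and variable names are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def update_values(values, chosen, confidence, conf_baseline,
                  alpha_v=0.1, alpha_c=0.1):
    """One trial of a confidence-based value update (illustrative sketch).

    values        : array of subjective values, one per choice option
    chosen        : index of the option chosen on this trial
    confidence    : reported confidence on this trial (e.g., in [0, 1])
    conf_baseline : running, stimulus-unspecific average of past confidence
    alpha_v       : learning rate for the value update (assumed)
    alpha_c       : learning rate for the confidence baseline (assumed)
    """
    # Confidence prediction error: current confidence vs. expected confidence
    conf_pe = confidence - conf_baseline

    # Reinforce the chosen option in proportion to the confidence prediction error
    values = values.copy()
    values[chosen] += alpha_v * conf_pe

    # Update the generic (stimulus-unspecific) confidence expectation
    conf_baseline += alpha_c * conf_pe
    return values, conf_baseline


# Example: three options; confidence exceeds the current expectation of 0.5,
# so the chosen option gains value even though no external reward was given.
vals = np.zeros(3)
baseline = 0.5
vals, baseline = update_values(vals, chosen=1, confidence=0.8, conf_baseline=baseline)
print(vals, baseline)
```

In this sketch the same prediction error both reinforces the chosen option and moves the generic confidence expectation, which captures the "Am I as confident as expected?" intuition from the author summary; the actual model specification is given in the article itself.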
Date: 2022
Downloads: (external link)
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1010580 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 10580&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1010580
DOI: 10.1371/journal.pcbi.1010580