EconPapers    

A multidimensional distributional map of future reward in dopamine neurons

Margarida Sousa, Pawel Bujalski, Bruno F. Cruz, Kenway Louie, Daniel C. McNamee and Joseph J. Paton
Additional contact information
Margarida Sousa: Champalimaud Centre for the Unknown
Pawel Bujalski: Champalimaud Centre for the Unknown
Bruno F. Cruz: Champalimaud Centre for the Unknown
Kenway Louie: New York University
Daniel C. McNamee: Champalimaud Centre for the Unknown
Joseph J. Paton: Champalimaud Centre for the Unknown

Nature, 2025, vol. 642, issue 8068, 691-699

Abstract: Midbrain dopamine neurons (DANs) signal reward-prediction errors that teach recipient circuits about expected rewards [1]. However, DANs are thought to provide a substrate for temporal difference (TD) reinforcement learning (RL), an algorithm that learns the mean of temporally discounted expected future rewards, discarding useful information about experienced distributions of reward amounts and delays [2]. Here we present time–magnitude RL (TMRL), a multidimensional variant of distributional RL that learns the joint distribution of future rewards over time and magnitude. We also uncover signatures of TMRL-like computations in the activity of optogenetically identified DANs in mice during behaviour. Specifically, we show that there is significant diversity in both temporal discounting and tuning for the reward magnitude across DANs. These features allow the computation of a two-dimensional, probabilistic map of future rewards from just 450 ms of the DAN population response to a reward-predictive cue. Furthermore, reward-time predictions derived from this code correlate with anticipatory behaviour, suggesting that similar information is used to guide decisions about when to act. Finally, by simulating behaviour in a foraging environment, we highlight the benefits of a joint probability distribution of reward over time and magnitude in the face of dynamic reward landscapes and internal states. These findings show that rich probabilistic reward information is learnt and communicated to DANs, and suggest a simple, local-in-time extension of TD algorithms that explains how such information might be acquired and computed.
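The abstract describes a population-level computation: individual DANs differ in their temporal discount factors and in their tuning to reward magnitude, and together these differences are proposed to let the population encode a joint distribution over reward timing and amount. As a rough illustration of that idea only (this is not the authors' TMRL implementation; every parameter, distribution and variable name below is a hypothetical stand-in), the following Python sketch trains a population of TD-like units, each with its own discount factor and an expectile-style asymmetric learning rate, on a cue that predicts rewards of variable delay and magnitude:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: a cue predicts a reward whose delay and magnitude
# are drawn jointly (e.g. short wait / large reward vs long wait / small reward).
def sample_outcome():
    if rng.random() < 0.5:
        return 1.0, 4.0   # delay, magnitude: short and large
    return 4.0, 1.0       # delay, magnitude: long and small

# Population of value-learning units. Each unit i has its own temporal
# discount gamma_i and asymmetry tau_i, loosely analogous to the diversity
# in discounting and magnitude tuning reported across dopamine neurons.
n_units = 64
gammas = rng.uniform(0.3, 0.95, n_units)   # discount per unit of delay
taus = rng.uniform(0.1, 0.9, n_units)      # asymmetric learning-rate ratio
values = np.zeros(n_units)                 # learned cue values, one per unit
alpha = 0.05                               # base learning rate

for _ in range(20_000):
    delay, magnitude = sample_outcome()
    # Discounted return seen by unit i on this trial.
    targets = (gammas ** delay) * magnitude
    errors = targets - values              # per-unit prediction errors
    # Asymmetric update: positive errors scaled by tau, negative by (1 - tau),
    # as in expectile-style distributional RL.
    lr = alpha * np.where(errors > 0, taus, 1.0 - taus)
    values += lr * errors

# Units with long horizons (high gamma) and optimistic updates (high tau)
# converge to larger values; the pattern of values across (gamma, tau)
# carries joint information about when and how much reward is expected.
print(values[np.argsort(gammas)][:5])
print(values[np.argsort(gammas)][-5:])
```

In this toy setting, the spectrum of discount factors stands in for timing tuning and the learning-rate asymmetry stands in for magnitude tuning. Recovering an explicit two-dimensional probability map from such a code would additionally require a decoding step across the population (for example, inverting over the discount dimension), which this sketch deliberately omits.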

Date: 2025

Downloads: (external link)
https://www.nature.com/articles/s41586-025-09089-6 Abstract (text/html)
Access to the full text of the articles in this series is restricted.



Persistent link: https://EconPapers.repec.org/RePEc:nat:nature:v:642:y:2025:i:8068:d:10.1038_s41586-025-09089-6

Ordering information: This journal article can be ordered from
https://www.nature.com/

DOI: 10.1038/s41586-025-09089-6


Nature is currently edited by Magdalena Skipper

Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:nat:nature:v:642:y:2025:i:8068:d:10.1038_s41586-025-09089-6