Learning the structure of the world: The adaptive nature of state-space and action representations in multi-stage decision-making
Amir Dezfouli and Bernard W Balleine
PLOS Computational Biology, 2019, vol. 15, issue 9, 1-22
Abstract:
State-space and action representations form the building blocks of decision-making processes in the brain; states map external cues to the current situation of the agent, whereas actions provide the set of motor commands from which the agent can choose to achieve specific goals. Although these factors differ across environments, it is currently unknown whether or how accurately state and action representations are acquired by the agent, because previous experiments have typically provided this information a priori through instruction or pre-training. Here we studied how state and action representations adapt to reflect the structure of the world when such a priori knowledge is not available. We used a sequential decision-making task in which rats were required to pass through multiple states before reaching the goal, and for which the number of states and how they map onto external cues were unknown a priori. We found that, early in training, animals selected actions as if the task was not sequential and outcomes were the immediate consequence of the most proximal action. During the course of training, however, rats recovered the true structure of the environment and made decisions based on the expanded state-space, reflecting the multiple stages of the task. Similarly, we found that the set of actions expanded with training, although the emergence of new action sequences was sensitive to the experimental parameters and specifics of the training procedure. We conclude that the profile of choices shows a gradual shift from simple representations to more complex structures compatible with the structure of the world.
Author summary: Everyday decision-making tasks typically require taking multiple actions and passing through multiple states before reaching desired goals. Such states constitute the state-space of the task. Here we show that, contrary to current assumptions, the state-space is not static but rather expands during training as subjects discover new states that help them efficiently solve the task. Similarly, within the same task, we show that subjects initially consider only simple actions, but as training progresses the set of actions can expand to include useful action sequences that reach the goal directly by passing through multiple states. These results provide evidence that state-space and action representations are not static but are acquired and then adapted to reflect the structure of the world.
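The central idea of a state-space that expands with training can be illustrated with a toy simulation. This is not the authors' fitted model; the environment, the "admit any novel cue as a new state" expansion rule, and all names and parameters below are assumptions for illustration only:

```python
import random

class ExpandingStateAgent:
    """Toy value-learning agent whose state-space grows during training.

    The agent starts by treating the task as single-stage (one collapsed
    "start" state) and adds a new state whenever it encounters a novel cue.
    """

    def __init__(self, actions, alpha=0.1):
        self.actions = list(actions)
        self.alpha = alpha                 # learning rate
        self.q = {}                        # (state, action) -> value estimate
        self.known_states = {"start"}      # initially a single, collapsed state

    def value(self, state, action):
        return self.q.get((state, action), 0.0)

    def observe_cue(self, cue):
        # Expansion rule (assumed): any novel cue is admitted as a new state.
        self.known_states.add(cue)
        return cue

    def choose(self, state):
        # Greedy action selection with random tie-breaking.
        best = max(self.value(state, a) for a in self.actions)
        return random.choice([a for a in self.actions
                              if self.value(state, a) == best])

    def update(self, state, action, reward):
        # Simple delta-rule update toward the received reward.
        old = self.value(state, action)
        self.q[(state, action)] = old + self.alpha * (reward - old)


# Toy two-stage task: pressing "L" at the start reveals a second-stage cue,
# where "R" is rewarded; all other paths earn nothing.
def run_trial(agent):
    a1 = agent.choose("start")
    if a1 != "L":
        agent.update("start", a1, 0.0)
        return 0.0
    s2 = agent.observe_cue("stage2")       # the state-space expands here
    a2 = agent.choose(s2)
    reward = 1.0 if a2 == "R" else 0.0
    agent.update(s2, a2, reward)
    agent.update("start", a1, reward)
    return reward
```

Run repeatedly, such an agent typically both discovers the second stage and comes to prefer the rewarded path, a qualitative analogue of the shift from a collapsed to an expanded state representation described above.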
Date: 2019
Downloads:
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007334 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1007334
DOI: 10.1371/journal.pcbi.1007334