Reinforcement Learning or Active Inference?
Karl J Friston, Jean Daunizeau and Stefan J Kiebel
PLOS ONE, 2009, vol. 4, issue 7, 1-13
Abstract:
This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.
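To make the mountain-car setting concrete, here is a minimal sketch in the standard discrete-time parameterisation of the problem (the paper itself works with a continuous formulation and a full variational scheme). The proxy free_energy_proxy, the goal prior x_star = 0.5, and the greedy action selection are illustrative assumptions for this sketch, not the paper's method.

```python
import numpy as np

def step(x, v, a):
    """One step of the standard (Sutton-Barto) mountain-car dynamics;
    the paper itself uses a continuous-time variant."""
    v = np.clip(v + 0.001 * a - 0.0025 * np.cos(3 * x), -0.07, 0.07)
    x = x + v
    if x < -1.2:            # left wall: stop the car
        x, v = -1.2, 0.0
    return x, v

def free_energy_proxy(x, v, a, x_star=0.5):
    """Toy stand-in for variational free energy: squared error between
    a prior belief about position (the goal x_star) and the position
    predicted one step ahead under action a. An assumption for this
    sketch, not the paper's free-energy functional."""
    x_next, _ = step(x, v, a)
    return (x_next - x_star) ** 2

x, v = -0.5, 0.0            # start at rest at the bottom of the valley
for t in range(1000):
    # Greedily pick the admissible action that most reduces the proxy.
    a = min((-1.0, 0.0, 1.0), key=lambda u: free_energy_proxy(x, v, u))
    x, v = step(x, v, a)
    if x >= 0.5:
        print(f"goal reached at step {t}")
        break
else:
    print(f"goal not reached; final position {x:.2f}")
```

Running the sketch shows the greedy policy stalling short of the goal: the car must first move away from the target to build momentum. This is exactly the behaviour that, per the abstract, the paper's agents acquire by learning the causal structure of the environment, so the sketch only frames the problem that active inference is shown to solve.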
Date: 2009
Citations: 8 (tracked in EconPapers)
Downloads:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0006421 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 06421&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0006421
DOI: 10.1371/journal.pone.0006421