LEARNING, EXPLORATION AND CHAOTIC POLICIES
Alexei B. Potapov and
M. K. Ali
Additional contact information
Alexei B. Potapov: Department of Physics, The University of Lethbridge, 4401 University Dr. W Lethbridge, Alberta T1K 3M4, Canada
M. K. Ali: Department of Physics, The University of Lethbridge, 4401 University Dr. W Lethbridge, Alberta T1K 3M4, Canada
International Journal of Modern Physics C (IJMPC), 2000, vol. 11, issue 07, 1455-1464
Abstract:
We consider different versions of exploration in reinforcement learning. As the test problem, we use navigation in a shortcut maze. It is shown that a chaotic ε-greedy policy may be as efficient as a random one. The best results were obtained with a model chaotic neuron. An exploration strategy can therefore be implemented in a deterministic learning system such as a neural network.
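The abstract's central idea — replacing the random number source in an ε-greedy policy with a deterministic chaotic map — can be sketched as follows. This is an illustrative reconstruction, not the authors' exact model: the logistic map is used here as a generic chaotic source (the paper's best results used a model chaotic neuron, whose equations are not given in this record), and the Q-values and ε are arbitrary.

```python
def logistic_chaos(x=0.1234, r=4.0):
    """Generator for the logistic map x -> r*x*(1-x), a simple
    deterministic chaotic source on (0, 1) for r = 4. Note its
    invariant density is not uniform (it is arcsine-distributed),
    one way a chaotic policy can differ from a truly random one."""
    while True:
        x = r * x * (1.0 - x)
        yield x

def chaotic_epsilon_greedy(q_values, epsilon, chaos):
    """epsilon-greedy action selection driven by a chaotic stream:
    exploit the best-valued action with probability ~(1 - epsilon),
    otherwise explore by mapping the next chaotic value to an action."""
    if next(chaos) < epsilon:
        # Exploration step: scale a chaotic value in (0,1) to an index.
        return int(next(chaos) * len(q_values)) % len(q_values)
    # Greedy step: pick the action with the largest estimated value.
    return max(range(len(q_values)), key=lambda a: q_values[a])

chaos = logistic_chaos()
q = [0.1, 0.5, 0.2]          # hypothetical action values
actions = [chaotic_epsilon_greedy(q, 0.1, chaos) for _ in range(1000)]
```

Because the chaotic generator is deterministic, the whole policy — value estimates and exploration alike — can in principle be realized inside a single deterministic neural network, which is the point the abstract makes.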
Keywords: Reinforcement Learning; Exploration; Chaos; Neural Networks
Date: 2000
Downloads: http://www.worldscientific.com/doi/abs/10.1142/S0129183100001309 (access to full text is restricted to subscribers)
Persistent link: https://EconPapers.repec.org/RePEc:wsi:ijmpcx:v:11:y:2000:i:07:n:s0129183100001309
DOI: 10.1142/S0129183100001309
International Journal of Modern Physics C (IJMPC) is currently edited by H. J. Herrmann