Nonparametric Adaptive Bayesian Stochastic Control Under Model Uncertainty
Tao Chen and
Papers from arXiv.org
In this paper we propose a new methodology for solving a discrete-time stochastic Markovian control problem under model uncertainty. By utilizing the Dirichlet process, we model the unknown distribution of the underlying stochastic process as a random probability measure and achieve online learning in a Bayesian manner. Our approach thus integrates optimization with dynamic learning. Under model uncertainty, the nonparametric framework avoids the model misspecification that commonly arises in other classical control methods. We then develop a numerical algorithm to handle the infinite-dimensional state space in this setup, utilizing Gaussian process surrogates to obtain a functional representation of the value function in the Bellman recursion. We also build separate surrogates for the optimal control to eliminate repeated optimizations on out-of-sample paths, yielding computational speed-ups. Finally, we demonstrate the financial advantages of the nonparametric Bayesian framework over parametric approaches such as strong robust and time-consistent adaptive control.
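The two ingredients of the abstract — Dirichlet-process posterior updating and a Gaussian process surrogate for the value function — can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: the base measure G0 = N(0, 1), the concentration alpha0, the RBF kernel hyperparameters, and the 1-D "value function" below are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Dirichlet-process posterior predictive (Blackwell-MacQueen urn) ---
# Assumed base measure G0 = N(0, 1) and concentration alpha0 = 2.0.
def dp_posterior_predictive(obs, alpha0=2.0, size=1, rng=rng):
    """Draw from the DP posterior predictive given past observations `obs`:
    with probability alpha0 / (alpha0 + n) draw a fresh atom from G0,
    otherwise resample uniformly from the observations."""
    n = len(obs)
    draws = np.empty(size)
    for i in range(size):
        if rng.uniform() < alpha0 / (alpha0 + n):
            draws[i] = rng.normal()      # new atom from the base measure G0
        else:
            draws[i] = rng.choice(obs)   # reuse an observed value
    return draws

# --- Gaussian-process surrogate of a value function (toy 1-D case) ---
def rbf(A, B, lengthscale=0.4):
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def fit_gp(X, y, noise=1e-6):
    """Exact GP regression; returns the posterior-mean predictor."""
    K = rbf(X, X) + noise * np.eye(len(X))
    coef = np.linalg.solve(K, y)          # K^{-1} y
    return lambda Xs: rbf(Xs, X) @ coef

# Toy "value function" evaluated on design points, standing in for one
# Bellman backup step; the GP gives a functional representation of V.
X = np.linspace(-2.0, 2.0, 25)
V = np.tanh(X) + 0.1 * X**2
surrogate = fit_gp(X, V)

obs = rng.normal(size=50)                 # simulated past observations
print(dp_posterior_predictive(obs, size=3))
print(float(np.max(np.abs(surrogate(X) - V))))
```

The urn scheme is what makes the learning "online": each new observation simply enlarges `obs`, shifting posterior mass from the base measure toward the empirical distribution, while the surrogate replaces repeated value-function evaluations in the backward recursion.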
New Economics Papers: this item is included in nep-cmp and nep-ore
Downloads: http://arxiv.org/pdf/2011.04804 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2011.04804