FinFlowRL: An Imitation-Reinforcement Learning Framework for Adaptive Stochastic Control in Finance
Yang Li, Zhi Chen, Steve Y. Yang and Ruixun Zhang
Papers from arXiv.org
Abstract:
Traditional stochastic control methods in finance rely on simplifying assumptions that often fail in real-world markets. While these methods work well in specific, well-defined scenarios, they underperform when market conditions change. We introduce FinFlowRL, a novel framework for financial stochastic control that combines imitation learning with reinforcement learning. The framework first pretrains an adaptive meta-policy by learning from multiple expert strategies, then fine-tunes it through reinforcement learning in the noise space to optimize the generation process. By employing action chunking, that is, generating sequences of actions rather than single decisions, it addresses the non-Markovian nature of financial markets. FinFlowRL consistently outperforms individually optimized experts across diverse market conditions.
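The abstract describes a two-stage recipe: imitation pretraining of a meta-policy that emits action chunks, followed by reinforcement-learning fine-tuning applied to the noise fed into the generator. The sketch below is a minimal illustration of that recipe in PyTorch, not the authors' implementation; the network sizes, the names ChunkedMetaPolicy, pretrain_step, and finetune_step, and the Gaussian parametrization of the noise space are all assumptions for illustration (the paper's actual generative model is not reproduced here).

```python
# Hypothetical sketch (not the authors' code): a chunked meta-policy pretrained
# by behaviour cloning on expert action chunks, then fine-tuned by a
# REINFORCE-style update on the noise distribution that drives generation.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, CHUNK_LEN, NOISE_DIM = 8, 1, 5, 4  # illustrative sizes


class ChunkedMetaPolicy(nn.Module):
    """Maps (state, noise) to a chunk of future actions (action chunking)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NOISE_DIM, 64), nn.ReLU(),
            nn.Linear(64, CHUNK_LEN * ACTION_DIM),
        )

    def forward(self, state, noise):
        x = torch.cat([state, noise], dim=-1)
        return self.net(x).view(-1, CHUNK_LEN, ACTION_DIM)


policy = ChunkedMetaPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)


# --- Stage 1: imitation pretraining on (state, expert action chunk) pairs ---
def pretrain_step(states, expert_chunks):
    noise = torch.randn(states.shape[0], NOISE_DIM)
    pred = policy(states, noise)
    loss = nn.functional.mse_loss(pred, expert_chunks)  # behaviour cloning
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()


# --- Stage 2: RL fine-tuning in the noise space ---
# Keep the pretrained generator fixed here and instead learn a distribution
# over its noise input, reinforcing noise samples whose chunks score well.
noise_mean = torch.zeros(NOISE_DIM, requires_grad=True)
noise_opt = torch.optim.Adam([noise_mean], lr=1e-2)


def finetune_step(state, reward_fn):
    eps = torch.randn(NOISE_DIM)
    noise = (noise_mean + eps).detach()              # sampled noise (stop-gradient)
    with torch.no_grad():
        chunk = policy(state.unsqueeze(0), noise.unsqueeze(0))
    reward = reward_fn(chunk)                        # e.g. P&L of executing the chunk
    # REINFORCE update on a unit-variance Gaussian noise distribution
    log_prob = -0.5 * ((noise - noise_mean) ** 2).sum()
    loss = -reward * log_prob
    noise_opt.zero_grad(); loss.backward(); noise_opt.step()
    return float(reward)
```

In this toy version, acting on a whole chunk before replanning is what stands in for the paper's action chunking, letting each decision condition on a short horizon rather than a single Markovian step; how the real framework couples the noise-space update to the generative process is beyond what the abstract specifies.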
Date: 2025-09
Downloads: http://arxiv.org/pdf/2509.17964 Latest version (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2509.17964