FinRL-DeepSeek: LLM-Infused Risk-Sensitive Reinforcement Learning for Trading Agents
Mostapha Benhenda
Additional contact information
Mostapha Benhenda: LAGA - Laboratoire Analyse, Géométrie et Applications - UP8 - Université Paris 8 Vincennes-Saint-Denis - CNRS - Centre National de la Recherche Scientifique - Université Sorbonne Paris Nord
Working Papers from HAL
Abstract:
This paper presents a novel risk-sensitive trading agent that combines reinforcement learning and large language models (LLMs). We extend the Conditional Value-at-Risk Proximal Policy Optimization (CPPO) algorithm by adding risk-assessment and trading-recommendation signals generated by an LLM from financial news. Our approach is backtested on the Nasdaq-100 index benchmark, using financial news from the FNSPID dataset and the DeepSeek V3, Qwen 2.5, and Llama 3.3 language models. The code, data, and trading agents are available at: \url{https://github.com/benstaf/FinRL_DeepSeek}
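As a rough illustration of the idea described in the abstract, the sketch below shows one way an LLM-derived risk score could modulate a CVaR-style tail-risk penalty on a trading reward. This is a minimal assumption-laden sketch, not the authors' implementation: the 1-5 risk-score convention, the blending formula, and all function names are illustrative.

```python
# Minimal sketch (NOT the paper's implementation) of blending an
# LLM-generated risk score into a CVaR-penalized trading reward,
# in the spirit of risk-sensitive CPPO. All conventions here are
# assumptions for illustration.

def cvar(returns, alpha=0.05):
    """Conditional Value-at-Risk: mean of the worst alpha-fraction
    of observed returns (negative in adverse regimes)."""
    sorted_r = sorted(returns)
    k = max(1, int(len(sorted_r) * alpha))  # at least one tail sample
    return sum(sorted_r[:k]) / k

def risk_adjusted_reward(raw_return, recent_returns, llm_risk,
                         lam=0.5, alpha=0.05):
    """Penalize the raw step return by tail risk, scaled by an assumed
    LLM risk score in [1, 5] (1 = low risk, 5 = high risk)."""
    tail = cvar(recent_returns, alpha)   # <= 0 when losses dominate the tail
    risk_scale = (llm_risk - 1) / 4      # map 1..5 -> 0..1
    return raw_return + lam * risk_scale * tail  # negative tail penalizes

# Toy usage: high LLM risk score amplifies the tail-risk penalty.
history = [0.01, -0.02, 0.003, -0.05, 0.02, -0.01, 0.015, -0.03]
print(risk_adjusted_reward(0.01, history, llm_risk=5))  # penalized
print(risk_adjusted_reward(0.01, history, llm_risk=1))  # unpenalized
```

In this sketch, a risk score of 1 leaves the raw return untouched, while higher scores subtract a growing fraction of the (negative) CVaR, so the agent is discouraged from acting when news-implied risk and realized tail losses coincide.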
Keywords: Algorithmic Trading; Reinforcement Learning; Large Language Models; Trading Agents; Machine Learning
Date: 2025-02-10
Note: View the original document on HAL open archive server: https://hal.science/hal-04934770v1
Downloads: (external link)
https://hal.science/hal-04934770v1/document (application/pdf)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:hal:wpaper:hal-04934770
DOI: 10.24963/ijcai.2022/510
More papers in Working Papers from HAL
Bibliographic data for series maintained by CCSD.