Quantitative Trading using Deep Q Learning

Soumyadip Sarkar

Papers from arXiv.org

Abstract: Reinforcement learning (RL) is a subfield of machine learning that has been applied in many domains, including robotics, gaming, and autonomous systems. Interest is growing in applying RL to quantitative trading, where the goal is to execute trades that generate profit in financial markets. This paper presents the use of RL for quantitative trading and reports a case study of an RL-based trading algorithm. The results show that RL can be a useful tool for quantitative trading and can outperform traditional trading algorithms. Reinforcement learning for quantitative trading is a promising research area that can help build more sophisticated and efficient trading systems. Future work can explore other reinforcement learning techniques, additional data sources, and testing across a wider range of asset classes. Together, these results show the potential of reinforcement learning for quantitative trading and the need for further research and development in this area. By improving the sophistication and efficiency of trading systems, it may be possible to make financial markets more efficient and generate higher returns for investors.
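The listing does not include code, so the following is only a minimal sketch of the kind of Deep Q-Learning trading loop the abstract describes. All of it is assumed for illustration: the three-action space (sell/hold/buy), the return-window state, the synthetic price series, and every hyperparameter are hypothetical and are not taken from the paper; a separate target network is omitted for brevity.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

# Hypothetical setup (not from the paper): a 3-action agent
# (0 = sell, 1 = hold, 2 = buy) observing a window of recent log returns
# computed from a synthetic price series.
WINDOW, N_ACTIONS, GAMMA, EPS = 10, 3, 0.99, 0.1


class QNetwork(nn.Module):
    """Small MLP mapping a return window to Q-values for each action."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WINDOW, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


def make_prices(n=500, seed=0):
    # Geometric random walk stand-in for real market data.
    rng = np.random.default_rng(seed)
    return 100 * np.exp(np.cumsum(rng.normal(0, 0.01, n)))


prices = make_prices()
returns = np.diff(np.log(prices)).astype(np.float32)

q_net = QNetwork()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)  # simple experience replay

for t in range(WINDOW, len(returns) - 1):
    state = torch.tensor(returns[t - WINDOW:t])

    # Epsilon-greedy action selection.
    if random.random() < EPS:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(state).argmax())

    # Reward: next-period return scaled by position (-1, 0, +1).
    reward = (action - 1) * returns[t]
    next_state = torch.tensor(returns[t - WINDOW + 1:t + 1])
    buffer.append((state, action, reward, next_state))

    # One-step TD update on a sampled minibatch.
    if len(buffer) >= 32:
        batch = random.sample(buffer, 32)
        s = torch.stack([b[0] for b in batch])
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch])
        s2 = torch.stack([b[3] for b in batch])
        q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + GAMMA * q_net(s2).max(1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

A fuller experiment along the lines the abstract suggests would typically add a target network, include transaction costs in the reward, and evaluate the learned policy on held-out market data rather than the training series.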

Date: 2023-04, Revised 2025-02
New Economics Papers: this item is included in nep-big, nep-cmp and nep-mst
References: View complete reference list from CitEc
Citations: View citations in EconPapers (1)

Downloads: (external link)
http://arxiv.org/pdf/2304.06037 Latest version (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2304.06037


More papers in Papers from arXiv.org
Bibliographic data for series maintained by arXiv administrators (help@arxiv.org).

 
Handle: RePEc:arx:papers:2304.06037