EconPapers    

On-line reinforcement learning for optimization of real-life energy trading strategy

Łukasz Lepak and Paweł Wawrzyński

Papers from arXiv.org

Abstract: An increasing share of energy is produced from renewable sources by many small producers. The output of these sources is volatile and, to some extent, random, exacerbating the problem of energy market balancing. In many countries, this balancing is done on the day-ahead (DA) energy markets. This paper considers automated trading on the DA energy market by a medium-sized prosumer. We model this activity as a Markov Decision Process and formalize a framework in which a strategy applicable in real life can be optimized with off-line data. We design a trading strategy that is fed with the available environmental information that can impact future prices, including weather forecasts. We use state-of-the-art reinforcement learning (RL) algorithms to optimize this strategy. For comparison, we also synthesize simple parametric trading strategies and optimize them with an evolutionary algorithm. Results show that our RL-based strategy generates the highest market profits.
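The abstract frames day-ahead trading as a Markov Decision Process. A minimal toy sketch of such an MDP is shown below; the environment class, state variables, noise model, and fixed-quantity policy are all illustrative assumptions, not the authors' actual formulation.

```python
import random

class DayAheadToyEnv:
    """Hypothetical toy day-ahead market MDP (not the paper's model).

    State: (hour, price_forecast); action: quantity offered for sale (MWh);
    reward: revenue at a noisy clearing price around the forecast.
    """

    def __init__(self, horizon=24, seed=0):
        self.horizon = horizon          # one trading day, hourly steps
        self.rng = random.Random(seed)  # seeded for reproducibility
        self.hour = 0
        self.forecast = 0.0

    def reset(self):
        self.hour = 0
        self.forecast = 50.0 + 10.0 * self.rng.random()
        return (self.hour, self.forecast)

    def step(self, quantity):
        # Clearing price = forecast plus Gaussian noise; reward = revenue.
        price = self.forecast + self.rng.gauss(0.0, 5.0)
        reward = price * quantity
        self.hour += 1
        done = self.hour >= self.horizon
        self.forecast = 50.0 + 10.0 * self.rng.random()
        return (self.hour, self.forecast), reward, done

# Roll out one episode with a naive fixed-quantity policy.
env = DayAheadToyEnv()
state = env.reset()
total = 0.0
done = False
while not done:
    state, r, done = env.step(1.0)  # always offer 1 MWh
    total += r
```

In the paper's setting, the fixed-quantity policy would be replaced by an RL-trained policy mapping the state (including weather forecasts) to bids.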

Date: 2023-03, Revised 2024-02
New Economics Papers: this item is included in nep-big, nep-cmp and nep-ene

Downloads: http://arxiv.org/pdf/2303.16266 Latest version (application/pdf)


Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2303.16266


More papers in Papers from arXiv.org
Bibliographic data for series maintained by arXiv administrators.

 
Page updated 2025-03-19
Handle: RePEc:arx:papers:2303.16266