Optimal Management for EV Charging Stations: A Win–Win Strategy for Different Stakeholders Using Constrained Deep Q-Learning
Athanasios Paraskevas,
Dimitrios Aletras,
Antonios Chrysopoulos,
Antonios Marinopoulos and
Dimitrios I. Doukas
Additional contact information
Athanasios Paraskevas: NET2GRID BV, Krystalli 4, 54630 Thessaloniki, Greece
Dimitrios Aletras: NET2GRID BV, Krystalli 4, 54630 Thessaloniki, Greece
Antonios Chrysopoulos: NET2GRID BV, Krystalli 4, 54630 Thessaloniki, Greece
Antonios Marinopoulos: European Climate, Infrastructure and Environment Executive Agency (CINEA), European Commission, B-1049 Brussels, Belgium
Dimitrios I. Doukas: NET2GRID BV, Krystalli 4, 54630 Thessaloniki, Greece
Energies, 2022, vol. 15, issue 7, 1-24
Abstract:
Given the growing awareness of rising energy demand and the effects of greenhouse gas emissions, decarbonizing the transportation sector is of great significance. In particular, the adoption of electric vehicles (EVs) is a promising option, provided that public charging infrastructure is available. However, devising a pricing and scheduling strategy for public EV charging stations is a non-trivial yet important task: a sub-optimal decision could lead to long waiting times or extreme changes to the power load profile. In addition, in the context of optimal pricing and scheduling for EV charging stations, the interests of different stakeholders, such as the station owner and the EV owners, ought to be taken into account. This work proposes a deep reinforcement learning (DRL)-based agent that optimizes pricing and charging control in a public EV charging station under a real-time varying electricity price. The primary goal is to maximize the station's profits while simultaneously ensuring that the customers' charging demands are satisfied. Moreover, the DRL approach is data-driven: it can operate under uncertainty without requiring an explicit model of the environment. Variants of scheduling and DRL training algorithms from the literature are also proposed to ensure that both conflicting objectives are met. Experimental results validate the effectiveness of the proposed approach.
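The core idea the abstract describes, a Q-learning agent whose reward trades the station's profit off against unserved charging demand, can be sketched in miniature. This is an illustrative toy, not the paper's actual algorithm: the state/action discretization, the penalty weight, and all variable names are assumptions, and a tabular update stands in for the deep Q-network.

```python
import numpy as np

# Hypothetical problem sizes: discretized electricity-price states and
# candidate pricing/scheduling actions (assumed, not from the paper).
N_STATES = 4
N_ACTIONS = 3
ALPHA, GAMMA = 0.1, 0.95   # learning rate and discount factor
PENALTY = 5.0              # assumed weight on unmet charging demand

Q = np.zeros((N_STATES, N_ACTIONS))

def constrained_reward(profit, unmet_demand):
    """Combine the two objectives: station profit minus a penalty
    for any customer charging demand left unserved."""
    return profit - PENALTY * unmet_demand

def q_update(s, a, r, s_next):
    """Standard Q-learning temporal-difference update."""
    td_target = r + GAMMA * Q[s_next].max()
    Q[s, a] += ALPHA * (td_target - Q[s, a])

# One illustrative transition: in price state 1, action 2 earns a
# profit of 3.0 but leaves 0.4 units of demand unmet, then the
# electricity price moves the environment to state 2.
r = constrained_reward(profit=3.0, unmet_demand=0.4)
q_update(s=1, a=2, r=r, s_next=2)
```

In the paper's setting the table would be replaced by a neural network over a continuous state (prices, arrivals, remaining demand), but the reward-shaping step above is the simplest way to see how a single scalar signal can encode both stakeholders' interests.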
Keywords: dynamic pricing; EV charging station; pricing and scheduling; reinforcement learning; deep Q-learning; demand response
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2022
Citations: 4 (in EconPapers)
Downloads:
https://www.mdpi.com/1996-1073/15/7/2323/pdf (application/pdf)
https://www.mdpi.com/1996-1073/15/7/2323/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:15:y:2022:i:7:p:2323-:d:777424
Energies is currently edited by Ms. Agatha Cao