EconPapers    

Distributed and Multi-Agent Reinforcement Learning Framework for Optimal Electric Vehicle Charging Scheduling

Christos D. Korkas, Christos D. Tsaknakis, Athanasios Ch. Kapoutsis and Elias Kosmatopoulos
Additional contact information
Christos D. Korkas: Center for Research and Technology Hellas, Informatics & Telematics Institute (ITI-CERTH), 57001 Thessaloniki, Greece
Christos D. Tsaknakis: Center for Research and Technology Hellas, Informatics & Telematics Institute (ITI-CERTH), 57001 Thessaloniki, Greece
Athanasios Ch. Kapoutsis: Center for Research and Technology Hellas, Informatics & Telematics Institute (ITI-CERTH), 57001 Thessaloniki, Greece
Elias Kosmatopoulos: Center for Research and Technology Hellas, Informatics & Telematics Institute (ITI-CERTH), 57001 Thessaloniki, Greece

Energies, 2024, vol. 17, issue 15, 1-20

Abstract: The increasing number of electric vehicles (EVs) necessitates the installation of more charging stations. Managing these grid-connected charging stations leads to a multi-objective optimal control problem in which station profitability, user preferences, and grid requirements and stability should be optimized. Determining the optimal charging/discharging EV schedule is challenging, since the controller must exploit fluctuations in electricity prices, available renewable resources, and the stored energy of other vehicles, while coping with the uncertainty of EV arrival/departure scheduling. In addition, the growing number of connected vehicles results in complex state and action vectors, making it difficult for centralized and single-agent controllers to handle the problem. In this paper, we propose a novel distributed Multi-Agent Reinforcement Learning (MARL) framework that tackles the challenges mentioned above, producing controllers that achieve high performance under diverse conditions. In the proposed distributed framework, each charging spot makes its own charging/discharging decisions toward a cumulative cost reduction without sharing any private information, such as a vehicle's arrival/departure time or its state of charge, addressing the problem of cost minimization and user satisfaction. The framework significantly improves the scalability and sample efficiency of the underlying Deep Deterministic Policy Gradient (DDPG) algorithm. Extensive numerical studies and simulations demonstrate the efficacy of the proposed approach compared with Rule-Based Controllers (RBCs) and well-established, state-of-the-art centralized Reinforcement Learning (RL) algorithms, offering performance improvements of up to 25% in reducing energy cost and up to 20% in increasing user satisfaction.
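The distributed decision-making described in the abstract, where each charging spot acts on purely local observations without exchanging private data, can be illustrated with a minimal sketch. Note this is not the authors' implementation: the paper trains each agent's policy with DDPG (actor-critic neural networks), whereas the hand-written linear rule, class names, and observation fields below are hypothetical simplifications chosen only to show the per-spot decision structure.

```python
from dataclasses import dataclass

@dataclass
class LocalObservation:
    """What a single charging spot can see; nothing here is shared."""
    soc: float                # state of charge of the plugged-in EV, in [0, 1]
    price: float              # current electricity price signal (public), normalized
    time_to_departure: float  # hours until the vehicle leaves (private)

class ChargingSpotAgent:
    """One agent per charging spot; in the paper this policy is a DDPG actor."""
    def act(self, obs: LocalObservation) -> float:
        # Positive = charge, negative = discharge (vehicle-to-grid), in [-1, 1].
        urgency = (1.0 - obs.soc) / max(obs.time_to_departure, 0.25)
        # Cheap electricity + low SoC favors charging;
        # expensive electricity + high SoC favors discharging.
        action = urgency - obs.price
        return max(-1.0, min(1.0, action))

def step_station(agents, observations):
    # Each spot decides independently: no arrival/departure times or
    # states of charge are exchanged between agents.
    return [agent.act(obs) for agent, obs in zip(agents, observations)]

agents = [ChargingSpotAgent() for _ in range(3)]
obs = [
    LocalObservation(soc=0.2, price=0.3, time_to_departure=2.0),  # needs energy soon
    LocalObservation(soc=0.9, price=0.8, time_to_departure=6.0),  # can sell back
    LocalObservation(soc=0.5, price=0.5, time_to_departure=4.0),
]
actions = step_station(agents, obs)
```

In the paper's framework, the hand-coded rule in `act` is replaced by a learned continuous-action policy, which is what makes DDPG (rather than a discrete-action method) the natural base algorithm.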

Keywords: EV charging; energy scheduling; user preferences; smart grids; multi-agent reinforcement learning; distributed decision making
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2024

Downloads:
https://www.mdpi.com/1996-1073/17/15/3694/pdf (application/pdf)
https://www.mdpi.com/1996-1073/17/15/3694/ (text/html)


Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:17:y:2024:i:15:p:3694-:d:1443723

Energies is currently edited by Ms. Agatha Cao

Page updated 2025-03-19
Handle: RePEc:gam:jeners:v:17:y:2024:i:15:p:3694-:d:1443723