EconPapers    

Deep reinforcement learning-based plug-in electric vehicle charging/discharging scheduling in a home energy management system

Shaza H. Mansour, Sarah M. Azzam, Hany M. Hasanien, Marcos Tostado-Véliz, Abdulaziz Alkuhayli and Francisco Jurado

Energy, 2025, vol. 316, issue C

Abstract: With the emergence of plug-in electric vehicles (PEVs) in smart grids (SGs), where they contribute to SG decarbonization, it has become crucial to harness PEVs by optimizing their charging and discharging schedules in a smart-home setting. However, uncertainties in arrival time, departure time, and state of charge (SOC) make scheduling challenging. This paper proposes a two-stage approach that minimizes both the PEV charging cost and the electricity bill of the smart home. In the first stage, a deep reinforcement learning (DRL) method based on the soft actor-critic (SAC) algorithm schedules smart-home PEV charging/discharging (C/D) under a real-time pricing (RTP) tariff. In the second stage, the resulting schedule is fed into a home energy management system (HEMS) problem, formulated as a mixed-integer linear programming (MILP) model that schedules home appliances and a battery energy storage system (BESS). SAC is compared with other reinforcement learning (RL) algorithms and with disorderly PEV C/D on four samples from different seasons. The results show that SAC achieves the highest average rewards and the lowest charging cost while reaching the required SOC at departure. Compared with the other RL algorithms, SAC reduces the PEV charging cost by up to 51.45 % in summer, and it yields a substantial cost reduction relative to the disorderly C/D schedule. The HEMS appliance and BESS schedules with the SAC-scheduled PEV are shown for the four samples and compared with the HEMS schedules under disorderly PEV scheduling and without PEVs. HEMS schedules with SAC-scheduled PEVs reduce costs by up to 83.29 % compared with HEMS under disorderly PEV scheduling and by up to 15.69 % compared with the HEMS schedule without PEVs.
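To make the abstract's setting concrete, the following is a minimal toy sketch of a PEV C/D environment of the kind a SAC agent would interact with: state = (SOC, real-time price), action = charging (+) or discharging (−) power, reward = negative cost with a penalty for missing the departure SOC. All parameter values (capacity, charger power, tariff, penalty weight) are hypothetical illustrations; the paper's actual environment, reward shaping, and SAC agent are not reproduced here.

```python
# Toy PEV charging/discharging environment under a real-time pricing (RTP)
# tariff. Illustrative only: numbers below are assumptions, not the paper's.

class ToyPEVEnv:
    def __init__(self, prices, capacity_kwh=40.0, max_power_kw=7.0,
                 soc_init=0.3, soc_target=0.9, efficiency=0.95):
        self.prices = prices            # $/kWh per hour of the plug-in window
        self.capacity = capacity_kwh    # battery capacity
        self.max_power = max_power_kw   # charger power limit
        self.eff = efficiency           # one-way conversion efficiency
        self.soc_init = soc_init
        self.soc_target = soc_target    # required SOC at departure
        self.reset()

    def reset(self):
        self.t = 0
        self.soc = self.soc_init
        self.cost = 0.0
        return (self.soc, self.prices[self.t])

    def step(self, power_kw):
        """Apply one hour of charging (power > 0) or discharging (power < 0)."""
        power = max(-self.max_power, min(self.max_power, power_kw))
        if power >= 0:   # charging: grid energy -> battery, with losses
            delta = power * self.eff / self.capacity
        else:            # discharging (V2G): battery -> grid, with losses
            delta = power / self.eff / self.capacity
        self.soc = max(0.0, min(1.0, self.soc + delta))
        self.cost += power * self.prices[self.t]   # negative power earns revenue
        self.t += 1
        done = self.t >= len(self.prices)
        # reward: negative hourly cost, plus a penalty for departing below target
        reward = -power * self.prices[self.t - 1]
        if done and self.soc < self.soc_target:
            reward -= 10.0 * (self.soc_target - self.soc)
        obs = (self.soc, self.prices[self.t] if not done else 0.0)
        return obs, reward, done


# A naive "charge at full power whenever plugged in" policy, analogous to the
# disorderly C/D baseline, versus charging only in the cheapest hours.
prices = [0.30, 0.10, 0.08, 0.12, 0.35, 0.40]   # hypothetical RTP tariff

env = ToyPEVEnv(prices)
done = False
while not done:
    _, _, done = env.step(env.max_power)        # disorderly: always full power
disorderly_cost = env.cost

env.reset()
cheap_hours = sorted(range(len(prices)), key=lambda h: prices[h])[:4]
done = False
while not done:
    _, _, done = env.step(env.max_power if env.t in cheap_hours else 0.0)
scheduled_cost = env.cost

print(f"disorderly cost: {disorderly_cost:.2f}, scheduled: {scheduled_cost:.2f}")
```

Even this crude price-aware schedule undercuts the disorderly baseline while still meeting the departure-SOC target, which is the gap the paper's SAC agent learns to exploit under uncertain arrival, departure, and initial SOC.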

Keywords: Deep reinforcement learning; Home energy management system; Plug-in electric vehicles; Soft actor-critic
Date: 2025

Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S0360544225000623
Full text for ScienceDirect subscribers only

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:eee:energy:v:316:y:2025:i:c:s0360544225000623

DOI: 10.1016/j.energy.2025.134420


Energy is currently edited by Henrik Lund and Mark J. Kaiser

More articles in Energy from Elsevier
Bibliographic data for series maintained by Catherine Liu.

 
Page updated 2025-03-19
Handle: RePEc:eee:energy:v:316:y:2025:i:c:s0360544225000623