A novel multi-objective optimization based multi-agent deep reinforcement learning approach for microgrid resources planning
Md. Shadman Abid,
Hasan Jamil Apon,
Salman Hossain,
Ashik Ahmed,
Razzaqul Ahshan and
M.S. Hossain Lipu
Applied Energy, 2024, vol. 353, issue PA, No S0306261923013934
Abstract:
Multi-agent deep reinforcement learning (MADRL) approaches are at the forefront of contemporary research on optimal electric vehicle (EV) charging scheduling. These techniques employ multiple agents that respond to a dynamic simulation environment to strategically integrate EV charging stations (EVCSs) into microgrids while accounting for the constraints posed by stochastic trip durations. In addition, recent research has demonstrated that planning frameworks based on multi-objective optimization (MOO) techniques are suitable for the efficient operation of microgrids comprising renewable energy sources (RESs) and battery energy storage systems (BESSs). Although MADRL techniques have been used to solve optimal EV charging scheduling problems and MOO frameworks have been developed to determine the optimal RES-BESS allocation, the potential of merging MADRL and MOO is yet to be explored. Therefore, this research investigates the effectiveness of combined MOO-MADRL dynamics and their computational efficiency. In this context, this work presents a novel Multi-objective Artificial Vultures Optimization Algorithm-based Multi-agent Deep Deterministic Policy Gradient (MOAVOA-MADDPG) planning framework for allocating RESs, BESSs, and EVCSs on microgrids. The objective function is formulated to minimize network power losses, total installation and operational costs, and greenhouse gas emissions while improving system voltage stability. Moreover, the proposed framework incorporates the intermittent nature of RESs and aims to improve the state of charge (SOC) of the EVs present in the network. The presented approach is validated using practical weather data and EV commuting behavior on the modified IEEE 33-bus network, two practical distribution feeders in Bangladesh, and the Turkish 141-bus network. According to the findings, the MOAVOA-MADDPG framework effectively accommodates financial, technical, and environmental considerations while improving the average SOC of the vehicles. Furthermore, statistical analysis and spacing, convergence, and hypervolume metrics are employed to compare the proposed MOAVOA-MADDPG framework with five contemporary techniques. The results indicate that the MOAVOA-MADDPG Pareto fronts provide superior solutions in every metric considered.
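The abstract refers to Pareto fronts compared via spacing, convergence, and hypervolume metrics but, as an abstract, gives no implementation details. Purely as an illustrative sketch (not the authors' formulation), the Python snippet below shows how candidate RES/BESS/EVCS allocation plans could be screened by Pareto dominance over the four stated objectives, with all objectives cast as minimization (voltage stability represented by an index where lower is better). The Plan container, the objective ordering, and the numeric values are assumptions introduced here for illustration only.

# Minimal sketch (assumed, not from the paper): Pareto-dominance filtering of
# candidate microgrid plans over four objectives, all treated as minimization
# (power loss, total cost, GHG emissions, voltage-stability index).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Plan:
    name: str
    # (power_loss_kW, total_cost_usd, ghg_emissions_kg, voltage_stability_index)
    objectives: Tuple[float, float, float, float]

def dominates(a: Plan, b: Plan) -> bool:
    """True if `a` is no worse than `b` in every objective and strictly
    better in at least one (all objectives minimized)."""
    no_worse = all(x <= y for x, y in zip(a.objectives, b.objectives))
    strictly_better = any(x < y for x, y in zip(a.objectives, b.objectives))
    return no_worse and strictly_better

def pareto_front(plans: List[Plan]) -> List[Plan]:
    """Keep only plans that no other plan dominates."""
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q is not p)]

if __name__ == "__main__":
    # Hypothetical objective values for three candidate allocations.
    candidates = [
        Plan("A", (120.0, 1.8e6, 950.0, 0.042)),
        Plan("B", (135.0, 1.6e6, 900.0, 0.045)),
        Plan("C", (140.0, 1.9e6, 980.0, 0.050)),  # dominated by A
    ]
    for p in pareto_front(candidates):
        print(p.name, p.objectives)

Quality indicators such as hypervolume and spacing, which the paper uses to compare the competing techniques, would then be computed on non-dominated sets produced in this way.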
Keywords: Reinforcement learning; Microgrid; Deep learning; Optimization; Electric vehicle
Date: 2024
Citations: View citations in EconPapers (2)
Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S0306261923013934
Full text for ScienceDirect subscribers only
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:eee:appene:v:353:y:2024:i:pa:s0306261923013934
Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/bibliographic
DOI: 10.1016/j.apenergy.2023.122029
Applied Energy is currently edited by J. Yan
Bibliographic data for series maintained by Catherine Liu (repec@elsevier.com).