Learning-Based Model Predictive Control of DC-DC Buck Converters in DC Microgrids: A Multi-Agent Deep Reinforcement Learning Approach
Hoda Sorouri,
Arman Oshnoei,
Mateja Novak,
Frede Blaabjerg and
Amjad Anvari-Moghaddam
Additional contact information
All authors: Department of Energy (AAU Energy), Aalborg University, 9220 Aalborg, Denmark
Energies, 2022, vol. 15, issue 15, 1-21
Abstract:
This paper proposes a learning-based finite control set model predictive control (FCS-MPC) scheme to improve the performance of DC-DC buck converters interfaced with constant power loads in a DC microgrid (DC-MG). An approach based on deep reinforcement learning (DRL) is presented to address an ongoing challenge in FCS-MPC of such converters: the optimal design of the weighting coefficients in each converter's FCS-MPC objective function. A deep deterministic policy gradient (DDPG) method is employed to learn the optimal weighting-coefficient design policy, with the DRL problem formulated as a Markov decision process. A DRL agent is trained for each converter in the MG, and the weighting coefficients are obtained from reward computations during interactions between the MG and the agent. The proposed strategy is fully distributed, with agents exchanging data among themselves, yielding a multi-agent DRL problem. The proposed control scheme offers several advantages, including removing the control system's dependency on operating-point conditions, plug-and-play capability, and robustness against MG uncertainties and unknown load dynamics.
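To make the role of the weighting coefficients concrete, the sketch below shows a generic one-step FCS-MPC loop for a buck converter: each switch state in the finite control set is evaluated against a weighted cost on the predicted voltage and current errors, and the state with the lowest cost is applied. All parameter values (`L`, `C`, `R`, `Vin`, `Ts`) and the specific cost terms are illustrative assumptions, not the paper's model; in the proposed approach the weights `w_v` and `w_i` would be supplied by the trained DDPG agent rather than fixed by hand.

```python
import numpy as np

# Hypothetical buck-converter parameters (illustrative, not from the paper)
L, C = 1e-3, 1e-3      # inductance (H), capacitance (F)
R = 10.0               # load resistance (ohm)
Vin = 48.0             # input voltage (V)
Ts = 1e-5              # sampling period (s)

def predict(x, u):
    """One-step Euler prediction of [iL, vC] for switch state u in {0, 1}."""
    iL, vC = x
    diL = (u * Vin - vC) / L
    dvC = (iL - vC / R) / C
    return np.array([iL + Ts * diL, vC + Ts * dvC])

def fcs_mpc_step(x, v_ref, i_ref, w_v, w_i):
    """Evaluate the weighted cost for every element of the finite control
    set (here just switch on/off) and return the minimizing switch state."""
    best_u, best_cost = 0, np.inf
    for u in (0, 1):
        iL_next, vC_next = predict(x, u)
        cost = w_v * (v_ref - vC_next) ** 2 + w_i * (i_ref - iL_next) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Usage: starting below the voltage reference with zero inductor current,
# the minimizing action is to close the switch (u = 1).
x0 = np.array([0.0, 20.0])
u = fcs_mpc_step(x0, v_ref=24.0, i_ref=2.4, w_v=1.0, w_i=0.1)
print(u)  # 1: switching on raises iL toward its reference
```

The sensitivity of the chosen action to `w_v` versus `w_i` is precisely why the weight design matters: a poorly balanced pair biases the controller toward one tracking objective at the expense of the other, which is the tuning problem the multi-agent DRL scheme automates.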
Keywords: DC microgrid; finite set model predictive control; dc-dc buck converter; deep reinforcement learning; constant power load
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2022
Citations: 1 (in EconPapers)
Downloads: (external link)
https://www.mdpi.com/1996-1073/15/15/5399/pdf (application/pdf)
https://www.mdpi.com/1996-1073/15/15/5399/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:15:y:2022:i:15:p:5399-:d:872100
Energies is currently edited by Ms. Agatha Cao
More articles in Energies from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.