A Deep Reinforcement Learning Approach to DC-DC Power Electronic Converter Control with Practical Considerations
Nafiseh Mazaheri,
Daniel Santamargarita,
Emilio Bueno,
Daniel Pizarro and
Santiago Cobreces
Additional contact information
Nafiseh Mazaheri: Department of Electronics, Alcalá University (UAH), Plaza San Diego S/N, 28801 Madrid, Spain
Daniel Santamargarita: Department of Electronics, Alcalá University (UAH), Plaza San Diego S/N, 28801 Madrid, Spain
Emilio Bueno: Department of Electronics, Alcalá University (UAH), Plaza San Diego S/N, 28801 Madrid, Spain
Daniel Pizarro: Department of Electronics, Alcalá University (UAH), Plaza San Diego S/N, 28801 Madrid, Spain
Santiago Cobreces: Department of Electronics, Alcalá University (UAH), Plaza San Diego S/N, 28801 Madrid, Spain
Energies, 2024, vol. 17, issue 14, 1-22
Abstract:
In recent years, interest has grown in model-free deep reinforcement learning (DRL)-based controllers as an alternative approach to improving the dynamic behavior, efficiency, and other aspects of DC–DC power electronic converters, which are traditionally controlled on the basis of small-signal models. These conventional controllers often fail to self-adapt to uncertainties and disturbances. This paper presents a design methodology using proximal policy optimization (PPO), a widely recognized and efficient DRL algorithm, to make near-optimal decisions for real buck converters operating in both continuous conduction mode (CCM) and discontinuous conduction mode (DCM) while handling resistive and inductive loads. Challenges associated with delays in real-time systems are identified. Key innovations include a chattering-reduction reward function, engineered input features, and an optimized neural network architecture, which together improve voltage regulation, ensure smoother operation, and reduce the computational cost of the neural network. Experimental and simulation results demonstrate the robustness and efficiency of the controller in real scenarios. The findings provide guidelines and a starting point for designing DRL controllers for real-time operation of this and other power electronic converter topologies.
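To illustrate the chattering-reduction idea named in the abstract, the sketch below shows one common way such a reward can be structured: penalize the voltage-tracking error and, in addition, the change in the duty-cycle action between consecutive control steps. This is a minimal, hypothetical example; the function name, weights, and exact form are assumptions for illustration, not the authors' implementation.

    def chattering_reduction_reward(v_out, v_ref, duty, prev_duty,
                                    w_err=1.0, w_chat=0.5):
        """Hypothetical chattering-reduction reward for a buck converter agent.

        v_out, v_ref    : measured and reference output voltage [V]
        duty, prev_duty : current and previous duty-cycle actions in [0, 1]
        w_err, w_chat   : illustrative weights (assumed, not from the paper)
        """
        # Standard tracking term: drive the output voltage to the reference.
        tracking_penalty = w_err * abs(v_out - v_ref)
        # Chattering term: penalizing the action increment discourages rapid
        # step-to-step swings in the duty cycle, smoothing converter operation.
        chattering_penalty = w_chat * abs(duty - prev_duty)
        return -(tracking_penalty + chattering_penalty)

    # Example: a small tracking error with a large action jump is still penalized.
    print(chattering_reduction_reward(v_out=11.8, v_ref=12.0,
                                      duty=0.60, prev_duty=0.45))

The action-increment penalty trades a small amount of transient speed for smoother steady-state switching; in practice its weight would be tuned alongside the PPO hyperparameters.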
Keywords: deep reinforcement learning; proximal policy optimization; power electronic converters; buck converter
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2024
Downloads:
https://www.mdpi.com/1996-1073/17/14/3578/pdf (application/pdf)
https://www.mdpi.com/1996-1073/17/14/3578/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:17:y:2024:i:14:p:3578-:d:1439586