Optimal control of a wind farm in time-varying wind using deep reinforcement learning
Taewan Kim,
Changwook Kim,
Jeonghwan Song and
Donghyun You
Energy, 2024, vol. 303, issue C
Abstract:
A deep-reinforcement-learning (DRL) based control method that takes advantage of complex wake interactions in a wind farm is developed. Although the wind over a wind farm changes continuously, most conventional wind-farm control methods assume steady wind. Under unsteady wind, the power generated by a wind farm becomes stochastic owing to intermittent and fluctuating wind. To tackle this difficulty, a DRL-based method is developed in which the pitch and yaw angles of the wind turbines in a farm are strategically controlled. Time-histories of the past wind and the predicted future wind are both utilized to identify the relation between the generated power and the control. The present neural network is trained and validated using an experimental wind farm. A multi-fan wind tunnel is developed to generate unsteady wind for experiments with miniature wind farms, in which the improvement in generated power achieved by the present DRL-based control method is demonstrated.
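The abstract describes a policy that maps wind time-histories (past measurements plus a short-horizon forecast) to pitch and yaw commands for each turbine. The following is a minimal sketch of that input/output structure, not the authors' network: the turbine count, window lengths, layer sizes, and actuator limits are all assumed for illustration, and an untrained random-weight MLP stands in for the trained DRL policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 3 turbines, with 10 past and
# 5 forecast wind-speed samples forming the policy's observation.
N_TURBINES = 3
N_PAST, N_FUTURE = 10, 5
OBS_DIM = N_PAST + N_FUTURE
ACT_DIM = 2 * N_TURBINES          # one pitch and one yaw command per turbine

# A small random-weight MLP stands in for the trained policy network.
W1 = rng.normal(0.0, 0.1, (OBS_DIM, 32))
b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, ACT_DIM))
b2 = np.zeros(ACT_DIM)

PITCH_MAX_DEG = 10.0              # assumed actuator limits
YAW_MAX_DEG = 30.0

def policy(past_wind, future_wind):
    """Map wind time-histories to per-turbine pitch/yaw angles (degrees)."""
    obs = np.concatenate([past_wind, future_wind])
    h = np.tanh(obs @ W1 + b1)
    a = np.tanh(h @ W2 + b2)      # squash actions to [-1, 1]
    pitch = PITCH_MAX_DEG * a[:N_TURBINES]
    yaw = YAW_MAX_DEG * a[N_TURBINES:]
    return pitch, yaw

# Example: unsteady wind sampled around an 8 m/s mean.
past = 8.0 + rng.normal(0.0, 1.0, N_PAST)
forecast = 8.0 + rng.normal(0.0, 1.0, N_FUTURE)
pitch_cmd, yaw_cmd = policy(past, forecast)
print(pitch_cmd.shape, yaw_cmd.shape)  # (3,) (3,)
```

In an actual DRL setting the weights would be learned by maximizing the farm's generated power as the reward; the tanh output layer is one common way to keep commands within actuator limits.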
Keywords: Wind farm control; Active yaw control; Axial induction control; Deep reinforcement learning
Date: 2024
Downloads: http://www.sciencedirect.com/science/article/pii/S0360544224017237
Full text for ScienceDirect subscribers only
Persistent link: https://EconPapers.repec.org/RePEc:eee:energy:v:303:y:2024:i:c:s0360544224017237
DOI: 10.1016/j.energy.2024.131950
Energy is currently edited by Henrik Lund and Mark J. Kaiser
More articles in Energy from Elsevier
Bibliographic data for series maintained by Catherine Liu.