University of Warwick
Assembly Line
🧠Data-Driven Wind Farm Control via Multiplayer Deep Reinforcement Learning
This brief proposes a novel data-driven control scheme to maximize the total power output of wind farms subject to strong aerodynamic interactions among wind turbines. The proposed method is model-free and offers strong robustness, adaptability, and applicability. In particular, unlike state-of-the-art data-driven wind farm control methods, which commonly learn from steady-state or time-averaged data (such as turbine power outputs under steady wind conditions or from steady-state models), the proposed method directly mines the time-series data measured at turbine rotors under time-varying wind conditions to achieve farm-level power maximization. The control scheme is built on a novel multiplayer deep reinforcement learning method (MPDRL), in which a special critic–actor–distractor structure, together with deep neural networks (DNNs), is designed to handle the stochastic nature of wind speeds and learn optimal control policies subject to a user-defined performance metric. The effectiveness, robustness, and scalability of the proposed MPDRL-based wind farm control method are tested through prototypical case studies using the dynamic wind farm simulator WFSim. Compared with the commonly used greedy strategy, the proposed method yields clear increases in farm-level power generation in these case studies.
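
To make the critic–actor–distractor idea concrete, the sketch below shows one plausible way such a three-player structure could be wired up in PyTorch. The network sizes, state and action definitions, and the power-based reward comment are illustrative assumptions, not the paper's actual design: the actor outputs per-turbine control commands, the distractor plays an adversarial disturbance standing in for stochastic wind variation, and the critic scores the resulting triple.

```python
# Minimal sketch of a critic-actor-distractor structure for wind farm control.
# All dimensions, bounds, and names are hypothetical placeholders.
import torch
import torch.nn as nn

N_TURBINES = 3                 # hypothetical farm size
STATE_DIM = 2 * N_TURBINES     # e.g. rotor wind speed + current setting per turbine
ACTION_DIM = N_TURBINES        # e.g. one induction (or yaw) command per turbine


def mlp(in_dim: int, out_dim: int) -> nn.Sequential:
    """Small fully connected DNN shared in form by all three players."""
    return nn.Sequential(
        nn.Linear(in_dim, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, out_dim),
    )


class Actor(nn.Module):
    """Maps the measured farm state to bounded control actions per turbine."""
    def __init__(self):
        super().__init__()
        self.net = mlp(STATE_DIM, ACTION_DIM)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(state))  # actions kept in (0, 1)


class Distractor(nn.Module):
    """Adversarial player injecting bounded disturbances that mimic wind variability."""
    def __init__(self):
        super().__init__()
        self.net = mlp(STATE_DIM, ACTION_DIM)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return 0.1 * torch.tanh(self.net(state))  # small bounded perturbation


class Critic(nn.Module):
    """Scores (state, action, disturbance) triples with a scalar value."""
    def __init__(self):
        super().__init__()
        self.net = mlp(STATE_DIM + 2 * ACTION_DIM, 1)

    def forward(self, state, action, disturbance):
        return self.net(torch.cat([state, action, disturbance], dim=-1))


if __name__ == "__main__":
    actor, distractor, critic = Actor(), Distractor(), Critic()
    state = torch.randn(8, STATE_DIM)   # batch of measured rotor-level states
    a = actor(state)
    d = distractor(state)
    value = critic(state, a, d)
    # In training, the critic would be regressed toward the discounted
    # farm-level power, the actor would ascend the critic, and the
    # distractor would descend it (a max-min game over the stochastic wind).
    print(value.shape)  # torch.Size([8, 1])
```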