ISBN: (Print) 9781665464215
With the rapid advancement of new power system construction, the uncertainty of power grid operation modes is increasing, the complexity of short-term optimization decisions is growing rapidly, and the types and number of dispatching objects are growing exponentially. Current power grid dispatching schedules based on physical models suffer from slow calculation speed, long computation time, and insufficient adaptability to multiple uncertain scenarios. In this study, we propose a model-free deep reinforcement learning method for look-ahead dispatching of power grids. We first describe the look-ahead dispatching problem and establish a look-ahead economic dispatching model that accounts for both operational safety and operational efficiency; a neural network is then used to parameterize the grid dispatching policy, and the A2C algorithm is used to learn the parameterized policy. The proposed method is validated on the IEEE 30-bus system with wind farms.
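As a rough illustration of the approach described in this abstract, the sketch below shows a single A2C update step for a parameterized dispatch policy. The state and action dimensions, the Gaussian policy for continuous generator set-points, and all hyper-parameters are illustrative assumptions, not the authors' implementation.

# Minimal A2C update sketch (PyTorch); sizes and reward handling are assumed.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 60, 6   # assumed sizes for an IEEE 30-bus style case

class ActorCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.Tanh())
        self.mu = nn.Linear(128, ACTION_DIM)        # mean of Gaussian policy
        self.log_std = nn.Parameter(torch.zeros(ACTION_DIM))
        self.value = nn.Linear(128, 1)              # state-value head

    def forward(self, s):
        h = self.shared(s)
        return self.mu(h), self.log_std.exp(), self.value(h)

def a2c_update(net, optimizer, s, a, reward, s_next, gamma=0.99):
    """One-step advantage actor-critic update on a single transition."""
    mu, std, v = net(s)
    with torch.no_grad():
        _, _, v_next = net(s_next)
        target = reward + gamma * v_next            # bootstrapped return
    advantage = target - v
    dist = torch.distributions.Normal(mu, std)
    log_prob = dist.log_prob(a).sum(-1, keepdim=True)
    policy_loss = -(log_prob * advantage.detach()).mean()
    value_loss = advantage.pow(2).mean()
    loss = policy_loss + 0.5 * value_loss - 0.01 * dist.entropy().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In practice the environment would expose grid states (loads, wind forecasts, line flows) and penalize constraint violations in the reward; those details are specific to the paper's dispatching model and are not reproduced here.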
Rapid advances in renewable energy technologies offer significant opportunities for the global energy transition and environmental protection. However, the fluctuating and intermittent nature of renewable power generation leads to curtailment, so efficiently consuming renewable energy while guaranteeing reliable power system operation has become a key challenge. To address this problem, this paper proposes an electric vehicle aggregator (EVA) scheduling strategy based on a two-layer game between renewable energy generators (REG) and the EVA: in the upper layer, the REG formulates time-of-use tariff strategies to guide the charging and discharging behavior of electric vehicles, while in the lower layer the EVA responds to the price signals to optimize large-scale electric vehicle scheduling. To handle the complexity of large-scale scheduling, this paper introduces the A2C (Advantage Actor-Critic) reinforcement learning algorithm, which combines a value network and a policy network to optimize the real-time scheduling process. A case study based on wind power, photovoltaic, and wind-solar complementary data from Jilin Province shows that the strategy significantly improves the renewable energy consumption rate (up to 97.88%) and reduces the EVA's power purchase cost (an average saving of RMB 0.04/kWh), achieving a win-win outcome for all parties. The study provides theoretical support for the synergistic optimization of the power system and renewable energy and has practical significance for the large-scale application of electric vehicles and renewable energy consumption.
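To make the two-layer structure concrete, the toy sketch below shows an upper layer (REG) announcing candidate hourly tariffs and a lower layer (EVA) responding with a charging plan. The greedy lower-layer response, the fleet demand figures, and the two candidate tariff profiles are illustrative assumptions, not the paper's game model or A2C controller.

# Toy two-layer interaction: REG proposes a tariff, EVA responds (assumed data).
import numpy as np

HOURS = 24
ENERGY_NEED_KWH = 400.0        # assumed total EV fleet charging demand
MAX_RATE_KWH = 60.0            # assumed per-hour fleet charging limit

def eva_response(tariff):
    """Lower layer: charge as much as possible in the cheapest hours first."""
    plan = np.zeros(HOURS)
    remaining = ENERGY_NEED_KWH
    for h in np.argsort(tariff):               # cheapest hours first
        plan[h] = min(MAX_RATE_KWH, remaining)
        remaining -= plan[h]
        if remaining <= 0:
            break
    return plan

def reg_revenue(tariff, plan):
    """Upper-layer objective: revenue from the EVA's purchased energy."""
    return float(np.dot(tariff, plan))

# Upper layer: compare two candidate tariff profiles and keep the better one.
flat = np.full(HOURS, 0.50)                               # RMB/kWh
valley = np.where(np.arange(HOURS) < 6, 0.30, 0.55)       # cheap overnight
best = max((flat, valley), key=lambda t: reg_revenue(t, eva_response(t)))

In the paper, the lower-layer response would instead be produced by the A2C policy over large EV fleets, and the upper layer would search the tariff space rather than pick between fixed profiles.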
Facilities Layout Problems (FLPs) aim to efficiently allocate facilities within a given space while considering various constraints, such as minimizing transportation distances. These problems are commonly encountered in various types of advanced manufacturing systems, including Reconfigurable Manufacturing Systems (RMSs). Thanks to their modularity and changeability, RMSs enable easier layout changes to accommodate shifts in product mix, production volume, or process requirements. Reinforcement Learning (RL) has proven its efficiency in addressing decision-making problems. Therefore, this paper presents a comparative study of two RL algorithms for solving FLPs: the Advantage Actor-Critic (A2C) and Q-learning algorithms. Copyright (C) 2024 The Authors. This is an open access article under the CC BY-NC-ND license (https://***/licenses/by-nc-nd/4.0/).
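For the Q-learning side of such a comparison, a minimal tabular sketch is shown below: three facilities are placed into three slots one at a time, and the final reward is the negative flow-weighted transport cost. The flow/distance data and the sequential-placement state encoding are illustrative assumptions, not the paper's formulation.

# Tabular Q-learning sketch for a toy facility layout problem (assumed data).
import random
from collections import defaultdict

FLOW = [[0, 5, 2], [5, 0, 3], [2, 3, 0]]   # material flow between facilities
DIST = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]   # distance between slots

def cost(assign):
    """assign[i] = slot of facility i; flow-weighted transport cost."""
    return sum(FLOW[i][j] * DIST[assign[i]][assign[j]]
               for i in range(3) for j in range(3))

Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.2

for _ in range(5000):
    state = ()                                 # partial assignment so far
    while len(state) < 3:
        free = [s for s in range(3) if s not in state]
        if random.random() < eps:              # epsilon-greedy exploration
            a = random.choice(free)
        else:
            a = max(free, key=lambda s: Q[(state, s)])
        nxt = state + (a,)
        r = -cost(nxt) if len(nxt) == 3 else 0.0   # reward only at the end
        best_next = 0.0 if len(nxt) == 3 else max(
            Q[(nxt, s)] for s in range(3) if s not in nxt)
        Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
        state = nxt

An A2C variant of the same problem would replace the Q-table with a policy network and a value network over the partial-assignment state, which is what makes the comparison between the two algorithms meaningful for larger layouts.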
In this paper, the problem of congestion control is studied for the transmission control protocol (TCP) in an unmanned aerial vehicle (UAV)-assisted wireless network (UAWN). In the studied model, transmitters transmit d...