Author Affiliations: Idaho Natl Lab, Nucl Sci & Technol, Idaho Falls, ID 83415, USA; Univ Southern Calif, Sonny Astani Dept Civil & Environm Engn, Los Angeles, CA 90089, USA; Idaho Natl Lab, Natl & Homeland Secur, Idaho Falls, ID 83415, USA; Idaho Natl Lab, Energy & Environm Sci & Technol, Idaho Falls, ID 83415, USA; Idaho State Univ, Civil & Environm Engn, Pocatello, ID 83209, USA
Publication: ENERGY AND AI (Energy. AI.)
Year/Volume: 2025, Vol. 19
Funding: INL Laboratory Directed Research & Development (LDRD) Program under DOE Idaho Operations Office [DE-AC07-05ID14517]; U.S. Department of Energy's Office of Nuclear Energy and the Nuclear Science User Facilities [DE-AC07-05ID14517]
Keywords: Complex systems; Graph neural networks; Power grids; Grid reliability; Generalizable models
Abstract: Although machine learning (ML) has emerged as a powerful tool for rapidly assessing grid contingencies, prior studies have largely considered a static grid topology in their analyses. This limits their applicability, since the models must be re-trained for every new topology. This paper explores the development of generalizable graph convolutional network (GCN) models by pre-training them across a wide range of grid topologies and contingency types. We found that a GCN model with auto-regressive moving average (ARMA) layers, combined with a line graph representation of the grid, offered the best performance in predicting voltage magnitudes (VM) and voltage angles (VA). We introduced the concept of phantom nodes to accommodate disparate grid topologies with varying numbers of nodes and lines. For pre-training the GCN ARMA model across a variety of topologies, distributed graphics processing unit (GPU) computing afforded significant training scalability. The predictive performance of this model on grid topologies that were part of the training data is substantially better than that of the direct current (DC) approximation. Although direct application of the pre-trained model to topologies that were not part of the training data is not particularly satisfactory, fine-tuning with small amounts of data from a specific topology of interest significantly improves predictive performance. In the context of foundation models in ML, this paper highlights the feasibility of training large-scale graph neural network (GNN) models to assess the reliability of power grids by considering a wide variety of grid topologies and contingency types.
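The abstract's key architectural ideas (ARMA graph-convolution layers, a line graph view of the grid, and phantom nodes for size-varying topologies) can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes PyTorch Geometric's ARMAConv layer and LineGraph transform, and the class names, feature choices, and hyperparameters (ArmaGCN, pad_with_phantom_nodes, hidden_dim=64, a toy 4-bus grid) are illustrative assumptions only.

```python
# Illustrative sketch: ARMA graph convolutions on a line-graph representation of
# a power grid, with zero-feature "phantom" nodes padding every graph to a
# common size so one model can be pre-trained across many topologies.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import ARMAConv
from torch_geometric.transforms import LineGraph


class ArmaGCN(torch.nn.Module):
    """Predicts two targets per node (e.g., voltage magnitude and angle)."""

    def __init__(self, in_dim: int, hidden_dim: int = 64, out_dim: int = 2):
        super().__init__()
        self.conv1 = ARMAConv(in_dim, hidden_dim, num_stacks=2, num_layers=2)
        self.conv2 = ARMAConv(hidden_dim, out_dim, num_stacks=2, num_layers=2)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)  # shape: [num_nodes, out_dim]


def pad_with_phantom_nodes(data: Data, max_nodes: int) -> Data:
    """Append isolated, zero-feature phantom nodes so every graph has max_nodes nodes."""
    n, f = data.x.size()
    if n < max_nodes:
        data.x = torch.cat([data.x, torch.zeros(max_nodes - n, f)], dim=0)
    data.num_nodes = max_nodes
    return data


# Toy 4-bus grid: per-bus features (e.g., injections/loads) and undirected lines.
bus_graph = Data(
    x=torch.randn(4, 3),
    edge_index=torch.tensor([[0, 1, 1, 2, 2, 3],
                             [1, 0, 2, 1, 3, 2]]),
)

# Line-graph view: each node now corresponds to a transmission line of the grid.
num_lines = bus_graph.edge_index.size(1) // 2
line_graph = LineGraph()(bus_graph.clone())
line_graph.x = torch.randn(num_lines, 3)  # per-line features (illustrative)
line_graph.num_nodes = num_lines

# Pad to a common size before feeding the model.
line_graph = pad_with_phantom_nodes(line_graph, max_nodes=10)

model = ArmaGCN(in_dim=3)
pred = model(line_graph.x, line_graph.edge_index)  # -> tensor of shape [10, 2]
```

In the workflow described by the abstract, per-node outputs of this kind would be regressed against power-flow solutions across many topologies and contingencies during pre-training, and the resulting model fine-tuned with a small amount of data from any new topology of interest.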