Wide and Deep Graph Neural Network With Distributed Online Learning

Authors: Gao, Zhan; Gama, Fernando; Ribeiro, Alejandro

Affiliations: Univ Cambridge, Dept Comp Sci & Technol, Cambridge CB3 0FD, England; Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94709, USA; Univ Penn, Dept Elect & Syst Engn, Philadelphia, PA 19104, USA

Published in: IEEE TRANSACTIONS ON SIGNAL PROCESSING (IEEE Trans Signal Process)

Year/Volume: 2022, Vol. 70

Pages: 3862-3877


Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]

Funding: NSF CCF; ARO [W911NF1710438]; ARL DCIST CRA [W911NF-17-2-0181]; Division of Computing and Communication Foundations, Direct For Computer & Info Scie & Enginr (funding source: National Science Foundation)

Keywords: Robot kinematics; Signal processing algorithms; Convergence; Heuristic algorithms; Graph neural networks; Testing; Training; distributed learning; online learning; stability analysis; convergence analysis

Abstract: Graph neural networks (GNNs) are naturally distributed architectures for learning representations from network data. This renders them suitable candidates for decentralized tasks. In these scenarios, the underlying graph often changes with time due to link failures or topology variations, creating a mismatch between the graphs on which GNNs were trained and the ones on which they are tested. Online learning can be leveraged to retrain GNNs at testing time to overcome this issue. However, most online algorithms are centralized and usually offer guarantees only on convex problems, which GNNs rarely lead to. This paper develops the Wide and Deep GNN (WD-GNN), a novel architecture that can be updated with distributed online learning mechanisms. The WD-GNN consists of two components: the wide part is a linear graph filter and the deep part is a nonlinear GNN. At training time, the joint wide and deep architecture learns nonlinear representations from data. At testing time, the wide, linear part is retrained, while the deep, nonlinear one remains fixed. This often leads to a convex formulation. We further propose a distributed online learning algorithm that can be implemented in a decentralized setting. We also show the stability of the WD-GNN to changes of the underlying graph and analyze the convergence of the proposed online learning procedure. Experiments on movie recommendation, source localization and robot swarm control corroborate theoretical findings and show the potential of the WD-GNN for distributed online learning.
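To make the architecture described in the abstract concrete, the following is a minimal NumPy sketch of a WD-GNN forward pass. It assumes an additive combination of the two parts (wide linear graph filter output plus deep GNN output); the exact readout, layer widths, and nonlinearity are illustrative assumptions, not the paper's reported configuration. All function names (`graph_filter`, `gnn_layer`, `wd_gnn`) are hypothetical.

```python
import numpy as np

def graph_filter(S, x, h):
    """'Wide' part: linear graph filter  y = sum_k h[k] * S^k @ x."""
    y = np.zeros_like(x)
    Skx = x.copy()
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx  # shift the signal once more over the graph
    return y

def gnn_layer(S, X, W):
    """'Deep' part, one layer: nonlinearity of a bank of graph filters.
    X: (n, f_in) node features; W: (taps, f_in, f_out) filter taps."""
    Z = np.zeros((X.shape[0], W.shape[2]))
    SkX = X.copy()
    for k in range(W.shape[0]):
        Z += SkX @ W[k]
        SkX = S @ SkX
    return np.maximum(Z, 0.0)  # ReLU nonlinearity

def wd_gnn(S, x, h_wide, deep_weights):
    """Forward pass: wide linear filter + deep nonlinear GNN, summed.
    At testing time only h_wide would be retrained (a convex problem),
    while deep_weights stay fixed, as the abstract describes."""
    wide = graph_filter(S, x, h_wide)
    X = x[:, None]                      # single input feature
    for W in deep_weights:
        X = gnn_layer(S, X, W)
    deep = X[:, 0]                      # single output feature
    return wide + deep

# Toy example on a random symmetric graph with n = 8 nodes.
rng = np.random.default_rng(0)
n = 8
A = (rng.random((n, n)) < 0.3).astype(float)
S = np.triu(A, 1)
S = S + S.T                             # symmetric graph shift operator
x = rng.standard_normal(n)              # graph signal, one value per node
h_wide = 0.1 * rng.standard_normal(3)   # 3-tap wide filter
deep_weights = [0.1 * rng.standard_normal((3, 1, 4)),
                0.1 * rng.standard_normal((3, 4, 1))]
y = wd_gnn(S, x, h_wide, deep_weights)
print(y.shape)
```

Because each operation only multiplies by powers of `S`, every node needs just repeated exchanges with its neighbors, which is what makes both parts implementable in the decentralized setting the paper targets.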
