Author Affiliations: The School of Electronic Engineering, Key Laboratory of Intelligent Perception and Image Understanding, Ministry of Education, Xidian University, Shaanxi Province, Xi'an, China; The School of Computer Science and Technology, Xidian University, Shaanxi Province, Xi'an, China
Publication: arXiv
Year/Volume/Issue: 2021
Abstract: Recently, maximizing mutual information has emerged as a powerful method for unsupervised graph representation learning. Existing methods are typically effective at capturing graph information from the topology view but consistently ignore the feature view. To circumvent this issue, we propose a novel approach that exploits mutual information maximization across the feature and topology views. Specifically, we first utilize a multi-view representation learning module to better capture both local and global information content across the feature and topology views on graphs. To model the information shared by the feature and topology spaces, we then develop a common representation learning module based on mutual information maximization and reconstruction loss minimization. Here, minimizing the reconstruction loss forces the model to learn the information shared by the feature and topology spaces. To explicitly encourage diversity between graph representations from the same view, we also introduce a disagreement regularization that enlarges the distance between such representations. Experiments on synthetic and real-world datasets demonstrate the effectiveness of integrating the feature and topology views. In particular, compared with previous supervised methods, our proposed method can achieve comparable or even better performance under the unsupervised representation and linear evaluation protocol. Copyright © 2021, The Authors. All rights reserved.
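The following is a minimal, hypothetical PyTorch sketch of how such a combined objective could be composed: mutual information maximization across the feature and topology views, a reconstruction loss on a common representation, and a disagreement regularization that enlarges the distance between two representations from the same view. It is not the authors' implementation; the encoder architectures, module names, negative-sampling scheme, and loss weights below are illustrative assumptions.

```python
# Hypothetical sketch only: encoders, critic, and loss weights are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewEncoder(nn.Module):
    """Maps node inputs from one view (feature or topology) to embeddings."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # One propagation step followed by a linear map (stand-in for a GNN layer).
        return torch.relu(self.lin(adj @ x))

class MICritic(nn.Module):
    """Bilinear critic scoring agreement between node embeddings and a graph summary."""
    def __init__(self, dim):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, nodes, summary):
        return self.bilinear(nodes, summary.expand_as(nodes)).squeeze(-1)

def mi_loss(critic, pos, neg, summary):
    # Jensen-Shannon-style MI maximization: true (node, summary) pairs are positives,
    # shuffled (corrupted) embeddings paired with the summary are negatives.
    logits = torch.cat([critic(pos, summary), critic(neg, summary)])
    labels = torch.cat([torch.ones(pos.size(0)), torch.zeros(neg.size(0))])
    return F.binary_cross_entropy_with_logits(logits, labels)

def disagreement(z_a, z_b):
    # Penalize high cosine similarity between two representations from the same view,
    # i.e., enlarge the distance between them to encourage diversity.
    return F.cosine_similarity(z_a, z_b, dim=-1).mean()

# Toy forward pass on random data (shapes only; not a real graph).
n, d, h = 8, 16, 32
x = torch.randn(n, d)                        # feature-view input
adj = torch.eye(n)                           # topology-view input (placeholder adjacency)

feat_enc_a, feat_enc_b = ViewEncoder(d, h), ViewEncoder(d, h)
topo_enc = ViewEncoder(n, h)
critic = MICritic(h)
decoder = nn.Linear(h, d)                    # reconstructs features from the common representation

z_feat_a, z_feat_b = feat_enc_a(x, adj), feat_enc_b(x, adj)   # two feature-view representations
z_topo = topo_enc(adj, adj)                                   # topology-view representation
z_common = 0.5 * (z_feat_a + z_topo)         # crude stand-in for a common-representation module
summary = torch.sigmoid(z_common.mean(dim=0, keepdim=True))   # global graph summary vector

perm = torch.randperm(n)                     # corruption: shuffle rows to build negatives
loss = (
    mi_loss(critic, z_feat_a, z_feat_a[perm], summary)        # MI maximization, feature view
    + mi_loss(critic, z_topo, z_topo[perm], summary)          # MI maximization, topology view
    + F.mse_loss(decoder(z_common), x)                        # reconstruction of shared information
    + 0.1 * disagreement(z_feat_a, z_feat_b)                  # diversity within the feature view
)
loss.backward()
```

The bilinear critic and corruption-by-shuffling follow a common Jensen-Shannon-style MI estimation pattern for graphs; the paper's actual estimator, encoders, and regularization weights may differ.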