Author affiliation: Harbin Inst Technol, Shenzhen Key Lab Internet Informat Collaborat, Shenzhen Grad Sch, Shenzhen, Peoples R China
Publication: NEURAL COMPUTING & APPLICATIONS
Year/Volume/Issue: 2020, Vol. 32, Issue 9
Pages: 4835-4847
Subject classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]
Funding: National Natural Science Foundation of China (NSFC) (61572158, 61602132); Science, Technology and Innovation Commission of Shenzhen Municipality (JCYJ20160330163900579, JCYJ20170413105929681, JCYJ20170811160212033)
Keywords: Attribute network; Embedding; Variational autoencoder
Abstract: Network embedding aims to learn low-dimensional representations for nodes in social networks, which can serve many applications such as node classification, link prediction and visualization. Most network embedding methods learn representations solely from the topological structure. Recently, attributed network embedding, which uses both the topological structure and node content to jointly learn latent representations, has become a hot topic. However, previous studies obtain the joint representations by directly concatenating the representations learned from each aspect, which may lose the correlations between the topological structure and node content. In this paper, we propose a new attributed network embedding method, TLVANE, which addresses this drawback by exploiting deep variational autoencoders (VAEs). In particular, a two-level VAE model is built, where the first level accounts for the joint representations and the second level for the embeddings of each aspect. Extensive experiments on three real-world datasets demonstrate the superiority of the proposed method over state-of-the-art competitors.
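The abstract only sketches the architecture at a high level. Below is a minimal illustrative sketch (PyTorch) of how such a two-level VAE could be wired up: two second-level encoders embed the topology (adjacency rows) and node attributes separately, and a first-level encoder fuses their latent codes into a joint node embedding. All layer sizes, module names, and the concatenation-based fusion are assumptions for illustration, not the authors' TLVANE implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEncoder(nn.Module):
    """Maps an input to the mean and log-variance of a Gaussian latent code."""
    def __init__(self, in_dim, hid_dim, z_dim):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hid_dim)
        self.mu = nn.Linear(hid_dim, z_dim)
        self.logvar = nn.Linear(hid_dim, z_dim)

    def forward(self, x):
        h = F.relu(self.hidden(x))
        return self.mu(h), self.logvar(h)

def reparameterize(mu, logvar):
    """Standard VAE reparameterization trick: z = mu + sigma * eps."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

class TwoLevelVAE(nn.Module):
    """Hypothetical two-level VAE: second-level per-aspect encoders, first-level joint encoder."""
    def __init__(self, n_nodes, n_attrs, hid_dim=256, aspect_dim=64, joint_dim=32):
        super().__init__()
        # Second level: separate encoders for topology and attributes.
        self.enc_topo = GaussianEncoder(n_nodes, hid_dim, aspect_dim)
        self.enc_attr = GaussianEncoder(n_attrs, hid_dim, aspect_dim)
        # First level: a joint encoder over the concatenated aspect codes.
        self.enc_joint = GaussianEncoder(2 * aspect_dim, hid_dim, joint_dim)
        # Decoders reconstruct each aspect from the joint code.
        self.dec_topo = nn.Sequential(nn.Linear(joint_dim, hid_dim), nn.ReLU(),
                                      nn.Linear(hid_dim, n_nodes))
        self.dec_attr = nn.Sequential(nn.Linear(joint_dim, hid_dim), nn.ReLU(),
                                      nn.Linear(hid_dim, n_attrs))

    def forward(self, adj_row, attr_row):
        mu_t, lv_t = self.enc_topo(adj_row)
        mu_a, lv_a = self.enc_attr(attr_row)
        z_t = reparameterize(mu_t, lv_t)
        z_a = reparameterize(mu_a, lv_a)
        mu_j, lv_j = self.enc_joint(torch.cat([z_t, z_a], dim=-1))
        z_joint = reparameterize(mu_j, lv_j)          # joint node embedding
        recon_adj = self.dec_topo(z_joint)            # reconstructed topology logits
        recon_attr = self.dec_attr(z_joint)           # reconstructed attribute logits
        return recon_adj, recon_attr, (mu_t, lv_t), (mu_a, lv_a), (mu_j, lv_j)

def kl_term(mu, logvar):
    """KL divergence of N(mu, sigma^2) from the standard normal prior."""
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()

A plausible training objective would sum the two reconstruction losses with KL terms for each latent level; the relative weighting of these terms is a design choice the record does not specify.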