Author Affiliations: Univ New Orleans, Comp Sci Dept, 2000 Lakeshore Dr, Math 349, New Orleans, LA 70122, USA; Texas A&M Univ Kingsville, Dept Elect Engn & Comp Sci, 700 Univ Blvd, Kingsville, TX 78363, USA; Univ Virginia, Dept Comp Sci, 85 Engineers Way, Charlottesville, VA 22904, USA
Publication: ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA
Year/Volume/Issue: 2020, Vol. 14, Issue 1
Pages: 5-5
Subject Classification: 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]
Funding: DTRA CNIMS [HDTRA1-11-D-0016-0001]; DTRA [HDTRA1-111-0016]; DTRA; NSF NetSE Grant [CNS-1011769]; NSF SDCI [OCI-1032677]; Louisiana Board of Regents RCS Grant [LEQSF(2017-20)-RDA-25]
Keywords: Triangle-counting; clustering-coefficient; massive networks; parallel algorithms; social networks; graph mining
Abstract: Big graphs (networks) arising in numerous application areas pose significant challenges for graph analysts, as these graphs grow to billions of nodes and edges and become too large to fit in main memory. Finding the number of triangles in a graph is an important problem in the mining and analysis of graphs. In this article, we present two efficient MPI-based distributed-memory parallel algorithms for counting triangles in big graphs. The first algorithm employs overlapping partitioning and efficient load-balancing schemes to provide a very fast parallel algorithm. The algorithm scales well to networks with billions of nodes and can compute the exact number of triangles in a network with 10 billion edges in 16 minutes. The second algorithm divides the network into non-overlapping partitions, leading to a space-efficient algorithm. Our results on both artificial and real-world networks demonstrate a significant space saving with this algorithm. We also present a novel approach that reduces communication cost drastically, making the algorithm both space- and runtime-efficient. Further, we demonstrate how our algorithms can be used to list all triangles in a graph and compute the clustering coefficients of nodes. Our algorithm can also be adapted into a parallel approximation algorithm using an edge sparsification method.
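Note: The core operations the abstract refers to (exact triangle counting and per-node clustering coefficients) can be illustrated with a minimal sequential sketch. This is not the authors' MPI implementation; the dictionary-of-sets graph representation, the degree-based node ordering, and all names below are illustrative assumptions only.

# Minimal sequential sketch of triangle counting and clustering coefficients.
# Assumed input: an undirected graph as {node: set(neighbors)}.

from itertools import combinations

def count_triangles(adj):
    """Count each triangle exactly once by directing every edge from the
    lower-ranked to the higher-ranked endpoint (rank = degree, ties by id),
    a standard ordering trick; the paper's parallel algorithms distribute
    work of this kind across graph partitions."""
    rank = {v: (len(nbrs), v) for v, nbrs in adj.items()}
    count = 0
    for u, nbrs in adj.items():
        # Neighbors of u ranked higher than u.
        higher_u = {w for w in nbrs if rank[w] > rank[u]}
        for v in higher_u:
            higher_v = {w for w in adj[v] if rank[w] > rank[v]}
            # Common higher-ranked neighbors close triangles (u, v, w).
            count += len(higher_u & higher_v)
    return count

def clustering_coefficients(adj):
    """Local clustering coefficient of v: edges among v's neighbors divided
    by the deg(v)*(deg(v)-1)/2 possible edges among them."""
    cc = {}
    for v, nbrs in adj.items():
        d = len(nbrs)
        if d < 2:
            cc[v] = 0.0
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        cc[v] = 2.0 * links / (d * (d - 1))
    return cc

if __name__ == "__main__":
    # Toy graph: triangle {0, 1, 2} plus a pendant node 3.
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
    print(count_triangles(adj))          # -> 1
    print(clustering_coefficients(adj))  # node 2 has coefficient 1/3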