A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems efficiently on real networks, like maximal clique finding. In many real-world applications, networks change over time; it is therefore essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed, and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex-state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs at varying scales. For a graph of 16 million vertices, we observe relative throughputs reaching a million times that of the non-incremental algorithms.
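The insertion case of such an algorithm can be sketched compactly: when an edge (u, v) arrives, only vertices with core number k = min(core(u), core(v)) that are reachable from the lower endpoint through other core-k vertices can change, and each can increase by at most one. Below is a minimal Python sketch of this idea, assuming adjacency sets and a core-number dictionary; the function and variable names are illustrative, not the paper's.

```python
from collections import deque

def core_insert(adj, core, u, v):
    """Update core numbers after inserting edge (u, v).

    adj:  dict vertex -> set of neighbours (already containing the new edge)
    core: dict vertex -> current core number
    """
    root = u if core[u] <= core[v] else v
    k = core[root]

    # 1. Collect candidates: core-k vertices reachable from the lower
    #    endpoint through other core-k vertices.
    candidates = {root}
    queue = deque([root])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if core[y] == k and y not in candidates:
                candidates.add(y)
                queue.append(y)

    # 2. Iteratively evict candidates that cannot reach degree k + 1
    #    counting only neighbours that are candidates or have core > k.
    def cd(x):
        return sum(1 for y in adj[x] if y in candidates or core[y] > k)

    changed = True
    while changed:
        changed = False
        for x in list(candidates):
            if cd(x) < k + 1:
                candidates.discard(x)
                changed = True

    # 3. Survivors gain exactly one core level.
    for x in candidates:
        core[x] = k + 1
```

The eviction loop here re-scans until a fixpoint for brevity; a production version would propagate evictions to neighbours with a worklist instead.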
One of the biggest challenges in today's social network analysis (SNA) is handling dynamic data. Real-world social networks evolve with time, forcing their corresponding graph representations to update dynamically through the addition or deletion of edges/nodes. Consequently, a researcher is often interested in fast recomputation of important SNA metrics for a network. Recomputing SNA metrics from scratch is expensive, and dynamic algorithms have been found to be a solution to this problem. Calculating the closeness and betweenness centrality metrics requires computing all-pairs shortest paths (APSP); thus, to compute these SNA metrics dynamically, APSP must be computed dynamically. This paper presents fast incremental updating algorithms, along with time complexity results, for APSP, closeness centrality, and betweenness centrality, considering two distinct cases: edge addition and node addition. The following time complexity results are presented: (1) the incremental APSP algorithm runs in O(n^2) time (Ω(n^2) is the theoretical lower bound of the APSP problem), (2) the incremental closeness algorithm runs in O(n^2) time, and (3) the incremental betweenness algorithm runs in O(nm + n^2 log n) time. Here, m is the number of edges and n is the number of nodes in the network. Although the time complexity of the presented incremental betweenness algorithm is no better than that of its static counterpart (Brandes, J Math Sociol 25(2): 163-177, 2001), experimental comparisons demonstrate that the former performs better than the latter. All the presented methods are applicable to weighted, directed or undirected graphs with non-negative real-valued edge weights. An alternate version of the incremental APSP algorithm, demonstrated to work better on large graphs, is presented in the Appendix.
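An O(n^2) edge-addition update for APSP can be realized with the standard relaxation: if a new edge (u, v) of weight w shortens anything, every pair (x, y) can only improve through the pattern x → u → v → y. The Python sketch below assumes a dict-of-dicts distance matrix with math.inf for unreachable pairs; it is one standard realization of this update, not necessarily the paper's exact algorithm.

```python
import math

def apsp_add_edge(dist, u, v, w):
    """Incrementally update an all-pairs shortest-path matrix after
    adding directed edge (u, v) with non-negative weight w.
    For an undirected edge, call again with (v, u).
    Runs in O(n^2) over the n x n matrix dist."""
    if w >= dist[u][v]:
        return  # the new edge creates no shorter path
    nodes = list(dist)
    for x in nodes:
        via_u = dist[x][u] + w  # cheapest way to enter the new edge from x
        if via_u == math.inf:
            continue
        for y in nodes:
            cand = via_u + dist[v][y]
            if cand < dist[x][y]:
                dist[x][y] = cand
```

Once the matrix is updated, closeness for the affected vertices can be refreshed from the corresponding row sums in O(n) per vertex, which is how the O(n^2) closeness bound above can be met.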
ISBN:
(print) 0818692162
The main focus of this work is the development of incremental graph-based techniques for satisfying a coupled set of engineering and geometric constraints. The objective is to speed up constraint-based design. A prototype constraint-based system has been developed to demonstrate the feasibility of these incremental algorithms. The proposed techniques were applied to the internal-combustion engine, and the results are presented and discussed.
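The abstract gives no implementation details, but the general flavor of incremental constraint satisfaction over a dependency graph can be illustrated as follows: when one design parameter changes, only the constraints downstream of it are re-evaluated, in dependency order. This toy Python sketch is our own illustration, not the paper's system; all names in it are hypothetical.

```python
from graphlib import TopologicalSorter  # Python 3.9+

class ConstraintGraph:
    """Toy incremental solver: each derived variable has a rule computing
    it from its inputs; changing an input re-evaluates only the affected
    downstream variables, in dependency order."""

    def __init__(self):
        self.values = {}
        self.rules = {}       # var -> (inputs, fn)
        self.dependents = {}  # var -> set of vars that read it

    def set_input(self, name, value):
        self.values[name] = value
        self._propagate(name)

    def add_rule(self, name, inputs, fn):
        self.rules[name] = (inputs, fn)
        for i in inputs:
            self.dependents.setdefault(i, set()).add(name)
        self.values[name] = fn(*(self.values[i] for i in inputs))

    def _propagate(self, changed):
        # Collect everything downstream of the changed variable.
        affected, stack = set(), [changed]
        while stack:
            for d in self.dependents.get(stack.pop(), ()):
                if d not in affected:
                    affected.add(d)
                    stack.append(d)
        # Re-evaluate only the affected rules, in topological order.
        graph = {v: set(self.rules[v][0]) & affected for v in affected}
        for v in TopologicalSorter(graph).static_order():
            inputs, fn = self.rules[v]
            self.values[v] = fn(*(self.values[i] for i in inputs))

# Hypothetical engine-design usage: editing the bore re-evaluates
# only the displacement rule, not the whole constraint set.
g = ConstraintGraph()
g.set_input("bore", 86.0)
g.set_input("stroke", 86.0)
g.add_rule("displacement", ["bore", "stroke"],
           lambda b, s: 3.14159 / 4 * b * b * s * 4 / 1000)  # 4 cyl, cc
g.set_input("bore", 89.0)
```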
ISBN:
(print) 9781450328739
Combining type theory, language design, and empirical work, we present techniques for computing with large and dynamically changing datasets. Based on lambda calculus, our techniques are suitable for expressing a diverse set of algorithms on large datasets and, via self-adjusting computation, enable computations to respond automatically to changes in their data. To improve the scalability of self-adjusting computation, we present a type system for precise dependency tracking that minimizes the time and space for storing dependency metadata. The type system eliminates an important assumption of prior work that can lead to recording spurious dependencies. We present a type-directed translation algorithm that generates correct self-adjusting programs without relying on this assumption. We then show a probabilistic-chunking technique to further decrease space usage by controlling the fundamental space-time tradeoff in self-adjusting computation. We implement and evaluate these techniques, showing promising results on challenging benchmarks involving large graphs.
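At its core, self-adjusting computation pairs modifiable references with computations that record exactly which references they read, so a write re-executes only true dependents; re-recording the read set on every run is what avoids the spurious dependencies the abstract mentions. The following toy Python sketch illustrates this idea under our own simplified model; it is not the paper's type-directed translation or chunking scheme.

```python
class Mod:
    """A modifiable reference; writes trigger change propagation."""
    def __init__(self, value):
        self.value = value
        self.readers = set()

    def write(self, value):
        if value != self.value:
            self.value = value
            for comp in list(self.readers):
                comp.rerun()

class Comp:
    """A computation whose dependencies are re-recorded on every run,
    so stale dependencies from earlier runs are dropped."""
    def __init__(self, fn, out):
        self.fn, self.out, self.reads = fn, out, set()
        self.rerun()

    def read(self, mod):
        self.reads.add(mod)        # record a precise dependency
        mod.readers.add(self)
        return mod.value

    def rerun(self):
        for m in self.reads:       # forget last run's dependencies
            m.readers.discard(self)
        self.reads.clear()
        self.out.write(self.fn(self.read))

# Usage: the sum recomputes automatically when either input changes.
a, b, total = Mod(1), Mod(2), Mod(None)
s = Comp(lambda read: read(a) + read(b), total)
a.write(10)
print(total.value)  # 12
```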