ISBN: (print) 9781450369794
We present the first m·polylog(n) work, polylog(n) time algorithm in the PRAM model that computes (1 + ε)-approximate single-source shortest paths on weighted, undirected graphs. This improves upon the breakthrough result of Cohen [JACM'00] that achieves O(m^(1+ε_0)) work and polylog(n) time. While most previous approaches, including Cohen's, leveraged the power of hopsets, our algorithm builds upon the recent developments in continuous optimization, studying the shortest path problem through the lens of the closely related minimum transshipment problem. To obtain our algorithm, we demonstrate a series of near-linear-work, polylogarithmic-time reductions between the problems of approximate shortest paths, approximate transshipment, and ℓ_1-embeddings, and establish a recursive algorithm that cycles through the three problems, reducing the graph size in each cycle. As a consequence, we also obtain faster parallel algorithms for approximate transshipment and ℓ_1-embeddings with polylogarithmic distortion. The minimum transshipment algorithm in particular improves upon the previous best m^(1+o(1))-work sequential algorithm of Sherman [SODA'17]. To improve readability, the paper is almost entirely self-contained, save for several staple theorems in algorithms and combinatorics.
ISBN: (print) 9781728180144
The Morse-Smale complex is a well-studied topological structure that represents the gradient flow behavior of a scalar function. It supports multi-scale topological analysis and visualization of large scientific data. Its computation poses significant algorithmic challenges when considering large-scale data and increased feature complexity. Several parallel algorithms have been proposed towards the fast computation of the 3D Morse-Smale complex, but the non-trivial structure of the saddle-saddle connections is not amenable to parallel computation. This paper describes a fine-grained parallel method for computing the Morse-Smale complex that is implemented on a GPU. The saddle-saddle reachability is first determined via a transformation into a sequence of vector operations, followed by the path traversal, which is achieved via a sequence of matrix operations. Computational experiments show that the method achieves up to 7× speedup over current shared-memory implementations.
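The algebraic shape of this approach can be illustrated on a toy directed graph: expressing reachability as repeated boolean vector-matrix products yields exactly the kind of data-parallel kernels a GPU executes well. This is a minimal sketch under assumed conventions, not the paper's GPU code; the graph and the helper name `reachable_from` are illustrative.

```python
import numpy as np

# Toy sketch: reachability between nodes (standing in for saddles) recast as
# repeated vector-matrix products -- the same algebraic formulation the paper
# maps onto GPU kernels. The 5-node graph below is purely illustrative.
n = 5
A = np.zeros((n, n), dtype=np.int8)          # A[i, j] = 1 iff arc i -> j
for i, j in [(0, 1), (1, 2), (0, 3), (3, 4)]:
    A[i, j] = 1

def reachable_from(src: int) -> np.ndarray:
    """Boolean frontier expansion; each loop iteration is one matrix product."""
    reached = np.zeros(n, dtype=bool)
    frontier = np.zeros(n, dtype=bool)
    frontier[src] = True
    while frontier.any():
        reached |= frontier
        # Successors of the current frontier, excluding already-visited nodes.
        frontier = ((frontier @ A) > 0) & ~reached
    return reached

print(reachable_from(0))  # nodes reachable from node 0
```

Each frontier step touches every candidate arc independently, which is what makes the formulation amenable to fine-grained parallelism.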
ISBN: (print) 9783038350194
With the arrival of the intermodality era, designing fast and efficient K-shortest-paths (KSP) algorithms has gradually become one of the core technologies in traveler information systems. Yen's algorithm is a classical algorithm for KSP. However, it is time-consuming. Owing to their powerful general-purpose computing capabilities, GPUs (Graphics Processing Units) have received increasing attention in various fields. Based on the CUDA software development environment, and exploiting the structure of Yen's algorithm itself, this paper proposes two parallel variants of Yen. The first parallel algorithm computes candidate shortest paths for every possible deviation node in parallel. The second computes candidate shortest paths serially, but computes each candidate path in parallel. Finally, the efficiency of the two parallel algorithms was tested through experiments.
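To make the parallelization opportunity concrete, here is a plain-Python sketch of Yen's algorithm; the inner loop over deviation nodes performs independent shortest-path computations, which is the work the paper's first variant distributes across GPU threads. The graph, `dijkstra`, and `yen_ksp` names are illustrative, not from the paper.

```python
import heapq

def dijkstra(graph, src, dst, banned_edges=(), banned_nodes=()):
    """Shortest path from src to dst, avoiding given edges/nodes; None if none."""
    banned_edges, banned_nodes = set(banned_edges), set(banned_nodes)
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, w in graph.get(u, []):
            if v not in seen and v not in banned_nodes and (u, v) not in banned_edges:
                heapq.heappush(pq, (cost + w, v, path + [v]))
    return None

def yen_ksp(graph, src, dst, K):
    wt = {(u, v): w for u, nbrs in graph.items() for v, w in nbrs}
    best = dijkstra(graph, src, dst)
    if best is None:
        return []
    A, B = [best], []                         # A: accepted paths, B: candidates
    while len(A) < K:
        _, prev = A[-1]
        # Each deviation node i is an independent candidate computation --
        # this loop is what the first parallel variant runs concurrently.
        for i in range(len(prev) - 1):
            root = prev[:i + 1]
            banned_e = {(p[i], p[i + 1]) for _, p in A
                        if len(p) > i + 1 and p[:i + 1] == root}
            res = dijkstra(graph, root[-1], dst, banned_e, root[:-1])
            if res is not None:
                spur_cost, spur_path = res
                cand = (sum(wt[root[j], root[j + 1]] for j in range(i)) + spur_cost,
                        root[:-1] + spur_path)
                if cand not in A and cand not in B:
                    heapq.heappush(B, cand)
        if not B:
            break
        A.append(heapq.heappop(B))
    return A

# Small worked example (edge weights are illustrative).
graph = {
    "C": [("D", 3), ("E", 2)],
    "D": [("F", 4)],
    "E": [("D", 1), ("F", 2), ("G", 3)],
    "F": [("G", 2), ("H", 1)],
    "G": [("H", 2)],
    "H": [],
}
for cost, path in yen_ksp(graph, "C", "H", 3):
    print(cost, "->".join(path))
```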
ISBN: (print) 9781450367356
The DBSCAN method for spatial clustering has received significant attention due to its applicability in a variety of data analysis tasks. There are fast sequential algorithms for DBSCAN in Euclidean space that take O(n log n) work for two dimensions, sub-quadratic work for three or more dimensions, and can be computed approximately in linear work for any constant number of dimensions. However, existing parallel DBSCAN algorithms require quadratic work in the worst case. This paper bridges the gap between theory and practice of parallel DBSCAN by presenting new parallel algorithms for Euclidean exact DBSCAN and approximate DBSCAN that match the work bounds of their sequential counterparts, and are highly parallel (polylogarithmic depth). We present implementations of our algorithms along with optimizations that improve their practical performance. We perform a comprehensive experimental evaluation of our algorithms on a variety of datasets and parameter settings. Our experiments on a 36-core machine with two-way hyper-threading show that our implementations outperform existing parallel implementations by up to several orders of magnitude, and achieve speedups of up to 33× over the best sequential algorithms.
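A key building block behind sub-quadratic Euclidean DBSCAN (assumed here from the standard cell-based formulation, not taken from the paper's code) is a grid with cells of side eps/√2: any two points in one cell are within eps of each other, so a cell holding at least minPts points consists entirely of core points, with no pairwise distance computations. A minimal 2D sketch, with the hypothetical helper name `dense_cells`:

```python
import math
from collections import defaultdict

def dense_cells(points, eps, min_pts):
    """Bucket 2D points into cells of side eps/sqrt(2); return the grid and
    the points of 'dense' cells, which are guaranteed core points."""
    side = eps / math.sqrt(2)                      # cell diagonal == eps
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // side), int(y // side))].append((x, y))
    core = [p for cell in cells.values() if len(cell) >= min_pts for p in cell]
    return cells, core
```

In the full algorithm, points in sparse cells still need neighbor queries against adjacent cells; the grid merely bounds how many cells each query must inspect.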
We consider networks of small, autonomous devices that communicate with each other wirelessly. Minimizing energy usage is an important consideration in designing algorithms for such networks, as battery life is a cruc...
This paper presents multilevel iterative schemes for solving the multigroup Boltzmann transport equations (BTEs) with parallel calculation of group equations. They are formulated with multigroup and grey low-order equ...
ISBN: (print) 9781713821120
In this work, we give a new parallel algorithm for the problem of maximizing a non-monotone diminishing-returns submodular function subject to a cardinality constraint. For any desired accuracy ε, our algorithm achieves a (1/e − ε)-approximation using O(log n · log(1/ε)/ε³) parallel rounds of function evaluations. The approximation guarantee nearly matches the best approximation guarantee known for the problem in the sequential setting, and the number of parallel rounds is nearly optimal for any constant ε. Previous algorithms achieve worse approximation guarantees using Ω(log² n) parallel rounds. Our experimental evaluation suggests that our algorithm obtains solutions whose objective value nearly matches the value obtained by the state-of-the-art sequential algorithms, and it outperforms previous parallel algorithms in number of parallel rounds, iterations, and solution quality.
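The notion of a "parallel round" (adaptivity) can be illustrated with a toy coverage function: in one round, the marginal gains of all remaining candidates are evaluated against the same fixed set, so they could run concurrently, whereas sequential greedy needs one round per added element. This is a sketch of the concept only, not the paper's algorithm; `coverage` and `one_round_gains` are hypothetical names.

```python
def coverage(sets_chosen):
    """Submodular objective: number of universe elements covered."""
    return len(set().union(*sets_chosen)) if sets_chosen else 0

def one_round_gains(universe_sets, chosen):
    """Marginal gain of every remaining candidate w.r.t. the same fixed set.
    All evaluations are independent of one another -> one adaptive round."""
    base = coverage([universe_sets[i] for i in chosen])
    return {i: coverage([universe_sets[j] for j in chosen] + [universe_sets[i]]) - base
            for i in range(len(universe_sets)) if i not in chosen}
```

Low-adaptivity algorithms add many elements per round based on such batched evaluations, trading a small loss in the guarantee for exponentially fewer rounds.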
We present an algorithm for constructing a depth-first search tree in planar digraphs; the algorithm can be implemented in the complexity class AC1(UL ∩ co-UL), which is contained in AC2. Prior to this (for more than ...
Data partitioning algorithms aiming to minimize the execution time and the energy of computations in self-adaptable data-parallel applications on modern extreme-scale multicore platforms must address two critical challenges. First, they must take into account the new complexities inherent in these platforms, such as severe resource contention and non-uniform memory access. Second, they must have low practical runtime and memory costs. The sequential data partitioning algorithms addressing the first challenge have a theoretical time complexity of O(m²·p²), where m is the number of points in the discrete speed/energy function and p is the number of available processors. They, however, exhibit high practical runtime cost and excessive memory footprint, rendering them impracticable for employment in self-adaptable applications executing on extreme-scale multicore platforms. We present, in this paper, parallel data partitioning algorithms that address both challenges. They take as input functional models of performance and energy consumption against problem size, and output workload distributions that are globally optimal solutions. They have a low time complexity of O(m²·p), thereby providing a linear speedup of O(p), and a low memory complexity of O(n), where n is the workload size expressed as a multiple of granularity. They employ a dynamic-programming approach, which also facilitates easier integration of performance and energy models of communications. We experimentally study the practical cost of applying our algorithms in two data-parallel applications, matrix multiplication and fast Fourier transform, on a cluster in the Grid'5000 platform. We demonstrate that their practical runtime and memory costs are low, making them ideal for employment in self-adaptable applications. We also show that the parallel algorithms exhibit tremendous speedups over the sequential algorithms. Finally, using theoretical analysis for a forecast exascale platform ...
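The dynamic-programming formulation can be sketched in miniature: given arbitrary per-processor time functions (tabulated against workload size), distribute n granules so that the parallel execution time, i.e. the maximum per-processor time, is minimized. This simplified sketch has a different complexity than the paper's algorithm and omits energy and communication models; `partition` and `times` are hypothetical names.

```python
def partition(times, n):
    """times[p][w]: execution time of w granules on processor p (w = 0..n).
    Returns (optimal makespan, granules assigned to each processor)."""
    p = len(times)
    INF = float("inf")
    # best[k][w]: minimal makespan using the first k processors for w granules.
    best = [[INF] * (n + 1) for _ in range(p + 1)]
    best[0][0] = 0.0
    choice = [[0] * (n + 1) for _ in range(p + 1)]
    for k in range(1, p + 1):
        for w in range(n + 1):
            for x in range(w + 1):               # granules given to processor k-1
                cost = max(best[k - 1][w - x], times[k - 1][x])
                if cost < best[k][w]:
                    best[k][w], choice[k][w] = cost, x
    alloc, w = [], n                             # recover the optimal distribution
    for k in range(p, 0, -1):
        alloc.append(choice[k][w])
        w -= choice[k][w]
    return best[p][n], alloc[::-1]

# Processor 0 takes 1 time unit per granule, processor 1 takes 2.
print(partition([[0, 1, 2, 3, 4], [0, 2, 4, 6, 8]], 4))
```

Because the time functions are arbitrary tables rather than assumed monotone or convex, the same recurrence accommodates the irregular speed functions that resource contention and NUMA effects produce.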
ISBN: (print) 9781728199986
We develop the first distributed-memory parallel implementation of Symmetric Nonnegative Matrix Factorization (SymNMF), a key data analytics kernel for clustering and dimensionality reduction. Our implementation includes two different algorithms for SymNMF, which give comparable results in terms of time and accuracy. The first algorithm is a parallelization of an existing sequential approach that uses solvers for nonsymmetric NMF. The second algorithm is a novel approach based on the Gauss-Newton method. It exploits second-order information without incurring large computational and memory costs. We evaluate the scalability of our algorithms on the Summit system at Oak Ridge National Laboratory, scaling up to 128 nodes (4,096 cores) with 70% efficiency. Additionally, we demonstrate our software on an image segmentation task.
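For reference, the SymNMF problem is min_{H ≥ 0} ‖A − HHᵀ‖_F². The sketch below uses a standard damped multiplicative update for this objective; it is illustrative only and is not either of the paper's two algorithms (the nonsymmetric-NMF-based solver or the Gauss-Newton method). `symnmf` and its parameters are hypothetical names.

```python
import numpy as np

def symnmf(A, k, iters=200, beta=0.5, seed=0):
    """Approximate a symmetric nonnegative A by H @ H.T with H >= 0, using a
    damped multiplicative update: H <- H * ((1-beta) + beta * (A H)/(H H^T H))."""
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], k))
    for _ in range(iters):
        num = A @ H
        den = H @ (H.T @ H) + 1e-9               # small term avoids division by zero
        H *= (1 - beta) + beta * (num / den)     # stays elementwise nonnegative
    return H
```

Multiplicative updates preserve nonnegativity by construction, which is why no explicit projection step is needed; second-order methods like Gauss-Newton instead handle the constraint separately in exchange for faster convergence.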