Details
ISBN:
(Print) 9781450355599
One of the simplest problems on directed graphs is that of identifying the set of vertices reachable from a designated source vertex. This problem can be solved easily sequentially by performing a graph search, but efficient parallel algorithms have eluded researchers for decades. For sparse high-diameter graphs in particular, there is no known work-efficient parallel algorithm with nontrivial parallelism. This amounts to one of the most fundamental open questions in parallel graph algorithms: Is there a parallel algorithm for digraph reachability with nearly linear work? This paper shows that the answer is yes. This paper presents a randomized parallel algorithm for digraph reachability and related problems with expected work Õ(m) and span Õ(n^{2/3}), and hence parallelism Õ(m/n^{2/3}) = Ω̃(n^{1/3}), on any graph with n vertices and m arcs. This is the first parallel algorithm having both nearly linear work and strongly sublinear span, i.e., span Õ(n^{1-ε}) for any constant ε > 0. The algorithm can be extended to produce a directed spanning tree, determine whether the graph is acyclic, topologically sort the strongly connected components of the graph, or produce a directed ear decomposition, all with work Õ(m) and span Õ(n^{2/3}). The main technical contribution is an efficient Monte Carlo algorithm that, through the addition of Õ(n) shortcuts, reduces the diameter of the graph to Õ(n^{2/3}) with high probability. While both sequential and parallel algorithms are known with those combinatorial properties, even the sequential algorithms are not efficient, having superlinear sequential runtime. This paper presents a surprisingly simple sequential algorithm that achieves the stated diameter reduction and runs in O(m) time. Parallelizing that algorithm yields the main result, but doing so involves overcoming several additional challenges.
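The "easy sequential graph search" the abstract refers to is just a standard breadth-first search. A minimal sketch, on a small hypothetical digraph (the graph and names below are illustrative, not from the paper):

```python
from collections import deque

def reachable(adj, source):
    """Return the set of vertices reachable from `source` in a digraph.

    `adj` maps each vertex to a list of out-neighbors. This is the
    O(n + m) sequential search that the abstract contrasts with the
    much harder parallel setting.
    """
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# A small hypothetical digraph: 0 -> 1 -> 2, 3 -> 0, 4 isolated.
adj = {0: [1], 1: [2], 3: [0], 4: []}
print(sorted(reachable(adj, 0)))  # -> [0, 1, 2]
```

The point of the paper is that this trivially sequential computation has no obvious low-span parallel counterpart on sparse high-diameter graphs; the shortcut-addition technique is what breaks the diameter bottleneck.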
Details
ISBN:
(Print) 9781450392648
In this paper, we show that the time complexity of monotone min-plus product of two n × n matrices is Õ(n^{(3+ω)/2}) = Õ(n^{2.687}), where ω < 2.373 is the fast matrix multiplication exponent [Alman and Vassilevska Williams 2021]. That is, when A is an arbitrary integer matrix and B is either row-monotone or column-monotone with integer elements bounded by O(n), computing the min-plus product C where C_{i,j} = min_k {A_{i,k} + B_{k,j}} takes Õ(n^{(3+ω)/2}) time, which greatly improves the previous time bound of Õ(n^{(12+ω)/5}) = Õ(n^{2.875}) [Gu, Polak, Vassilevska Williams and Xu 2021]. Then, by simple reductions, the case where A is arbitrary and the columns or rows of B are bounded-difference can also be solved in Õ(n^{(3+ω)/2}) time; the previous result gives a time complexity of Õ(n^{2.922}) [Bringmann, Grandoni, Saha and Vassilevska Williams 2016]. So the case where both A and B are bounded-difference also admits an Õ(n^{(3+ω)/2})-time algorithm; the previous results give time complexities of Õ(n^{2.824}) [Bringmann, Grandoni, Saha and Vassilevska Williams 2016] and Õ(n^{2.779}) [Chi, Duan and Xie 2022]. Many problems are reducible to these problems, such as language edit distance, RNA-folding, and the scored parsing problem on BD grammars [Bringmann, Grandoni, Saha and Vassilevska Williams 2016]; thus, their complexities are all improved. Finally, we also consider the problem of min-plus convolution between two integral sequences which are monotone and bounded by O(n), and achieve a running-time upper bound of Õ(n^{1.5}). Previously, this task required running time Õ(n^{(9+√177)/12}) = O(n^{1.859}) [Chan and Lewenstein 2015].
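The definition C_{i,j} = min_k {A_{i,k} + B_{k,j}} can be checked against a naive cubic implementation; the paper's subcubic algorithm is not reproduced here, and the matrices below are a toy example:

```python
def min_plus_product(A, B):
    """Naive O(n^3) min-plus product: C[i][j] = min_k (A[i][k] + B[k][j]).

    This is the (min, +) definition from the abstract. The paper's
    contribution is beating cubic time when B is row- or column-monotone
    with entries bounded by O(n); this sketch makes no such use of
    monotonicity and serves only as a reference implementation.
    """
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 2], [5, 1]]
B = [[1, 3], [2, 4]]  # each column of B happens to be monotone here
print(min_plus_product(A, B))  # -> [[1, 3], [3, 5]]
```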
Details
ISBN:
(Print) 9539676967
This paper presents a new approximate algorithm, the Nested Queue-Jumping Algorithm (NQJA), for the traveling salesman problem. The proposed algorithm incorporates ideas from heuristic algorithms, randomized algorithms, and local optimization. Numerical results show that, for small-scale instances, using the Queue-Jumping Algorithm (QJA) directly obtains the known optimal solution with high probability. For large-scale instances, NQJA generates high-quality solutions compared to well-known heuristic methods. Moreover, the shortest tour for the China 144 TSP found by NQJA is shorter than the previously known optimal tour. It can therefore be a very promising alternative for finding solutions to the TSP. NQJA is devised specifically for the TSP, but its ideas can offer guidance for other NP-hard combinatorial optimization problems.
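NQJA itself is not specified in the abstract; as a generic illustration of the local-optimization ingredient it mentions, here is a standard 2-opt sketch on random points (the instance is made up, and this is not the paper's algorithm):

```python
import math
import random

def tour_length(points, tour):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour):
    """Classic 2-opt: reverse segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                new = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(points, new) < tour_length(points, tour) - 1e-12:
                    tour, improved = new, True
    return tour

random.seed(0)
points = [(random.random(), random.random()) for _ in range(8)]
tour = two_opt(points, list(range(8)))
```

2-opt is only a local-improvement step; methods like NQJA layer randomization and higher-level heuristics on top of such moves to escape local optima.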
Details
ISBN:
(Print) 9781450329569
Betweenness centrality measures the importance of a vertex by quantifying the number of times it acts as a midpoint of the shortest paths between other vertices. This measure is widely used in network analysis. In many applications, we wish to choose the k vertices with the maximum adaptive betweenness centrality, which is the betweenness centrality without considering the shortest paths that have been taken into account by already-chosen vertices. All previous methods are designed to compute the betweenness centrality of a fixed graph; thus, to solve the above task, we would have to run such a method k times. In this paper, we present a method that directly solves the task, with an almost-linear runtime regardless of the value of k. Our method first constructs a hypergraph that encodes the betweenness centrality, and then computes the adaptive betweenness centrality by examining this hypergraph. Our technique can also be utilized to handle other centrality measures. We theoretically prove that our method is highly accurate, and experimentally confirm that it is three orders of magnitude faster than previous methods. Relying on the scalability of our method, we experimentally demonstrate that strategies based on adaptive betweenness centrality are effective in important applications studied in the network-science and database communities.
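The paper's hypergraph construction is not reproduced here. As a baseline, the fixed-graph computation that the naive approach would rerun k times is Brandes' algorithm; a sketch for unweighted graphs (standard algorithm, toy graph):

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm: unnormalized betweenness on an unweighted graph.

    `adj` maps each vertex to its neighbor list (both directions present
    for undirected edges). Counts ordered source-target pairs.
    """
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:                                    # BFS building the SP DAG
            u = q.popleft(); order.append(u)
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1; q.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]; preds[w].append(u)
        delta = {v: 0.0 for v in adj}               # dependency accumulation
        for w in reversed(order):
            for u in preds[w]:
                delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Path graph 0-1-2-3: the two interior vertices carry all the traffic.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
bc = betweenness(adj)
```

The adaptive variant would, after choosing a vertex, discount every shortest path through it before scoring the rest, which is exactly the recomputation the paper's one-shot method avoids.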
Details
ISBN:
(Digital) 9783030032326
ISBN:
(Print) 9783030032326; 9783030032319
We analyse the impact of transient and Byzantine faults on the construction of a maximal matching in a general network. We consider the self-stabilizing algorithm called AnonyMatch, presented by Cohen et al. [3], for computing such a matching. Since self-stabilization is transient-fault tolerant, we prove that this algorithm still works in the more difficult context of arbitrary Byzantine faults. Byzantine nodes can prevent nodes close to them from taking part in the matching for an arbitrarily long time. We give bounds on their impact depending on the distance between a non-Byzantine node and the closest Byzantine node, called the containment radius. We present the first algorithm tolerating both transient and Byzantine faults under the fair distributed daemon while keeping the best known containment radius. We prove this algorithm converges in O(max(Δn, Δ² log n)) rounds w.h.p., where n and Δ are the size and the maximum degree of the network, respectively. Additionally, we improve the best known complexity as well as the best containment radius for this problem under the fair central daemon.
Details
ISBN:
(Print) 9781424414833
The paper studies distributed average consensus in sensor networks, when the sensors exchange quantized data at each time step. We show that randomizing the exchanged sensor data by adding a controlled amount of dither results in almost sure (a.s.) convergence of the protocol, if the network is connected. We explicitly characterize the mean-squared error (with respect to the desired consensus average) and show that, by tuning certain parameters associated with the protocol, the mean-squared error can be made arbitrarily small. We study the trade-offs between the rate of convergence and the resulting mean-squared error. The sensor network topology plays an important role in determining the convergence rate of the algorithm. Our approach, based on the convergence of controlled Markov processes, is very generic and can be applied to many other situations of imperfect communication. Finally, we present numerical studies, which verify our theoretical results.
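The paper's exact protocol and parameter tuning are not given in the abstract; the following is a toy simulation of the idea, dithered quantization in an average-consensus loop on a ring, with `step` and `quantum` as illustrative stand-ins for the tunable parameters:

```python
import random

def dithered_consensus(x, nbrs, step=0.2, quantum=0.01, rounds=200, rng=None):
    """Toy average consensus where nodes exchange dithered quantized values.

    Each round, node j broadcasts q_j = quantum * round(x_j/quantum + u)
    with u ~ Uniform(-1/2, 1/2), so the dither makes the quantization
    error zero-mean; each node then moves toward its neighbors' reported
    values. Shrinking `quantum` shrinks the residual mean-squared
    disagreement, mirroring the tunable error the abstract describes.
    """
    rng = rng or random.Random(1)
    x = list(x)
    for _ in range(rounds):
        q = [quantum * round(v / quantum + rng.uniform(-0.5, 0.5)) for v in x]
        x = [x[i] + step * sum(q[j] - q[i] for j in nbrs[i])
             for i in range(len(x))]
    return x

# Ring of 4 nodes (neighbor lists); initial values average to 2.5.
nbrs = [[1, 3], [0, 2], [1, 3], [0, 2]]
x = dithered_consensus([1.0, 2.0, 3.0, 4.0], nbrs)
```

Because the update on a symmetric graph exchanges identical quantized terms in both directions, the network average is preserved at every round, and the states settle near 2.5 up to quantization-scale noise.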
Details
ISBN:
(Print) 9788995003848
This paper considers time-varying uncertain constrained systems and develops a method for computing a probabilistic output admissible (POA) set, which consists of initial states probabilistically assured to satisfy the infinite-time constraint. The algorithm for the time-invariant case has already been presented in Hatanaka and Takaba (2006). This paper shows how that algorithm is extended to the time-varying case when an upper bound on a quantity called the future output admissibility (FOA) index is available. We moreover present two methods, one probabilistic and one deterministic, for computing an upper bound on the FOA index.
Details
ISBN:
(Print) 9781728170022
In the rendezvous problem, two computing entities (called agents) located at different vertices in a graph have to meet at the same vertex. In this paper, we consider the synchronous neighborhood rendezvous problem, where the agents are initially located at two adjacent vertices. While this problem can be trivially solved in O(Δ) rounds (where Δ is the maximum degree of the graph), it is highly challenging to determine whether it can be solved in o(Δ) rounds, even assuming rich computational capability of the agents. The only known result is that a time complexity of O(√n) rounds is achievable if the graph is complete and the agents are probabilistic, asymmetric, and can use whiteboards placed at vertices. Our main contribution is to clarify the situations (with respect to computational models and graph classes) admitting such a sublinear-time rendezvous algorithm. More precisely, we present two algorithms achieving fast rendezvous under the additional assumptions of bounded minimum degree, unique vertex identifiers, and accessibility to neighborhood IDs. The first algorithm runs within Õ(√(nΔ)/δ + n/δ) rounds for graphs of minimum degree larger than √n, where n is the number of vertices in the graph and δ is the minimum degree of the graph. The second algorithm assumes that the largest vertex ID is O(n), and achieves Õ(n/√δ)-round time complexity without using whiteboards. These algorithms attain o(Δ)-round complexity in the cases δ = ω(√(n log n)) and δ = ω(n^{2/3} log^{4/3} n), respectively. We also prove that three unconventional assumptions of our algorithms (bounded minimum degree, accessibility to neighborhood IDs, and initial distance one) are all inherently necessary for attaining fast rendezvous; that is, one obtains an Ω(n)-round lower bound if any one of them is removed.
Details
ISBN:
(Print) 9781728108889
Locating arrays are designs used in combinatorial testing with the property that every set of d t-way interactions appears in a unique set of tests. Using a locating array to conduct fault testing ensures that faulty interactions can be located when there are d or fewer faults. Locating arrays are fairly new, and few techniques have been explored for their construction; most of the available work is limited to finding only one fault. Known general methods require a covering array of strength t + d and produce many more tests than are needed. We present Partitioned Search with Column Resampling (PSCR), a randomized computational search framework that verifies whether an array is (d̄, t)-locating by partitioning the search space to decrease the number of comparisons. If a candidate array is not locating, random resampling is performed until a locating array is constructed or an iteration limit is reached. Results are compared against known locating-array constructions from covering arrays of higher strength and against published results on mixed-level locating arrays for parameters of real-world systems. The use of PSCR to build larger locating arrays from a variety of ingredient arrays is also explored.
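The full (d̄, t)-locating check is more involved than can be shown here; as a simplified illustration of the underlying combinatorics, this sketch verifies only the coverage half of the property, that every t-way interaction appears in at least one test (the example array is a standard strength-2 covering array, not one from the paper):

```python
from itertools import combinations, product

def covers_all_t_way(array, t, levels):
    """Check that every t-way interaction appears in at least one row.

    `array` is a list of test rows; `levels[c]` lists the values factor c
    can take. A locating-array verification additionally requires that
    distinct interaction sets appear in *distinct* sets of rows, which
    this sketch does not check.
    """
    k = len(levels)
    for cols in combinations(range(k), t):
        for vals in product(*(levels[c] for c in cols)):
            if not any(all(row[c] == v for c, v in zip(cols, vals))
                       for row in array):
                return False
    return True

# Five tests covering all pairwise interactions of four binary factors.
ca = [[0, 0, 0, 0],
      [0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
ok = covers_all_t_way(ca, 2, [[0, 1]] * 4)
```

PSCR's partitioning idea is about organizing the far larger set of comparisons needed for the uniqueness condition; the exhaustive enumeration above is only viable for small arrays.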
Details
ISBN:
(Print) 9781538673089
Graph coloring is an important problem in computer science and engineering with numerous applications. As the size of data increases today, graphs with millions of nodes are becoming commonplace. Parallel graph coloring algorithms on high-throughput graphics processing units (GPUs) have recently been proposed to color such large graphs efficiently. We present two new graph coloring algorithms for GPUs which improve upon existing algorithms in both coloring speed and quality. The first algorithm, counting-based Jones-Plassmann (CJP), uses counters to implement the classic Jones-Plassmann parallel coloring heuristic in a work-efficient manner. The second algorithm, conflict coloring (CC), achieves higher parallelism than CJP and is based on optimistically coloring the graph using estimates of the chromatic number. We compared CC and CJP with two state-of-the-art GPU coloring algorithms, csrcolor [1] and Deveci et al.'s [2] vertex/edge-based algorithms (which we call VEB), as well as the sequential CPU algorithm ColPack [3]. In terms of coloring quality, CJP and CC are both far better than csrcolor, while CJP uses 10% fewer colors than VEB on average and CC uses 10% more. Compared to ColPack, CJP and CC use 1.3× and 1.5× more colors on non-bipartite graphs, respectively. In terms of speed, CJP is on average 1.5-2× faster than the other algorithms, while CC is 2.7-4.3× faster.
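CJP's GPU counters are not reproduced here; the round structure of the classic Jones-Plassmann heuristic it implements can be simulated sequentially as follows (the graph is illustrative, and on a GPU each round's vertices are processed in parallel rather than in a loop):

```python
import random

def jones_plassmann(adj, rng=None):
    """Sequential simulation of the Jones-Plassmann heuristic.

    Each vertex gets a random priority. In every round, vertices whose
    priority beats all still-uncolored neighbors take the smallest color
    unused by already-colored neighbors; such vertices form an
    independent set, so they can be colored simultaneously.
    """
    rng = rng or random.Random(0)
    prio = {v: rng.random() for v in adj}
    color = {}
    uncolored = set(adj)
    while uncolored:
        ready = [v for v in uncolored
                 if all(w in color or prio[v] > prio[w] for w in adj[v])]
        for v in ready:
            used = {color[w] for w in adj[v] if w in color}
            color[v] = next(c for c in range(len(adj)) if c not in used)
        uncolored -= set(ready)
    return color

# 4-cycle: a proper coloring must give every edge two distinct colors.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
col = jones_plassmann(adj)
```

The number of rounds of the `while` loop is what bounds the GPU kernel launches; CJP's counters let each vertex detect locally when all its higher-priority neighbors are done, avoiding repeated global scans.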