An extension of the disjoint set union problem is considered, where the extra primitive backtrack(i) can undo the last i unions not yet undone. Let n be the total number of elements in all the sets. A data structure is presented that supports each union and find in O(log n/log log n) worst-case time and each backtrack(i) in O(1) worst-case time, irrespective of i. The total space required by the data structure is O(n). A byproduct of this construction is a partially persistent data structure for the standard set union problem, capable of supporting union, find, and find-in-the-past operations, each in O(log n/log log n) worst-case time. All these upper bounds are tight for the class of separable-pointer algorithms as well as in the cell probe model of computation.
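As a concrete baseline for the backtracking primitive, here is a hedged sketch using union by size and an explicit undo stack. This is an illustration only, not the paper's construction: path compression is deliberately omitted (it would break undoability), find and union take O(log n) worst-case time, and backtrack(i) here costs O(i) rather than the paper's O(1) bound.

```python
class BacktrackingDSU:
    """Union-find with union by size and an undo stack (illustrative sketch)."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.history = []  # (child_root, old_size_of_new_root) per union

    def find(self, x):
        # No path compression: the forest must stay undoable.
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            self.history.append((-1, -1))  # no-op union, still counts as one
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.history.append((rb, self.size[ra]))
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

    def backtrack(self, i):
        """Undo the last i unions not yet undone (O(i) in this sketch)."""
        for _ in range(i):
            child, old_size = self.history.pop()
            if child == -1:
                continue
            root = self.parent[child]
            self.parent[child] = child
            self.size[root] = old_size
```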
A matching in a graph is a set of edges no two of which share a common vertex. A matching M is an induced matching if no edge of the graph connects two edges of M. The problem of finding a maximum induced matching is known to be NP-complete in general, and specifically for bipartite graphs and for 3-regular planar graphs. The problem has been shown to be polynomial for several classes of graphs. In this paper we generalize the results to wider classes of graphs and improve the time complexity of previously known results. (C) 2000 Elsevier Science B.V. All rights reserved.
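The two definitions can be made concrete with a brute-force checker (an illustrative sketch, not part of the paper's algorithms):

```python
def is_induced_matching(edges, M):
    """Check whether M is an induced matching of the graph given by `edges`.

    M must be a matching (no shared vertex), and no edge of the graph may
    join endpoints of two distinct edges of M. Brute-force O(|edges| + |M|).
    """
    # Matching condition: no two edges of M share a vertex.
    seen = set()
    for u, v in M:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    which = {}  # vertex -> index of the M-edge covering it
    for i, (u, v) in enumerate(M):
        which[u] = which[v] = i
    # Induced condition: no graph edge connects two different M-edges.
    for u, v in edges:
        if u in which and v in which and which[u] != which[v]:
            return False
    return True
```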
We study the parameterized complexity of three NP-hard graph completion problems. The minimum fill-in problem asks if a graph can be triangulated by adding at most k edges. We develop O(c^k m) and O(k^2 mn + f(k)) algorithms for this problem on a graph with n vertices and m edges. Here f(k) is exponential in k and the constants hidden by the big-O notation are small and do not depend on k. In particular, this implies that the problem is fixed-parameter tractable (FPT). The proper interval graph completion problem, motivated by molecular biology, asks if a graph can be made proper interval by adding no more than k edges. We show that the problem is FPT by providing a simple search-tree-based algorithm that solves it in O(c^k m) time. Similarly, we show that the parameterized version of the strongly chordal graph completion problem is FPT by giving an O(c^k m log n)-time algorithm for it. All of our algorithms can actually enumerate all possible k-completions within the same time bounds.
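A search-tree algorithm for minimum fill-in repeatedly tests whether the current graph is already chordal. That test can be sketched with maximum cardinality search (MCS) followed by a perfect elimination ordering check; this simple O(nm)-style version is an illustration of the subroutine, not the optimized routine an O(c^k m) bound would use:

```python
def is_chordal(adj):
    """Chordality test: a graph is chordal iff an MCS order is a
    perfect elimination ordering. `adj` maps vertex -> set of neighbours.
    Simple O(n*m) sketch.
    """
    n = len(adj)
    weight = {v: 0 for v in adj}
    numbered = set()
    order = []
    for _ in range(n):
        # MCS: pick an unnumbered vertex with most numbered neighbours.
        v = max((u for u in adj if u not in numbered), key=lambda u: weight[u])
        order.append(v)
        numbered.add(v)
        for w in adj[v]:
            if w not in numbered:
                weight[w] += 1
    pos = {v: i for i, v in enumerate(order)}
    # PEO check: each vertex's earlier-numbered neighbours must all be
    # adjacent to its latest earlier-numbered neighbour.
    for v in order:
        earlier = [u for u in adj[v] if pos[u] < pos[v]]
        if not earlier:
            continue
        u = max(earlier, key=lambda x: pos[x])
        if any(w != u and w not in adj[u] for w in earlier):
            return False
    return True
```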
We present a new linear algorithm for coloring the edges of a tree. Although this is not the first linear algorithm for the problem, our algorithm, unlike the existing ones, can be parallelized directly. The parallelization is obtained by showing that edge-coloring can be carried out using tree contraction. Hence, it can be done in O(log n) time using n/log n processors on the EREW PRAM. When the problem is extended to coloring the edges of a graph with edge-disjoint cycles, the algorithm for trees extends easily without increasing the resource requirements.
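The sequential version of such a greedy edge-colouring is easy to sketch; this illustrates only the linear sequential algorithm (using Delta colours, the optimum for a tree), not the tree-contraction parallelisation:

```python
from collections import deque

def color_tree_edges(n, edges):
    """Colour the edges of a tree with maximum-degree many colours.

    BFS from vertex 0; each edge takes the smallest colour unused by
    already-coloured edges at its endpoints. At a vertex, the only
    previously coloured incident edge is the parent edge, so Delta
    colours always suffice. O(n) overall. Illustrative sketch.
    """
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    color = [None] * len(edges)
    visited = [False] * n
    q = deque([0])
    visited[0] = True
    while q:
        u = q.popleft()
        used = {color[i] for _, i in adj[u] if color[i] is not None}
        c = 0
        for v, i in adj[u]:
            if visited[v]:
                continue
            visited[v] = True
            while c in used:
                c += 1
            color[i] = c
            used.add(c)
            c += 1
            q.append(v)
    return color
```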
Variants of classical data compression paradigms by Ziv, Lempel, and Welch are proposed in which the phrases used in compression are selected among suitably chosen strings of intermittently solid and wild characters produced by the auto-correlation of the source string. Adaptations and extensions of the classical ZL78 paradigm as implemented by Welch are developed along these lines, and they are easily seen to be amenable to simple linear-time implementation. Both lossy and lossless schemata are considered, and preliminary analyses of performance are attempted. (c) 2007 Elsevier Inc. All rights reserved.
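For reference, the classical ZL78/Welch (LZW) paradigm that these variants extend can be sketched as follows; the wild-character phrase selection itself is not shown:

```python
def lzw_compress(data: bytes) -> list:
    """Classical LZW: emit the code of the longest known phrase, then
    add phrase + next symbol to the dictionary. Linear time."""
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    out, phrase = [], b""
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in table:
            phrase = candidate
        else:
            out.append(table[phrase])
            table[candidate] = next_code
            next_code += 1
            phrase = bytes([byte])
    if phrase:
        out.append(table[phrase])
    return out

def lzw_decompress(codes: list) -> bytes:
    """Inverse of lzw_compress, rebuilding the dictionary on the fly."""
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        else:  # the classical KwKwK corner case
            entry = prev + prev[:1]
        out.append(entry)
        table[next_code] = prev + entry[:1]
        next_code += 1
        prev = entry
    return b"".join(out)
```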
The problem of finding and keeping updated shortest paths in distributed networks is considered crucial in today's practical applications. In the recent past, there has been a renewed interest in devising new efficient distance-vector algorithms as an attractive alternative to link-state solutions for large-scale Ethernet networks, in which scalability and reliability are key issues or the nodes can have limited storage capabilities. In this paper, we present Distributed Computation Pruning (DCP), a new technique that can be combined with any shortest-path-based distance-vector routing algorithm to reduce both the total number of messages the algorithm sends and its space occupancy per node. To check its effectiveness, we combined the new technique with DUAL (Diffuse Update ALgorithm), one of the most popular distance-vector algorithms in the literature, which is part of CISCO's widely used EIGRP protocol, and with the recently introduced LFR (Loop Free Routing), which has been shown to perform well on real networks. We give experimental evidence that these combinations lead to significant gains both in the number of messages sent and in the memory requirements per node.
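A plain distance-vector update round, the baseline that techniques such as DCP prune, can be sketched as follows. This is a toy synchronous Bellman-Ford model for illustration only; real protocols such as DUAL add loop-freedom machinery (feasibility conditions, diffusing computations) not shown here:

```python
def distance_vector_round(neighbors, vectors, costs):
    """One synchronous round: each node recomputes its distance to every
    destination from its neighbours' advertised vectors (Bellman-Ford).

    neighbors: node -> list of adjacent nodes
    vectors:   node -> {destination: current distance estimate}
    costs:     (u, v) -> link cost
    Returns the new vectors and whether any estimate improved.
    """
    new, changed = {}, False
    for u in vectors:
        row = dict(vectors[u])
        for v in neighbors[u]:
            for dest, d in vectors[v].items():
                cand = costs[(u, v)] + d
                if cand < row.get(dest, float("inf")):
                    row[dest] = cand
                    changed = True
        new[u] = row
    return new, changed
```

Iterating until no estimate changes converges to shortest paths in a static network; the message and memory costs of exactly this kind of iteration are what DCP aims to reduce.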
In this paper, we define the minimax flow problem and design an O(k · M(n, m))-time optimal algorithm for a special case of the problem in which the weights on arcs are either 0 or 1, where n is the number of vertices, m is the number of arcs, k (1 <= k <= m) is the number of arcs with nonzero weights, and M(n, m) is the best time bound for finding a maximum flow in a network.
The computation of geodesic distances or paths between two points on triangulated meshes is a common operation in many computer graphics applications. In this article, we present an exact algorithm for the single-source all-vertices shortest path problem. Mitchell et al. [1987] proposed an O(n^2 log n) method (MMP), based on Dijkstra's algorithm, where n is the complexity of the polyhedral surface. Then, Chen and Han [1990] (CH) improved the running time to O(n^2). Interestingly, Surazhsky et al. [2005] provided experimental evidence demonstrating that the MMP algorithm runs many times faster, in practice, than the CH algorithm. The CH algorithm encodes the structure of the set of shortest paths using a set of windows on the edges of the polyhedron. Our experiments showed that in many examples over 99% of the windows created by the CH algorithm are of no use in defining a shortest path. This article therefore improves the CH algorithm with two separate techniques: one is to filter out useless windows using the current estimates of the distances to the vertices; the other is to maintain a priority queue, as in Dijkstra's algorithm. Our experimental results suggest that the improved CH algorithm, in spite of an O(n^2 log n) asymptotic time complexity, greatly outperforms the original CH algorithm in both time and space. Furthermore, it generally runs faster than the MMP algorithm and uses considerably less space.
We study the problem of finding a longest common increasing subsequence (LCIS) of multiple sequences of numbers. The LCIS problem is a fundamental issue in various application areas, including whole genome alignment. In this paper we give an efficient algorithm to find the LCIS of two sequences in O(min(r log l, nl + r) log log n + Sort(n)) time, where n is the length of each sequence, r is the number of ordered pairs of positions at which the two sequences match, l is the length of the LCIS, and Sort(n) is the time to sort n numbers. For m sequences where m >= 3, we find the LCIS in O(min(mr^2, r log l log^m r) + Sort(n)) time, where r is the total number of m-tuples of positions at which the m sequences match. The previous results find the LCIS of two sequences in O(n^2) and O(nl log log n + Sort(n)) time. Our algorithm is faster when r is relatively small, e.g., for r < min(n^2/(log l log log n), nl/log l).
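For comparison, the classical O(n^2) dynamic program for two sequences, one of the previous results cited above, can be sketched as:

```python
def lcis(a, b):
    """Length of a longest common increasing subsequence of a and b,
    via the classical O(len(a) * len(b)) dynamic program.

    dp[j] = length of the best common increasing subsequence ending
    with b[j], considering the prefix of a processed so far.
    """
    dp = [0] * len(b)
    for x in a:
        best = 0  # best dp[j] over positions j with b[j] < x, so far
        for j, y in enumerate(b):
            if y == x and best + 1 > dp[j]:
                dp[j] = best + 1
            elif y < x and dp[j] > best:
                best = dp[j]
    return max(dp, default=0)
```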
One of the main challenges in algorithmic mechanism design is to turn (existing) efficient algorithmic solutions into efficient truthful mechanisms. Building a truthful mechanism is indeed a difficult process since the underlying algorithm must obey certain "monotonicity" properties and suitable payment functions need to be computed (this task usually represents the bottleneck in the overall time complexity). We provide a general technique for building truthful mechanisms that provide optimal solutions in strongly polynomial time. We show that the entire mechanism can be obtained if one is able to express a strongly polynomial-time algorithm (for the corresponding optimization problem) as a "suitable combination" of simpler algorithms. This approach applies to a wide class of mechanism design graph problems, where each selfish agent corresponds to a weighted edge in a graph (the weight of the edge is the cost of using that edge). Our technique can be applied to several optimization problems which prior results cannot handle (e.g., MIN-MAX optimization problems). As an application, we design the first (strongly polynomial-time) truthful mechanism for the minimum diameter spanning tree problem, by obtaining it directly from an existing algorithm for solving this problem. For this non-utilitarian MIN-MAX problem, no truthful mechanism was known, even considering those running in exponential time (indeed, exact algorithms do not necessarily yield truthful mechanisms). Also, standard techniques for payment computations may result in a running time which is not polynomial in the size of the input graph. The overall running time of our mechanism, instead, is polynomial in the number n of nodes and the number m of edges, and it is only a factor O(n alpha(n, n)) away from the best known canonical centralized algorithm. (C) 2009 Elsevier B.V. All rights reserved.