The approximability of the maximum edge-disjoint paths problem (EDP) in directed graphs was seemingly settled by an Ω(m^(1/2-ε))-hardness result of Guruswami et al. [2003] and an O(√m) approximation achievable via a natural multicommodity-flow-based LP relaxation as well as a greedy algorithm. Here m is the number of edges in the graph. We observe that the Ω(m^(1/2-ε))-hardness of approximation applies to sparse graphs, and hence, when expressed as a function of n, the number of vertices, only an Ω(n^(1/2-ε))-hardness follows. On the other hand, O(√m)-approximation algorithms do not guarantee a sublinear (in terms of n) approximation ratio for dense graphs. We note that a similar gap exists in the known results on the integrality gap of the flow-based LP relaxation: an Ω(√n) lower bound and an O(√m) upper bound. Motivated by this discrepancy in the upper and lower bounds, we study algorithms for EDP in directed and undirected graphs and obtain improved approximation ratios. We show that the greedy algorithm has an approximation ratio of O(min(n^(2/3), √m)) in undirected graphs and a ratio of O(min(n^(4/5), √m)) in directed graphs. For acyclic graphs we give an O(√n · ln n) approximation via LP rounding. These are the first sublinear approximation ratios for EDP. The results also extend to EDP with weights and to the uniform-capacity unsplittable flow problem (UCUFP).
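As a rough illustration of the greedy routing rule discussed above, the sketch below (Python, with a hypothetical adjacency-list instance; it is not the paper's implementation and omits the refinements behind the stated ratios) repeatedly routes the remaining terminal pair with the currently shortest connecting path and deletes that path's edges.

```python
from collections import deque

def shortest_path(adj, s, t):
    """BFS shortest path from s to t in an undirected graph; returns a vertex list or None."""
    if s == t:
        return [s]
    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                if v == t:
                    path = [t]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                queue.append(v)
    return None

def greedy_edp(n, edges, pairs):
    """Greedy heuristic for edge-disjoint paths: repeatedly route the terminal pair
    with the currently shortest connecting path and delete its edges."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    routed = []
    remaining = list(pairs)
    while True:
        best = None
        for (s, t) in remaining:
            p = shortest_path(adj, s, t)
            if p is not None and (best is None or len(p) < len(best[1])):
                best = ((s, t), p)
        if best is None:
            break
        (s, t), path = best
        routed.append(((s, t), path))
        remaining.remove((s, t))
        for a, b in zip(path, path[1:]):   # remove the edges used by this path
            adj[a].discard(b)
            adj[b].discard(a)
    return routed

if __name__ == "__main__":
    # toy instance: a 4-cycle with a chord, two terminal pairs
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    print(greedy_edp(4, edges, [(0, 2), (1, 3)]))
```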
Using outward rotations, we obtain an approximation algorithm for the Max-Bisection problem, i.e., partitioning the vertices of an undirected graph into two blocks of equal cardinality so as to maximize the total weight of the crossing edges. In many interesting cases, the algorithm performs better than the algorithms of Ye and of Halperin and Zwick. The main tool used to obtain this result is semidefinite programming.
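A minimal sketch of the rounding step only, under stated assumptions: the unit vectors of an SDP relaxation are taken as given (here replaced by random unit vectors purely as a stand-in), an outward rotation blends each vector with a fresh coordinate direction, a random hyperplane rounds the vectors, and the balance is then repaired greedily. The function names, the parameter gamma, and the repair rule are illustrative choices, not the procedure analyzed by the authors.

```python
import numpy as np

def outward_rotation(V, gamma):
    """Blend each unit vector v_i with its own fresh coordinate direction:
    v_i' = sqrt(1-gamma)*v_i combined with sqrt(gamma)*e_i in an enlarged space."""
    n, d = V.shape
    out = np.zeros((n, d + n))
    out[:, :d] = np.sqrt(1.0 - gamma) * V
    out[np.arange(n), d + np.arange(n)] = np.sqrt(gamma)
    return out

def round_bisection(V, weights, gamma=0.1, rng=None):
    """Random-hyperplane rounding of (assumed) SDP vectors, followed by a simple
    greedy repair that toggles vertices until the two sides have equal size.
    `weights` is a dict {(i, j): w_ij}."""
    rng = np.random.default_rng(rng)
    n = V.shape[0]
    W = outward_rotation(V, gamma)
    r = rng.standard_normal(W.shape[1])
    side = (W @ r >= 0)                       # boolean partition

    def cut_value(s):
        return sum(w for (i, j), w in weights.items() if s[i] != s[j])

    while side.sum() != n // 2:
        larger = side.sum() > n // 2
        candidates = [i for i in range(n) if side[i] == larger]
        best_i, best_val = None, -1.0
        for i in candidates:                  # move the vertex that keeps the cut largest
            side[i] = not side[i]
            val = cut_value(side)
            side[i] = not side[i]
            if val > best_val:
                best_i, best_val = i, val
        side[best_i] = not side[best_i]
    return side, cut_value(side)

if __name__ == "__main__":
    # stand-in for an actual SDP solution: random unit vectors for a 6-vertex cycle
    rng = np.random.default_rng(0)
    V = rng.standard_normal((6, 6))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    weights = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 4): 1.0, (4, 5): 1.0, (5, 0): 1.0}
    print(round_bisection(V, weights, gamma=0.1, rng=1))
```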
An important class of scheduling problems concerns parallel machines and precedence constraints. We consider precedence delays, which associate with each precedence constraint a certain amount of time that must elapse between the completion and start times of the corresponding jobs. Together with ordinary precedence constraints, release dates and delivery times can be modeled in this manner. We present a 4-approximation algorithm for the total weighted completion time objective for this general class of problems. The algorithm is a rather simple form of list scheduling. The list is in order of job midpoints derived from a linear programming relaxation. Our analysis unifies and simplifies that of a number of special cases heretofore separately studied, while actually improving many of the former approximation results.
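The list-scheduling step itself is simple enough to sketch. The code below assumes the priority list (in the paper, the order of job midpoints from the LP) is already given and is consistent with the precedence constraints; it places each job, in list order, on the machine that frees up earliest, which is a simplified placement rule rather than the exact rule analyzed in the paper.

```python
def list_schedule(order, proc, prec_delay, m):
    """Place jobs in the given list order (assumed consistent with precedence)
    on m identical machines.  prec_delay maps (i, j) -> delay, meaning job j
    may start only after completion(i) + delay.  Returns completion times."""
    completion = {}
    machine_free = [0.0] * m
    for j in order:
        ready = 0.0
        for (i, k), d in prec_delay.items():
            if k == j and i in completion:     # predecessors precede j in the list
                ready = max(ready, completion[i] + d)
        mach = min(range(m), key=lambda q: machine_free[q])
        start = max(ready, machine_free[mach])
        completion[j] = start + proc[j]
        machine_free[mach] = completion[j]
    return completion

if __name__ == "__main__":
    proc = {"a": 3.0, "b": 2.0, "c": 4.0, "d": 1.0}
    delays = {("a", "c"): 1.0, ("b", "d"): 0.0}   # c waits 1 unit after a finishes
    # in the paper the list would come from LP midpoints; here it is hand-picked
    print(list_schedule(["a", "b", "c", "d"], proc, delays, m=2))
```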
A k-decomposition of a tree is a process in which the tree is recursively partitioned into k edge-disjoint subtrees until each subtree contains only one edge. We investigate how many levels of recursion suffice to decompose the edges of any tree in this way. In this paper, we show that any n-edge tree can be 2-decomposed (respectively, 3-decomposed) within at most ⌈1.44 log n⌉ (respectively, ⌈log n⌉) levels. Extremal trees are given to show that the bounds are asymptotically tight. Based on this result, we design an improved approximation algorithm for the minimum ultrametric tree problem.
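A minimal sketch of the recursive idea, assuming a simple balanced-edge splitting heuristic rather than the specific rule that yields the ⌈1.44 log n⌉ bound: remove one edge so that the two resulting components are as balanced as possible, attach the removed edge to the smaller side, and decompose both edge-disjoint subtrees recursively, counting levels.

```python
from collections import defaultdict

def decompose2(edges):
    """Recursively split a tree's edge set into two edge-disjoint subtrees until
    single edges remain; returns the number of levels used.  The splitting rule
    (most balanced removable edge) is a heuristic, not the paper's rule."""
    if len(edges) <= 1:
        return 0
    adj = defaultdict(list)
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))

    def component_edges(start, banned_idx):
        """Edge indices reachable from `start` without crossing edge `banned_idx`."""
        seen_v, seen_e, stack = {start}, set(), [start]
        while stack:
            x = stack.pop()
            for y, idx in adj[x]:
                if idx == banned_idx or idx in seen_e:
                    continue
                seen_e.add(idx)
                if y not in seen_v:
                    seen_v.add(y)
                    stack.append(y)
        return seen_e

    best = None
    for idx, (u, v) in enumerate(edges):
        side_u = component_edges(u, idx)
        size_u, size_v = len(side_u), len(edges) - 1 - len(side_u)
        worst = max(max(size_u, size_v), min(size_u, size_v) + 1)
        if best is None or worst < best[0]:
            best = (worst, idx, side_u)
    _, idx, side_u = best
    part_a = [edges[i] for i in side_u]
    part_b = [e for i, e in enumerate(edges) if i not in side_u and i != idx]
    if len(part_a) <= len(part_b):      # the removed edge joins the smaller subtree
        part_a.append(edges[idx])
    else:
        part_b.append(edges[idx])
    return 1 + max(decompose2(part_a), decompose2(part_b))

if __name__ == "__main__":
    path = [(i, i + 1) for i in range(8)]   # a path with 8 edges
    print(decompose2(path))                 # about log2(8) = 3 levels for a path
```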
Histograms and related synopsis structures are popular techniques for approximating data distributions. They have been successful in query optimization and a variety of applications, including approximate querying, similarity searching, and data mining, to name a few. Histograms were among the earliest synopsis structures proposed and continue to be used widely. The histogram construction problem is to construct, within a given space bound, the histogram that reflects the data distribution most accurately under a given error measure. Histograms are used as quick and easy estimates, so a slight loss of accuracy, compared to the optimal histogram under the given error measure, can be offset by fast histogram construction algorithms. A natural question arises in this context: can we find a fast, near-optimal approximation algorithm for the histogram construction problem? In this article, we give the first linear-time (1+ε)-factor approximation algorithms (for any ε > 0) for a large number of histogram construction problems, including the use of piecewise small-degree polynomials to approximate data, workloads, etc. Several of our algorithms extend to data streams. Using synthetic and real-life data sets, we demonstrate that in many scenarios the approximate histograms are almost identical to optimal histograms in quality and are significantly faster to construct.
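For context, the sketch below implements the classical exact dynamic program for the V-optimal (sum-of-squared-errors) histogram, i.e., the quadratic-time baseline that fast (1+ε)-approximations of this kind speed up; it is not the linear-time algorithm from the article, and the instance is a toy example.

```python
def optimal_histogram(data, b):
    """Exact V-optimal histogram: partition `data` into b buckets, each
    approximated by its mean, minimising total squared error.
    Classical O(n^2 * b) dynamic program; returns (error, bucket boundaries)."""
    n = len(data)
    # prefix sums of values and squared values for O(1) bucket-error queries
    p = [0.0] * (n + 1)
    q = [0.0] * (n + 1)
    for i, x in enumerate(data):
        p[i + 1] = p[i] + x
        q[i + 1] = q[i] + x * x

    def sse(i, j):
        """Squared error of one bucket covering data[i:j] by its mean."""
        s, s2, cnt = p[j] - p[i], q[j] - q[i], j - i
        return s2 - s * s / cnt

    INF = float("inf")
    # dp[k][j] = best error covering the first j points with k buckets
    dp = [[INF] * (n + 1) for _ in range(b + 1)]
    cut = [[0] * (n + 1) for _ in range(b + 1)]
    dp[0][0] = 0.0
    for k in range(1, b + 1):
        for j in range(1, n + 1):
            for i in range(k - 1, j):
                cand = dp[k - 1][i] + sse(i, j)
                if cand < dp[k][j]:
                    dp[k][j], cut[k][j] = cand, i
    # recover the bucket boundaries by walking the cut table backwards
    bounds, j = [], n
    for k in range(b, 0, -1):
        i = cut[k][j]
        bounds.append((i, j))
        j = i
    return dp[b][n], bounds[::-1]

if __name__ == "__main__":
    print(optimal_histogram([1, 1, 2, 9, 9, 10, 4, 4], 3))
```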
Authors: Galluccio, A.; Nobili, P.
Affiliations: CNR, Ist Anal Sistemi & Informat A. Ruberti, I-00185 Rome, Italy; Univ Lecce, Dipartimento Matemat, I-73100 Lecce, Italy; CNR, IASI, I-00185 Rome, Italy
We provide a new LP relaxation of the maximum vertex cover problem and a polynomial-time algorithm that finds a solution within the approximation factor 1 - 1/(2q̄), where q̄ is the size of the smallest clique in a given clique partition of the edge-weighted graph G. (c) 2005 Elsevier B.V. All rights reserved.
The multiple knapsack problem (MKP) is a natural and well-known generalization of the single knapsack problem and is defined as follows. We are given a set of n items and m bins (knapsacks) such that each item i has a profit p(i) and a size s(i), and each bin j has a capacity c(j). The goal is to find a subset of items of maximum profit such that they have a feasible packing in the bins. MKP is a special case of the generalized assignment problem (GAP), where the profit and the size of an item can vary based on the specific bin to which it is assigned. GAP is APX-hard and a 2-approximation for it is implicit in the work of Shmoys and Tardos [Math. Program. A, 62 (1993), pp. 461-474]; thus far, this was also the best known approximation for MKP. The main result of this paper is a polynomial time approximation scheme (PTAS) for MKP. Apart from its inherent theoretical interest as a common generalization of the well-studied knapsack and bin packing problems, it appears to be the strongest special case of GAP that is not APX-hard. We substantiate this by showing that slight generalizations of MKP are APX-hard. Thus our results help demarcate the boundary at which instances of GAP become APX-hard. An interesting aspect of our approach is a PTAS-preserving reduction from an arbitrary instance of MKP to an instance with O(log n) distinct sizes and profits.
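As an illustration of the problem itself (not of the PTAS), the sketch below packs the bins one at a time, each time solving a single 0/1 knapsack over the items not yet packed; this successive-knapsack heuristic is a standard simple baseline, and the names and toy instance are chosen purely for illustration.

```python
def knapsack(items, capacity):
    """0/1 knapsack DP over integer sizes; items is a list of (index, profit, size).
    Returns (best profit, chosen item indices)."""
    dp = [(0, [])] * (capacity + 1)
    for idx, profit, size in items:
        new_dp = dp[:]
        for c in range(size, capacity + 1):
            cand = dp[c - size][0] + profit
            if cand > new_dp[c][0]:
                new_dp[c] = (cand, dp[c - size][1] + [idx])
        dp = new_dp
    return max(dp, key=lambda t: t[0])

def successive_knapsack(profits, sizes, capacities):
    """Heuristic for the multiple knapsack problem: fill the bins one by one,
    each time solving a single knapsack over the items not yet packed."""
    remaining = set(range(len(profits)))
    packing = {}
    for j, cap in enumerate(capacities):
        items = [(i, profits[i], sizes[i]) for i in remaining]
        _, chosen = knapsack(items, cap)
        for i in chosen:
            packing[i] = j            # item i goes into bin j
            remaining.discard(i)
    total = sum(profits[i] for i in packing)
    return total, packing

if __name__ == "__main__":
    profits = [10, 7, 8, 6, 12]
    sizes   = [4, 3, 5, 2, 6]
    print(successive_knapsack(profits, sizes, capacities=[7, 6]))
```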
Sequence comparison leads to a combinatorial optimization problem of sorting permutations by reversals and transpositions: given any two permutations, find the distance between them, i.e., the minimum number of operations needed to transform one into the other. This problem is related to genome rearrangement. The sorting of signed permutations is studied because, in genome rearrangement, genes are oriented within DNA sequences. The transpositions studied in the literature so far can be viewed as operations acting on two consecutive segments of the genome. In this paper, a new kind of transposition that can act on two arbitrary segments of the genome is proposed, and sorting signed permutations by reversals and this new kind of transposition is studied. After establishing a lower bound on the number of operations needed, a 2-approximation algorithm is presented for this problem, and an example is given to show that the performance ratio of the algorithm cannot be improved.
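A small, hedged helper illustrating the usual lower-bound ingredient in such arguments: counting breakpoints of a signed permutation (framed by 0 and n+1, a pair of neighbours is a breakpoint unless the second element exceeds the first by exactly one). For reversals alone, each operation removes at most two breakpoints, so half the breakpoint count lower-bounds the distance; the paper's bound additionally accounts for its generalized transpositions and is not reproduced here.

```python
def breakpoints(signed_perm):
    """Count breakpoints of a signed permutation of 1..n.
    The permutation is framed by 0 and n+1; neighbours (a, b) form an adjacency
    when b - a == 1 (reading signed values directly), and a breakpoint otherwise."""
    n = len(signed_perm)
    framed = [0] + list(signed_perm) + [n + 1]
    return sum(1 for a, b in zip(framed, framed[1:]) if b - a != 1)

if __name__ == "__main__":
    print(breakpoints([3, -2, 1]))   # 4 breakpoints
    print(breakpoints([1, 2, 3]))    # the identity has none
```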
We consider a generalization of the classical facility location problem, where we require the solution to be fault-tolerant. In this generalization, every demand point j must be served by r(j) facilities instead of just one. The facilities other than the closest one are "backup" facilities for that demand, and any such facility will be used only if all closer facilities (or the links to them) fail. Hence, for any demand point, we can assign nonincreasing weights to the routing costs to farther facilities. The cost of assignment for demand j is the weighted linear combination of the assignment costs to its r(j) closest open facilities. We wish to minimize the sum of the cost of opening the facilities and the assignment cost of each demand j. We obtain a factor 4 approximation to this problem through the application of various rounding techniques to the linear relaxation of an integer program formulation. We further improve the approximation ratio to 3.16 using randomization and to 2.41 using greedy local-search type techniques. (C) 2003 Elsevier Inc. All rights reserved.
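The objective described above is easy to state in code. The sketch below evaluates it for a given set of open facilities (opening costs plus, for each demand j, a nonincreasing-weighted sum of the distances to its r(j) nearest open facilities) and then applies a naive toggle-one-facility local search; this only illustrates the cost structure and is not the LP-rounding or local-search algorithms from the paper.

```python
def solution_cost(open_set, f_cost, dist, r, weights):
    """Fault-tolerant facility location objective for a set of open facilities:
    opening costs plus, for every demand j, the weighted sum of distances to its
    r[j] closest open facilities (weights[j] is a nonincreasing list)."""
    if not open_set:
        return float("inf")
    total = sum(f_cost[i] for i in open_set)
    for j, req in enumerate(r):
        if len(open_set) < req:
            return float("inf")              # infeasible: not enough backups
        nearest = sorted(dist[j][i] for i in open_set)[:req]
        total += sum(w * d for w, d in zip(weights[j], nearest))
    return total

def local_search(f_cost, dist, r, weights):
    """Naive local search: start with all facilities open, then repeatedly toggle
    a single facility whenever doing so improves the objective."""
    open_set = set(range(len(f_cost)))
    best = solution_cost(open_set, f_cost, dist, r, weights)
    improved = True
    while improved:
        improved = False
        for i in range(len(f_cost)):
            trial = open_set ^ {i}            # toggle facility i open/closed
            c = solution_cost(trial, f_cost, dist, r, weights)
            if c < best:
                open_set, best, improved = trial, c, True
    return open_set, best

if __name__ == "__main__":
    f_cost = [5.0, 4.0, 6.0]
    dist = [[1.0, 3.0, 4.0],                  # dist[j][i]: demand j to facility i
            [4.0, 1.0, 2.0]]
    r = [2, 2]                                # each demand needs two facilities
    weights = [[1.0, 0.5], [1.0, 0.5]]
    print(local_search(f_cost, dist, r, weights))
```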