We introduce a subclass of NP optimization problems which contains some NP-hard problems, e.g., bin covering and bin packing. For each problem in this subclass we prove that, with probability tending to 1 (exponentially fast as the number of input items tends to infinity), the problem is approximable to within any chosen relative error bound epsilon > 0 by a deterministic finite-state machine. More precisely, let Pi be a problem in our subclass of NP optimization problems, let epsilon > 0 be any chosen bound, and assume there is a fixed (but arbitrary) probability distribution for the inputs. Then there exists a finite-state machine which does the following: on an input I (random according to this probability distribution), the finite-state machine produces a feasible solution whose objective value M(I) satisfies Pr(|M(I) - OPT(I)| <= epsilon * OPT(I)) >= 1 - k e^{-hn} when n is large enough, where OPT(I) denotes the optimal objective value. Here k and h are positive constants. (C) 2001 Elsevier Science B.V. All rights reserved.
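To make the idea of packing with only bounded memory concrete, here is a minimal Python sketch of the classic online Next Fit rule for bin packing. This is an illustrative assumption, not the paper's construction: the abstract does not specify its finite-state machines, and Next Fit only becomes literally finite-state once item sizes are discretized to a fixed grid, since it remembers nothing but the current bin's fill level.

```python
import random

def next_fit(items, capacity=1.0):
    """Online Next Fit bin packing: keep a single open bin and start a
    new one whenever the next item does not fit. The only state carried
    between items is the current bin's fill level."""
    bins = 0
    level = capacity          # forces opening a bin on the first item
    for x in items:
        if level + x > capacity:
            bins += 1         # open a fresh bin for this item
            level = x
        else:
            level += x        # item fits in the current bin
    return bins

random.seed(0)
items = [random.uniform(0.0, 1.0) for _ in range(10000)]
print(next_fit(items))        # number of bins used on a random instance
```

Because each item is inspected once and the retained state is a single number, the rule fits the spirit of a streaming, constant-memory approximation algorithm.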
The random assignment (or bipartite matching) problem asks about A_n = min_pi Sigma_{i=1}^n c(i, pi(i)), where (c(i, j)) is an n x n matrix with i.i.d. entries, say with exponential(1) distribution, and the minimum is over permutations pi. Mezard and Parisi (1987) used the replica method from statistical physics to argue nonrigorously that E A_n --> zeta(2) = pi^2/6. Aldous (1992) identified the limit in terms of a matching problem on a limit infinite tree. Here we construct the optimal matching on the infinite tree. This yields a rigorous proof of the zeta(2) limit and of the conjectured limit distribution of edge-costs and their rank-orders in the optimal matching. It also yields the asymptotic essential uniqueness property: every almost-optimal matching coincides with the optimal matching except on a small proportion of edges. (C) 2001 John Wiley & Sons, Inc.
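For small n the expectation E A_n can be checked directly by brute force over all permutations. The sketch below (my own illustration, not from the paper) runs a Monte Carlo estimate on exponential(1) cost matrices and prints it next to the zeta(2) = pi^2/6 limit; the estimate for modest n already lies below but near the limit.

```python
import itertools
import math
import random

def min_assignment(c):
    """Exact minimum of sum_i c[i][pi(i)] over all permutations pi
    (brute force; fine only for small n)."""
    n = len(c)
    return min(sum(c[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

random.seed(1)
n, trials = 4, 4000
est = sum(
    min_assignment([[random.expovariate(1.0) for _ in range(n)]
                    for _ in range(n)])
    for _ in range(trials)
) / trials
print(est, math.pi ** 2 / 6)  # estimate of E A_n vs. the zeta(2) limit
```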
We modify the k-d tree on [0, 1]^d by always cutting the longest edge instead of rotating through the coordinates. This modification makes the expected time behavior of lower-dimensional partial match queries behave as perfectly balanced complete k-d trees on n nodes. This is in contrast to a result of Flajolet and Puech [J. Assoc. Comput. Mach., 33 (1986), pp. 371-407], who proved that for (standard) random k-d trees with cuts that rotate among the coordinate axes, the expected time behavior is much worse than for balanced complete k-d trees. We also provide results for range searching and nearest neighbor search for our trees.
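The modification is easy to state in code: at each node, split on the coordinate in which that node's cell is longest, rather than cycling through the axes. A minimal Python sketch of insertion into such a tree (an illustration of the rule, not the authors' implementation; ties between equally long sides go to the lowest axis here):

```python
class Node:
    def __init__(self, point, bounds):
        self.point = point    # the data point stored at this node
        self.bounds = bounds  # the node's cell: list of (low, high) per axis
        # split on the axis where this cell is longest (the modification)
        self.axis = max(range(len(bounds)),
                        key=lambda d: bounds[d][1] - bounds[d][0])
        self.left = self.right = None

def insert(root, point, dim):
    """Insert a point into a longest-edge k-d tree on [0, 1]^dim."""
    if root is None:
        return Node(point, [(0.0, 1.0)] * dim)
    node = root
    while True:
        a = node.axis
        cut = node.point[a]
        lo, hi = node.bounds[a]
        if point[a] < cut:
            if node.left is None:
                b = list(node.bounds)
                b[a] = (lo, cut)          # child cell: lower part of the cut
                node.left = Node(point, b)
                return root
            node = node.left
        else:
            if node.right is None:
                b = list(node.bounds)
                b[a] = (cut, hi)          # child cell: upper part of the cut
                node.right = Node(point, b)
                return root
            node = node.right

root = None
for p in [(0.5, 0.5), (0.2, 0.8), (0.7, 0.2)]:
    root = insert(root, p, 2)
```

Note that each node must carry its cell's bounds, since the split axis now depends on the cell's shape rather than on the node's depth.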
Authors: Leighton, T.; Ma, Y.
MIT, Dept. of Mathematics, Cambridge, MA 02139, USA
MIT, Computer Science Lab, Cambridge, MA 02139, USA
We study networks that can sort n items even when a large number of the comparators in the network are faulty. We restrict our attention to networks that consist of registers, comparators, and replicators. (Replicators are used to copy an item from one register to another, and they are assumed to be fault free.) We consider the scenario of both random and worst-case comparator faults, and we follow the general model of destructive comparator failure proposed by Assaf and Upfal [Proc. 31st IEEE Symposium on Foundations of Computer Science, St. Louis, MO, 1990, pp. 275-284] in which the two outputs of a faulty comparator can fail independently of each other. In the case of random faults, Assaf and Upfal showed how to construct a network with O(n log^2 n) comparators that (with high probability) can sort n items even if a constant fraction of the comparators are faulty. Whether the bound on the number of comparators can be improved (to, say, O(n log n)) for sorting (or merging) has remained an interesting open question. We resolve this question in this paper by proving that any n-item sorting or merging network which can tolerate a constant fraction of random failures has Omega(n log^2 n) comparators. In the case of worst-case faults, we show that Omega(kn log n) comparators are necessary to construct a sorting or merging network that can tolerate up to k worst-case faults. We also show that this bound is tight for k = O(log n). The lower bound is particularly significant since it formally proves that the cost of being tolerant to worst-case failures is very high. Both the lower bound for random faults and the lower bound for worst-case faults are the first nontrivial lower bounds on the size of a fault-tolerant sorting or merging network.
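To make the setting concrete, the following Python sketch evaluates a comparator network under one plausible reading of destructive, independently failing outputs: a faulty output simply leaves its register unchanged instead of writing the comparison result. This is a hypothetical simplification of my own; Assaf and Upfal's exact failure model may differ in detail.

```python
def apply_network(network, values, faulty=frozenset()):
    """Evaluate a comparator network given as a list of (i, j) pairs:
    compare registers i and j, leaving the minimum in i and the maximum
    in j. Fault model (an illustrative assumption): each pair
    (comparator index, output index) in `faulty` fails destructively,
    leaving that register's old value in place. The two outputs of a
    comparator can thus fail independently of each other."""
    v = list(values)
    for t, (i, j) in enumerate(network):
        lo, hi = min(v[i], v[j]), max(v[i], v[j])
        if (t, 0) not in faulty:
            v[i] = lo        # "min" output of comparator t
        if (t, 1) not in faulty:
            v[j] = hi        # "max" output of comparator t
    return v

# a 3-register bubble-sort network
net = [(0, 1), (1, 2), (0, 1)]
print(apply_network(net, [3, 1, 2]))            # fault-free run sorts
print(apply_network(net, [3, 1, 2], {(0, 1)}))  # one faulty output
```

In the faulty run the value 3 is destroyed outright (one register's copy is overwritten while the other is never written), which is what makes this failure model harsher than a model where a faulty comparator merely passes its inputs through.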
This short note considers the problem of point location in a Delaunay triangulation of n random points, using no additional preprocessing or storage other than a standard data structure representing the triangulation. A simple and easy-to-implement (but, of course, worst-case suboptimal) heuristic is shown to take expected time O(n^{1/3}).
The purpose of this paper is to analyze the maxima properties (value and position) of some data structures. Our theorems concern the distribution of these random variables. Previously known results usually dealt with the mean and sometimes the variance of the random variables. Many of our results rely on diffusion techniques. This is a very powerful tool that has already been used with some success in algorithm complexity analysis.
The algorithm LDM (largest differencing method) divides a list of n random items into two blocks. The parameter of interest is the expected difference between the two block sums. It is shown that if the items are i.i.d. and uniform then the rate of convergence of this parameter to zero is n^{-Theta(log n)}. An algorithm for balanced partitioning is constructed, with the same rate of convergence to zero.
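For reference, LDM (also known as the Karmarkar-Karp differencing heuristic) for two blocks can be sketched in a few lines of Python: repeatedly replace the two largest values by their difference, which commits them to opposite blocks; the last surviving value is exactly the difference between the two block sums.

```python
import heapq
import random

def ldm_difference(items):
    """Largest differencing method for partitioning into two blocks.
    Returns the resulting difference between the two block sums."""
    heap = [-x for x in items]       # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)     # largest remaining value
        b = -heapq.heappop(heap)     # second largest
        heapq.heappush(heap, -(a - b))  # a and b go to opposite blocks
    return -heap[0] if heap else 0

random.seed(2)
# for i.i.d. uniform items the expected difference shrinks like n^{-Theta(log n)}
print(ldm_difference([random.random() for _ in range(1000)]))
```

On the classic example {8, 7, 6, 5, 4} the heuristic returns a difference of 2, although a perfect split ({8, 7} vs. {6, 5, 4}) exists, which illustrates that LDM is a heuristic rather than an exact partitioner.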
In the k-dimensional packing problem, we are given a set I = (b_1, b_2, ..., b_n) of k-dimensional boxes and a k-dimensional box B with unit length in each of the first k-1 dimensions and unbounded length in the kth dimension. Each box b_i is represented by a k-tuple b_i = (x_i^{(1)}, ..., x_i^{(k-1)}, x_i^{(k)}) in (0, 1]^{k-1} x (0, infinity), where x^{(j)} denotes its length in the jth dimension, 1 <= j <= k. We are asked to find a packing of I into B such that each box is packed orthogonally and oriented in all k dimensions and such that the height in the kth dimension of the packing is minimized. The k-dimensional packing problem is known to be NP-hard for each k > 1. In this note, we study the average-case behavior of a class of algorithms, which includes any optimal algorithm and an on-line algorithm. Let A denote an algorithm in this class. Assume that b_1, b_2, ..., b_n are independent, identically distributed according to a distribution F(x^{(1)}, ..., x^{(k-1)}, x^{(k)}) over (0, 1]^{k-1} x (0, infinity), and that the marginal distribution F_k of x^{(k)} satisfies the property that there is a positive number a at which the moment generating function M_{F_k}(t) has a finite value C_a > 0. It is shown that for each given s > 0, there is an N_{s,F} > 0 such that for all n >= N_{s,F}, Pr(|A(b_1, ..., b_n)/n - Gamma| > s) < (2 + C_a) exp(-(sa/3)^{2/3} n^{1/3}), where Gamma = lim_{n->infinity} E[A(b_1, ..., b_n)]/n and A(b_1, ..., b_n) denotes the height in the kth dimension of the packing of (b_1, ..., b_n) produced by A.
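As a toy member of the kind of on-line algorithm analyzed here, the following Python sketch implements a simple shelf heuristic for the 2-D case (k = 2): boxes with widths in (0, 1] are packed left to right on the current shelf, a new shelf is opened when the widths overflow 1, and the strip height used is the sum of each shelf's tallest box. This is my own illustrative example, not an algorithm from the paper.

```python
def next_fit_shelf(boxes):
    """Online shelf packing for a unit-width, unbounded-height strip.
    `boxes` is a sequence of (width, height) pairs with width in (0, 1].
    Returns the total strip height consumed."""
    total = 0.0      # heights of all closed shelves
    shelf_h = 0.0    # tallest box on the current shelf
    used_w = 0.0     # width already occupied on the current shelf
    for w, h in boxes:
        if used_w + w > 1.0:
            total += shelf_h       # close the current shelf
            shelf_h, used_w = 0.0, 0.0
        used_w += w
        shelf_h = max(shelf_h, h)  # shelf is as tall as its tallest box
    return total + shelf_h
```

The quantity A(b_1, ..., b_n)/n in the abstract is then just `next_fit_shelf(boxes) / len(boxes)`, and the theorem bounds how often this ratio strays from its limit Gamma.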
We investigate in this paper 'natural' distributions for the satisfiability problem (SAT) of propositional logic, using concepts previously introduced to study the average-case complexity of NP-complete problems. Gurevich showed that a problem with a flat distribution is not DistNP-complete (for deterministic reductions), unless DEXPTIME ≠ NEXPTIME. We express the known results concerning fixed size and fixed density distributions for CNF in the framework of average-case complexity and show that all these distributions are flat. We introduce the family of symmetric distributions, which generalizes those mentioned before, and show that bounded symmetric distributions on ordered tuples of clauses (CNF-tuples) and on k-CNF (sets of k-literal clauses) are flat. This eliminates all these distributions as candidates for 'provably hard' (i.e., DistNP-complete) distributions for SAT, if one considers only deterministic reductions. Given the (presumed) naturalness and generality of these distributions, this result supports evidence that (at least polynomial-time, no-error) randomized reductions are appropriate in average-case complexity. We also observe that there are non-flat distributions for which SAT is polynomial on the average, but that this is due to the particular choice of the size functions. Finally, Chvátal and Szemerédi have shown that for certain fixed size distributions (which are also flat) resolution is exponential for almost all instances. We use this to show that every resolution algorithm will need at least exp(n^alpha) (for any 0 ≤ alpha ≤ 1) time on the average. In other words, resolution-based algorithms will not establish that SAT, with these distributions, is in AverP.