In this paper, we provide bounds for the expected value of the log of the condition number C(A) of a linear feasibility problem given by an n × m matrix A (Ref. 1). We show that this expected value is O(min{n, m log n}) if n > m and is O(log n) otherwise. A similar bound applies for the log of the condition number C_R(A) introduced by Renegar (Ref. 2).
We take a multivariate view of digital search trees by studying the number of nodes of different types that may coexist in a bucket digital search tree as it grows under an arbitrary memory management system. We obtain the mean of each type of node, as well as the entire covariance matrix between types, whereupon weak laws of large numbers follow from the orders of magnitude (the norming constants include oscillating functions). The result can be easily interpreted for practical systems like paging, heaps, and UNIX's buddy system. The covariance results call for developing a Mellin convolution method, where convoluted numerical sequences are handled by convolutions of their Mellin transforms. Furthermore, we use a method of moments to show that the distribution is asymptotically normal. The method of proof is of some generality and is applicable to other parameters like path length and size in random tries and Patricia tries. (C) 2002 Elsevier Science (USA). All rights reserved.
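As a concrete (and much simplified) illustration of the objects being counted, here is a minimal Python sketch of a bucket digital search tree with capacity-b buckets, where keys are bit sequences and a node's "type" is taken to be its occupancy. The class and function names are ours, not the paper's, and this toy model ignores the memory management system entirely.

```python
from collections import Counter

class Node:
    """A bucket holding up to `capacity` keys, with 0/1 children."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.keys = []
        self.child = [None, None]

def insert(root, bits):
    # Descend one level per bit as long as buckets on the path are full;
    # the key settles in the first non-full bucket encountered.
    node, depth = root, 0
    while len(node.keys) >= node.capacity:
        b = bits[depth]
        if node.child[b] is None:
            node.child[b] = Node(node.capacity)
        node = node.child[b]
        depth += 1
    node.keys.append(bits)

def count_by_fill(root):
    # Tally nodes by occupancy (the node "type" in this toy model).
    counts, stack = Counter(), [root]
    while stack:
        n = stack.pop()
        counts[len(n.keys)] += 1
        stack.extend(c for c in n.child if c is not None)
    return counts
```

For instance, inserting four 3-bit keys into a tree with bucket capacity 2 fills the root and spawns two singly occupied children.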
We introduce a subclass of NP optimization problems which contains some NP-hard problems, e.g., bin covering and bin packing. For each problem in this subclass, we prove that, with probability tending to 1 (exponentially fast as the number of input items tends to infinity), the problem is approximable up to any chosen relative error bound ε > 0 by a deterministic finite-state machine. More precisely, let Π be a problem in our subclass of NP optimization problems, let ε > 0 be any chosen bound, and assume there is a fixed (but arbitrary) probability distribution for the inputs. Then there exists a finite-state machine which does the following: on an input I (random according to this probability distribution), the finite-state machine produces a feasible solution whose objective value M(I) satisfies [GRAPHICS] when n is large enough. Here k and h are positive constants. (C) 2001 Elsevier Science B.V. All rights reserved.
The random assignment (or bipartite matching) problem asks about A_n = min_π Σ_{i=1}^n c(i, π(i)), where (c(i, j)) is an n × n matrix with i.i.d. entries, say with exponential(1) distribution, and the minimum is over permutations π. Mézard and Parisi (1987) used the replica method from statistical physics to argue nonrigorously that E A_n → ζ(2) = π^2/6. Aldous (1992) identified the limit in terms of a matching problem on a limit infinite tree. Here we construct the optimal matching on the infinite tree. This yields a rigorous proof of the ζ(2) limit and of the conjectured limit distribution of edge-costs and their rank-orders in the optimal matching. It also yields the asymptotic essential uniqueness property: every almost-optimal matching coincides with the optimal matching except on a small proportion of edges. (C) 2001 John Wiley & Sons, Inc.
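For small n, the flavor of this result can be checked numerically: the exact finite-n mean is known to be Σ_{k=1}^n 1/k^2 (Parisi's formula, since proved), which converges to ζ(2). The sketch below is ours, not the paper's: it draws exponential(1) cost matrices and minimizes by brute force over permutations, which is feasible only for very small n.

```python
import itertools
import random

def min_assignment_cost(c):
    # A_n = min over permutations pi of sum_i c[i][pi(i)] (brute force).
    n = len(c)
    return min(sum(c[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

def estimate_EA(n, trials, seed=0):
    # Monte Carlo estimate of E A_n with i.i.d. exponential(1) costs.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        c = [[rng.expovariate(1.0) for _ in range(n)] for _ in range(n)]
        total += min_assignment_cost(c)
    return total / trials
```

With a few thousand trials at n = 4, the estimate lands close to 1 + 1/4 + 1/9 + 1/16 ≈ 1.424.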
We modify the k-d tree on [0,1]^d by always cutting the longest edge instead of rotating through the coordinates. This modification makes the expected time behavior of lower-dimensional partial match queries behave as perfectly balanced complete k-d trees on n nodes. This is in contrast to a result of Flajolet and Puech [J. Assoc. Comput. Mach., 33 (1986), pp. 371-407], who proved that for (standard) random k-d trees with cuts that rotate among the coordinate axes, the expected time behavior is much worse than for balanced complete k-d trees. We also provide results for range searching and nearest neighbor search for our trees.
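The insertion rule can be sketched as follows, assuming points in [0,1]^d: each node records the cell it was inserted into and cuts that cell along its longest edge, through the node's own coordinate in that dimension. The representation and names are ours, a minimal illustration rather than the paper's implementation.

```python
class Node:
    """k-d tree node that stores its cell and splits its longest edge."""
    def __init__(self, point, lo, hi):
        self.point, self.lo, self.hi = point, lo, hi
        # Cut dimension = longest edge of this node's cell.
        self.dim = max(range(len(point)), key=lambda d: hi[d] - lo[d])
        self.left = self.right = None

def insert(root, point):
    # Insert `point` (coordinates in [0,1]); returns the (new) root.
    if root is None:
        d = len(point)
        return Node(point, [0.0] * d, [1.0] * d)
    node = root
    while True:
        dim, split = node.dim, node.point[node.dim]
        if point[dim] < split:
            if node.left is None:
                hi = node.hi[:]
                hi[dim] = split
                node.left = Node(point, node.lo[:], hi)
                return root
            node = node.left
        else:
            if node.right is None:
                lo = node.lo[:]
                lo[dim] = split
                node.right = Node(point, lo, node.hi[:])
                return root
            node = node.right
```

After inserting (0.5, 0.5) at the root of [0,1]^2, both children occupy cells of width 0.5 and height 1, so each child cuts dimension 1 next, keeping cells squarish.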
Authors: Leighton, T.; Ma, Y.
MIT, Dept. of Mathematics, Cambridge, MA 02139, USA
MIT, Lab. for Computer Science, Cambridge, MA 02139, USA
We study networks that can sort n items even when a large number of the comparators in the network are faulty. We restrict our attention to networks that consist of registers, comparators, and replicators. (Replicators are used to copy an item from one register to another, and they are assumed to be fault free.) We consider the scenario of both random and worst-case comparator faults, and we follow the general model of destructive comparator failure proposed by Assaf and Upfal [Proc. 31st IEEE Symposium on Foundations of Computer Science, St. Louis, MO, 1990, pp. 275-284] in which the two outputs of a faulty comparator can fail independently of each other. In the case of random faults, Assaf and Upfal showed how to construct a network with O(n log^2 n) comparators that (with high probability) can sort n items even if a constant fraction of the comparators are faulty. Whether the bound on the number of comparators can be improved (to, say, O(n log n)) for sorting (or merging) has remained an interesting open question. We resolve this question in this paper by proving that any n-item sorting or merging network which can tolerate a constant fraction of random failures has Ω(n log^2 n) comparators. In the case of worst-case faults, we show that Ω(kn log n) comparators are necessary to construct a sorting or merging network that can tolerate up to k worst-case faults. We also show that this bound is tight for k = O(log n). The lower bound is particularly significant since it formally proves that the cost of being tolerant to worst-case failures is very high. Both the lower bound for random faults and the lower bound for worst-case faults are the first nontrivial lower bounds on the size of a fault-tolerant sorting or merging network.
This short note considers the problem of point location in a Delaunay triangulation of n random points, using no additional preprocessing or storage other than a standard data structure representing the triangulation. A simple and easy-to-implement (but, of course, worst-case suboptimal) heuristic is shown to take expected time O(n^{1/3}).
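The walking step underlying such a heuristic can be sketched as follows, assuming a 2D triangulation given as vertex-index triples plus, for each triangle, the index of the neighbor opposite each vertex (-1 where none exists). The orientation-test formulation is a standard one, not taken from the note itself, and the degenerate case of a query point exactly on an edge is not handled specially.

```python
def orient(a, b, p):
    # Twice the signed area of triangle (a, b, p).
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def walk_locate(points, simplices, neighbors, q, start=0):
    """From triangle `start`, repeatedly cross an edge that separates q
    from the current triangle until no separating edge remains."""
    s = start
    while True:
        v = simplices[s]
        for k in range(3):
            a, b = points[v[(k + 1) % 3]], points[v[(k + 2) % 3]]
            c = points[v[k]]  # vertex opposite the edge (a, b)
            # q strictly on the far side of edge (a, b) from vertex c?
            if orient(a, b, q) * orient(a, b, c) < 0 and neighbors[s][k] != -1:
                s = neighbors[s][k]
                break
        else:
            return s  # no separating edge: q lies in triangle s
```

On the unit square split into two triangles, a walk started in the wrong triangle crosses the diagonal in one step.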
The purpose of this paper is to analyze the maxima properties (value and position) of some data structures. Our theorems concern the distribution of these random variables. Previously known results usually dealt with the mean and sometimes the variance of the random variables. Many of our results rely on diffusion techniques. This is a very powerful tool that has already been used with some success in algorithm complexity analysis.
The algorithm LDM (largest differencing method) divides a list of n random items into two blocks. The parameter of interest is the expected difference between the two block sums. It is shown that if the items are i.i.d. and uniform, then the rate of convergence of this parameter to zero is n^{-Θ(log n)}. An algorithm for balanced partitioning is constructed, with the same rate of convergence to zero.
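For reference, LDM (the two-block Karmarkar-Karp differencing heuristic) admits a short heap-based sketch: committing the two largest remaining values to opposite blocks is recorded by pushing their difference back onto the heap. This is a standard formulation of the method, shown here only to compute the final block-sum difference, not the partition itself.

```python
import heapq

def ldm_difference(items):
    # Repeatedly replace the two largest values by their difference;
    # the surviving value is |sum(block 1) - sum(block 2)| for the
    # partition that the differencing steps induce.
    heap = [-x for x in items]  # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)  # largest
        b = -heapq.heappop(heap)  # second largest, so a - b >= 0
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0
```

On the classic example [4, 5, 6, 7, 8] this yields a difference of 2 (blocks summing to 16 and 14). With i.i.d. uniform items, the abstract's result says the expected value of this output decays as n^{-Θ(log n)}.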