A well-known result by Rabin implies that n - 1 polynomial tests are necessary and sufficient in the worst case to find the maximum of n distinct real numbers. In this note we show that, for any fixed constant c > 0, there is a randomized algorithm with error probability O(n^-c) for finding the maximum of n distinct real numbers using only O((log n)^2) polynomial tests.
The author deduces some new probabilistic estimates on the distances between the zeros of a polynomial p(x) by using some properties of the discriminant of p(x), and applies these estimates to improve the fastest deterministic algorithm for approximating polynomial factorization over the complex field. Namely, given a natural number n, a positive ε such that log(1/ε) = O(n log n), and the complex coefficients of a polynomial p(x) = Σ_{i=0}^{n} p_i x^i such that p_n ≠ 0 and Σ_i |p_i| ≤ 1, a factorization of p(x) (within the error norm ε) is computed as a product of factors of degrees at most n/2, by using O(log^2 n) time and n^3 processors under the PRAM arithmetic model of parallel computing, or by using O(n^2 log^2 n) arithmetic operations. The algorithm is randomized, of Las Vegas type, allowing a failure with probability at most δ, for any positive δ < 1 such that log(1/δ) = O(log n). Except for a narrow class of polynomials p(x), these results can also be obtained for ε such that log(1/ε) = O(n^2 log n).
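The link the abstract exploits is that the discriminant of p(x) vanishes exactly when p has a repeated zero, so a small discriminant forces two zeros close together. A minimal sketch for the monic quadratic case, where the relation |r1 - r2| = |disc|^(1/2) is exact (the function names and sample coefficients are ours, purely for illustration):

```python
import cmath

def quadratic_roots(b, c):
    """Roots of the monic quadratic x^2 + b*x + c (complex arithmetic)."""
    d = cmath.sqrt(b * b - 4 * c)
    return (-b + d) / 2, (-b - d) / 2

def discriminant(b, c):
    """Discriminant of x^2 + b*x + c; equals (r1 - r2)^2 for its two roots."""
    return b * b - 4 * c

# A small discriminant signals nearby zeros:
for b, c in [(0, -1), (-2, 1 - 1e-8), (1, 1)]:
    r1, r2 = quadratic_roots(b, c)
    gap = abs(r1 - r2)
    disc = discriminant(b, c)
    # |r1 - r2| = |disc|^(1/2) holds exactly for a monic quadratic
    assert abs(gap - abs(disc) ** 0.5) < 1e-9
    print(f"disc = {disc:+.3e}, root gap = {gap:.3e}")
```

The middle example has discriminant 4e-8 and root gap 2e-4: the zeros cluster as the discriminant shrinks, which is the kind of quantitative estimate the paper develops probabilistically for general degree n.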
Against an adaptive adversary, we show that the power of randomization in on-line algorithms is severely limited. We prove the existence of an efficient "simulation" of randomized on-line algorithms by deterministic ones, which is best possible in general. The proof of the upper bound is existential. We deal with the issue of computing the efficient deterministic algorithm, and show that this is possible in very general cases.
The notions of "k-wise ε-dependent" and "k-wise ε-biased" are somewhat weaker in randomness than that of independent random variables. Random variables with these properties can serve as substitutes for independent random variables in randomized algorithms. In this paper, after giving the relevant definitions for these notions, the relationship between them is presented: for any integers k and n with 1 ≤ k ≤ n, if a system of n random variables is k-wise ε-biased, then it is k-wise 4(1 - 2^(-k))ε-dependent in the maximum norm and k-wise 2(1 - 2^(-k))ε-dependent in the L1 norm with respect to the uniform distribution. It has been shown in the literature that k-wise ε-biased random variables can be substituted for k-wise δ-dependent random variables in many randomized algorithms, so the results of this paper are expected to reduce the running time of the algorithms obtained by derandomization.
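To make the definitions concrete, here is a tiny hand-built sample space (our own example, not from the paper): the four 3-bit strings of even parity. Every parity over at most two coordinates is perfectly balanced, so the space is 2-wise 0-biased, yet the full parity is constant, so it is far from 3-wise independent:

```python
from itertools import combinations, product

def bias(space, subset):
    """Bias of the parity of the coordinates in `subset`:
    |Pr[parity = 0] - Pr[parity = 1]| under the uniform distribution on `space`."""
    zeros = sum(1 for point in space if sum(point[i] for i in subset) % 2 == 0)
    return abs(2 * zeros / len(space) - 1)

# Sample space: all 3-bit strings of even parity (4 of the 8 strings).
space = [p for p in product((0, 1), repeat=3) if sum(p) % 2 == 0]

# Every nonempty subset of size <= 2 has zero bias (2-wise 0-biased),
# while the full-parity subset has bias 1 (not 3-wise independent).
for k in (1, 2):
    for s in combinations(range(3), k):
        assert bias(space, s) == 0
assert bias(space, (0, 1, 2)) == 1
print("2-wise 0-biased; bias of the full parity:", bias(space, (0, 1, 2)))
```

The space uses only 4 points instead of the 8 needed for full independence, which is the trade-off the ε-biased constructions scale up.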
This paper studies the wait-free consensus problem in the asynchronous shared-memory model. In this model, processors communicate through shared registers that allow atomic read and write operations (but do not support atomic test-and-set). It is known that the wait-free consensus problem cannot be solved by deterministic protocols. A randomized solution is presented. This protocol is simple, constructive, tolerates up to n - 1 processor crashes (where n is the number of processors), and its expected running time is O(n^2).
A paradigm for the design and analysis of randomized algorithms is introduced. The paradigm, called "designing by expectation", is a scheme by which one can design an algorithm according to its expected behavior in intermediate steps.
This paper is concerned with recurrence relations that arise frequently in the analysis of divide-and-conquer algorithms. In order to solve a problem instance of size x, such an algorithm invests an amount of work a(x) to break the problem into subproblems of sizes h_1(x), h_2(x), ..., h_k(x), and then proceeds to solve the subproblems. Our particular interest is in the case where the sizes h_i(x) are random variables; this may occur either because of randomization within the algorithm or because the instances to be solved are assumed to be drawn from a probability distribution. When the h_i are random variables, the running time of the algorithm on instances of size x is also a random variable T(x). We give several easy-to-apply methods for obtaining fairly tight bounds on the upper tails of the probability distribution of T(x), and present a number of typical applications of these bounds to the analysis of algorithms. The proofs of the bounds are based on an interesting analysis of optimal strategies in certain gambling games.
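A Monte Carlo sketch of one such recurrence (a toy instance of our own choosing, not one of the paper's applications): take a(x) = x, a single subproblem of size h(x) uniform on [0, x), and recursion stopping below size 1, a quickselect-style shape. For this instance one can check that E[T(x)] = 2x - 1, and simulation shows the upper tail of T(x) is indeed thin:

```python
import random

def T(x, rng):
    """Total work of the toy recurrence: invest a(x) = x, recurse on one
    subproblem of random size h(x) uniform in [0, x), stop below size 1."""
    total = 0.0
    while x >= 1:
        total += x
        x = rng.random() * x
    return total

rng = random.Random(1)
x = 1000.0
samples = [T(x, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
tail = sum(1 for t in samples if t > 6 * x) / len(samples)
# Theory for this instance: E[T(x)] = 2x - 1; the tail beyond 6x is rare.
print(f"mean ~ {mean:.0f} (theory: {2 * x - 1:.0f}), Pr[T > 6x] ~ {tail:.5f}")
```

The paper's methods give such tail bounds analytically; the simulation only illustrates the phenomenon they quantify.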
We consider the well-known RAS algorithm for the problem of positive matrix scaling. We give a new bound on the number of iterations of the method for scaling a given d-dimensional matrix A to a prescribed accuracy. Although the RAS method is not a polynomial-time algorithm even for d = 2, our bound implies that for any dimension d the method is a fully polynomial-time approximation scheme. We also present a randomized variant of the algorithm whose (expected) running time improves that of the deterministic method by a factor of d.
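The RAS iteration itself is simple to state for the ordinary matrix (d = 2) case: alternately normalize rows and columns until both sets of sums reach the prescribed targets. A minimal sketch with all-ones target marginals (the function name and sample matrix are our choices for illustration):

```python
def ras_scale(A, iters=200):
    """RAS (Sinkhorn) scaling of a positive matrix: alternately normalize
    rows and columns so that all row and column sums approach 1."""
    n, m = len(A), len(A[0])
    B = [row[:] for row in A]
    for _ in range(iters):
        for i in range(n):                      # R step: scale each row
            r = sum(B[i])
            B[i] = [v / r for v in B[i]]
        for j in range(m):                      # S step: scale each column
            c = sum(B[i][j] for i in range(n))
            for i in range(n):
                B[i][j] /= c
    return B

A = [[2.0, 1.0, 1.0],
     [1.0, 3.0, 1.0],
     [1.0, 1.0, 4.0]]
B = ras_scale(A)
row_err = max(abs(sum(row) - 1) for row in B)
col_err = max(abs(sum(B[i][j] for i in range(3)) - 1) for j in range(3))
print(f"max row-sum error {row_err:.2e}, max column-sum error {col_err:.2e}")
assert row_err < 1e-8 and col_err < 1e-8
```

For a strictly positive matrix the iteration converges geometrically; the paper's contribution is bounding how the iteration count scales with the accuracy and the dimension d.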
We extend Clarkson's randomized algorithm for linear programming to a general scheme for solving convex optimization problems. The scheme can be used to speed up existing algorithms on problems which have many more constraints than variables. In particular, we give a randomized algorithm for solving convex quadratic and linear programs, which uses that scheme together with a variant of Karmarkar's interior point method. For problems with n constraints, d variables, and input length L, if n = Ω(d^2), the expected total number of major Karmarkar iterations is O(d^2 (log n) L), compared to the best known deterministic bound of O(√n L). We also present several other results which follow from the general scheme.
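Clarkson's scheme alternates two moves: solve a small random subproblem, then double the weight of every constraint the candidate solution violates, so persistent violators are sampled ever more often. A toy one-variable instance of that reweighting loop (our own illustration, far simpler than the paper's quadratic-programming setting): minimize x subject to x ≥ a_i for all i, whose optimum is max_i a_i.

```python
import random

def clarkson_max(values, sample_size=8, rng=None):
    """Toy instance of Clarkson-style iterative reweighting: find
    x* = max_i a_i (the optimum of: minimize x s.t. x >= a_i for all i)
    by solving small random subproblems and doubling the weight of
    every constraint the candidate solution violates."""
    rng = rng or random.Random(0)
    weights = [1] * len(values)
    while True:
        # Draw a small multiset of constraints, biased by current weights.
        sample = rng.choices(range(len(values)), weights=weights, k=sample_size)
        candidate = max(values[i] for i in sample)   # solve the subproblem
        violated = [i for i, a in enumerate(values) if a > candidate]
        if not violated:
            return candidate
        for i in violated:                           # punish the violators
            weights[i] *= 2

rng = random.Random(42)
values = [rng.uniform(0.0, 1.0) for _ in range(10000)]
result = clarkson_max(values)
assert result == max(values)
print(f"found optimum {result:.6f} among {len(values)} constraints")
```

Each round touches all n constraints only for the cheap violation check; the expensive "solver" sees just a constant-size sample, which is the source of the speedup when n >> d.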
It is shown how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. Such random variables are called ε-biased. The number of random bits needed to generate the random variables is O(log n + log(1/ε)); thus, if ε is polynomially small, then the size of the sample space is also polynomial. Random variables that are ε-biased can be used to construct "almost" k-wise independent random variables, where ε is a function of k. These probability spaces have various applications:
1. Derandomization of algorithms: many randomized algorithms that require only k-wise independence of their random bits (where k is bounded by O(log n)) can be derandomized by using ε-biased random variables.
2. Reducing the number of random bits required by certain randomized algorithms, e.g., verification of matrix multiplication.
3. Exhaustive testing of combinatorial circuits: the smallest known family for such testing is provided.
4. Communication complexity: two parties can verify equality of strings with high probability while exchanging only a logarithmic number of bits.
5. Hash functions: a polynomial-sized family of hash functions can be constructed such that, with high probability, the sums of a random function over two different sets are unequal.
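Application 2 refers to Freivalds-style verification of matrix multiplication, whose random bits only need to be almost unbiased, making it a natural target for ε-biased derandomization. A standard sketch of the underlying randomized test with fresh random bits (our own minimal version):

```python
import random

def freivalds(A, B, C, rounds=20, rng=None):
    """Freivalds' randomized check that A x B == C.  Each round draws a
    random 0/1 vector r and compares A(Br) with Cr in O(n^2) time; a
    wrong C survives one round with probability <= 1/2."""
    rng = rng or random.Random(0)
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(rounds):
        r = [rng.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False     # witness found: certainly A x B != C
    return True              # probably correct (error prob <= 2**-rounds)

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]     # the true product
wrong = [[19, 22], [43, 51]]
assert freivalds(A, B, C)
assert not freivalds(A, B, wrong)
```

The test needs n random bits per round; replacing the fresh bits with points of an ε-biased space of size polynomial in n is what cuts the randomness to O(log n + log(1/ε)) bits.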