This paper presents a parallel randomized algorithm which computes a pair of ε-optimal strategies for a given (m,n)-matrix game A = [a_ij] with entries in [-1, 1] in O(ε^{-2} log^2(n+m)) expected time on an (n+m)/log(n+m)-processor EREW PRAM. For any fixed accuracy ε > 0, the expected sequential running time of the suggested algorithm is O((n+m) log(n+m)), which is sublinear in mn, the number of input elements of A. On the other hand, simple arguments are given to show that for ε < 1/2, any deterministic algorithm for computing a pair of ε-optimal strategies of an (m,n)-matrix game A with ±1 elements examines Ω(mn) of its elements. In particular, for m = n the randomized algorithm achieves an almost quadratic expected speedup relative to any deterministic method.
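Below is a minimal, hedged Python sketch of a sampling-based multiplicative-weights scheme for matrix games in the spirit described above: each iteration inspects only one sampled column and one sampled row of A, so the total work is roughly O((n+m) log(n+m)/ε^2). The step size, iteration count, and the name approx_matrix_game are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def approx_matrix_game(A, eps=0.1, rng=None):
    """Hedged sketch of a sampling-based multiplicative-weights scheme for an
    (m, n)-matrix game with entries in [-1, 1]. Each iteration touches only one
    sampled column and one sampled row of A; this is in the spirit of the
    abstract, not claimed to be the paper's exact procedure or constants."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    eta = eps / 2.0                                   # assumed step size
    T = int(np.ceil(4.0 * np.log(m + n) / eps**2))    # assumed iteration count
    wx, wy = np.ones(m), np.ones(n)                   # row player maximizes, column player minimizes
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for _ in range(T):
        p, q = wx / wx.sum(), wy / wy.sum()
        x_avg += p
        y_avg += q
        j = rng.choice(n, p=q)                        # sample a column from the column player
        wx *= np.exp(eta * A[:, j])                   # row player's reward: the sampled column
        i = rng.choice(m, p=p)                        # sample a row from the row player
        wy *= np.exp(-eta * A[i, :])                  # column player's cost: the sampled row
    return x_avg / T, y_avg / T

# Tiny usage example on a random +/-1 game
A = np.random.default_rng(0).choice([-1.0, 1.0], size=(50, 60))
x, y = approx_matrix_game(A, eps=0.2)
print("approximate game value:", x @ A @ y)
```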
Yao proved that in the decision-tree model, the average complexity of the best deterministic algorithm is a lower bound on the complexity of randomized algorithms that solve the same problem. Here it is shown that a similar result does not always hold in the common model of distributed computation, the model in which all the processors run the same program (which may depend on the processors' input). We therefore construct a new technique that, together with Yao's method, enables us to show that in many cases a similar relationship does hold in the distributed model. This relationship enables us to carry over known lower bounds on the complexity of deterministic computations to the realm of randomized computations, thus obtaining new results. The new technique can also be used for obtaining results concerning algorithms with bounded error.
Randomized algorithms are algorithms that employ randomness in their solution method. We show that the performance of randomized algorithms is less affected by factors that prevent most parallel deterministic algorithms from attaining their theoretical speedup bounds. A major reason is that the mapping of randomized algorithms onto multiprocessors involves very little scheduling or communication overhead. Furthermore, reliability is enhanced because the failure of a single processor leads only to degradation, not failure, of the algorithm. We present results of an extensive simulation done on a multiprocessor simulator running a randomized branch-and-bound algorithm. The particular case we consider is the knapsack problem, due to its ease of formulation. We observe the largest speedups in precisely those problems that take large amounts of time to solve.
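As an illustration of the kind of algorithm being simulated, here is a small, hedged Python sketch of a randomized branch-and-bound for the 0/1 knapsack problem; the only randomness is in the order the two child branches are explored, standing in for the randomized work distribution across processors. The fractional bound, pruning rule, and the name knapsack_bb are generic textbook choices, not the paper's implementation.

```python
import random

def knapsack_bb(values, weights, capacity, rng=None):
    """Hedged sketch of a randomized branch-and-bound for the 0/1 knapsack problem."""
    rng = rng or random.Random(0)
    # Sort items by value density so the fractional relaxation gives an upper bound.
    items = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(k, cap, val):
        # Upper bound: greedily fill remaining capacity, allowing one fractional item.
        for i in items[k:]:
            if weights[i] <= cap:
                cap -= weights[i]
                val += values[i]
            else:
                return val + values[i] * cap / weights[i]
        return val

    def dfs(k, cap, val):
        nonlocal best
        if val > best:
            best = val
        if k == len(items) or bound(k, cap, val) <= best:
            return
        i = items[k]
        children = [(k + 1, cap, val)]                                   # exclude item i
        if weights[i] <= cap:
            children.append((k + 1, cap - weights[i], val + values[i]))  # include item i
        rng.shuffle(children)                                            # randomized exploration order
        for child in children:
            dfs(*child)

    dfs(0, capacity, 0)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # expected optimum: 220
```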
We study the randomized k-server problem on metric spaces consisting of widely separated subspaces. We give a method which extends existing algorithms to larger spaces with the growth rate of the competitive quotients being at most O(log k). This method yields o(k)-competitive algorithms solving the randomized k-server problem for some special underlying metric spaces, e.g. HSTs of "small" height (but unbounded degree). HSTs are important tools for probabilistic approximation of metric spaces. (C) 2009 Elsevier B.V. All rights reserved.
In this note we compare the randomized extended Kaczmarz (EK) algorithm and randomized coordinate descent (CD) for solving the full-rank overdetermined linear least-squares problem and prove that CD needs fewer operations to satisfy the same residual-related termination criteria. For general least-squares problems, we show that first running CD to compute the residual and then running standard Kaczmarz on the resulting consistent system is more efficient than EK.
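The following hedged NumPy sketch illustrates the two building blocks and the two-stage idea: randomized coordinate descent on the least-squares objective (tracked through the residual r = b - Ax), followed by a Kaczmarz sweep on the consistent system Ax = b - r. The note uses standard (cyclic) Kaczmarz in the second stage; the sketch below uses randomized row selection, and the iteration counts and function names are assumptions.

```python
import numpy as np

def randomized_cd(A, b, iters=20000, rng=None):
    """Randomized coordinate descent on min ||Ax - b||^2: pick column j with
    probability proportional to ||A[:, j]||^2 and minimize exactly over x_j."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    col_norms = np.sum(A**2, axis=0)
    probs = col_norms / col_norms.sum()
    x, r = np.zeros(n), b.copy()          # r = b - A x is kept up to date
    for _ in range(iters):
        j = rng.choice(n, p=probs)
        delta = A[:, j] @ r / col_norms[j]
        x[j] += delta
        r -= delta * A[:, j]
    return x, r

def randomized_kaczmarz(A, b, x0, iters=20000, rng=None):
    """Randomized Kaczmarz on a consistent system Ax = b: pick row i with
    probability proportional to ||A[i, :]||^2 and project onto its hyperplane."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()
    x = x0.copy()
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i, :] @ x) / row_norms[i] * A[i, :]
    return x

# Two-stage usage sketch: estimate the residual with CD, then run Kaczmarz on
# the resulting (approximately) consistent system A x = b - r.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 30))
b = rng.standard_normal(200)
x_cd, r = randomized_cd(A, b, rng=1)
x = randomized_kaczmarz(A, b - r, np.zeros(A.shape[1]), rng=2)
print("distance to least-squares solution:",
      np.linalg.norm(x - np.linalg.lstsq(A, b, rcond=None)[0]))
```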
In this paper, we formulate and solve a randomized optimal consensus problem for multi-agent systems with stochastically time-varying interconnection topology. The considered multi-agent system, using a simple randomized iterating rule, achieves consensus almost surely while solving the optimization problem min_{z ∈ R^d} Σ_{i=1}^{n} f_i(z), in which the optimal solution set of the objective component f_i can only be observed by agent i itself. At each time step, as determined by an independent Bernoulli trial, each agent randomly chooses between taking an average over its neighbor set and projecting onto the optimal solution set of its own objective component. Both directed and bidirectional communication graphs are studied. Connectivity conditions are proposed that guarantee an optimal consensus almost surely under proper convexity and intersection assumptions. The convergence analysis is carried out using convex analysis. We compare the randomized algorithm with its deterministic counterpart via a numerical example. The results illustrate that a group of autonomous agents can reach an optimal opinion when each node simply makes a randomized trade-off between following its neighbors and sticking to its own opinion at each time step. (c) 2012 Elsevier Ltd. All rights reserved.
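A small, hedged one-dimensional simulation of the described rule is sketched below: each agent flips an independent Bernoulli coin and either averages its state over its closed neighbourhood or projects onto its own optimal set, modelled here as an interval. The specific graph, coin bias, interval data, and step count are illustrative assumptions, not taken from the paper's numerical example.

```python
import numpy as np

def randomized_consensus(intervals, adjacency, steps=2000, p=0.5, rng=None):
    """Hedged 1-D simulation of the randomized consensus/optimization rule:
    average with probability p, otherwise project onto [l_i, u_i]."""
    rng = np.random.default_rng(rng)
    intervals = np.asarray(intervals, dtype=float)
    A = np.asarray(adjacency, dtype=float)
    n = len(intervals)
    z = rng.uniform(-10.0, 10.0, size=n)            # arbitrary initial opinions
    for _ in range(steps):
        coins = rng.random(n) < p
        z_new = z.copy()
        for i in range(n):
            if coins[i]:
                nbrs = np.append(np.flatnonzero(A[i]), i)  # closed neighbourhood
                z_new[i] = z[nbrs].mean()                  # averaging step
            else:
                lo, hi = intervals[i]
                z_new[i] = min(max(z[i], lo), hi)          # projection onto own optimal set
        z = z_new
    return z

# Three agents on a line graph whose optimal sets share the point 1.0
intervals = [(0.0, 1.0), (1.0, 2.0), (0.5, 1.5)]
adjacency = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(randomized_consensus(intervals, adjacency, rng=0))
```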
In this paper we present a randomized algorithm for the multipacket routing problem on an n x n mesh. The algorithm completes with high probability in at most kn + o(kn) parallel communication steps, with a queue size of k + o(k). The previous best known algorithm (Kunde and Tensi, J. Parallel Distrib. Comput. 11 (1991), 146-155) takes (5/4)kn + O(kn/f(n)) steps with a queue size of O(kf(n)) (for any 1 ≤ f(n) ≤ n). The algorithm that we present is optimal with respect to queue size. The time bound is within a factor of 2 of the known lower bound. (c) 1995 Academic Press, Inc.
ISBN (print): 9781479981311
In this work, a novel rank-revealing matrix decomposition algorithm termed Compressed randomized UTV (CoR-UTV) decomposition, along with a CoR-UTV variant aided by the power method technique, is proposed. CoR-UTV computes an approximation to a low-rank input matrix by making use of random sampling schemes. Given a large and dense matrix of size m x n with numerical rank k, where k << min{m, n}, CoR-UTV requires a few passes over the data and runs in O(mnk) floating-point operations. Furthermore, CoR-UTV can exploit modern computational platforms and can be optimized for maximum efficiency. CoR-UTV is also applied to solving robust principal component analysis problems. Simulations show that CoR-UTV outperforms existing approaches.
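For intuition about the random sampling and power-method ingredients, here is a hedged NumPy sketch of a generic randomized range-finder low-rank approximation; it is not the CoR-UTV algorithm itself (it returns an SVD-style factorization rather than a UTV one), and the oversampling and power-iteration parameters are assumptions.

```python
import numpy as np

def randomized_low_rank(A, k, power_iters=1, oversample=5, rng=None):
    """Hedged sketch of a randomized low-rank approximation via a Gaussian
    range finder with optional power iterations (a few passes over A)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    ell = k + oversample                       # assumed oversampling amount
    Omega = rng.standard_normal((n, ell))      # random test matrix
    Y = A @ Omega                              # one pass over A
    for _ in range(power_iters):               # extra passes sharpen the basis
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                     # orthonormal basis for the sampled range
    B = Q.T @ A                                # small (ell x n) projected matrix
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_b[:, :k]
    return U, s[:k], Vt[:k, :]                 # rank-k approximation A ~ U diag(s) Vt

# Usage: approximate a numerically rank-10 matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 300))
U, s, Vt = randomized_low_rank(A, k=10)
print("relative error:", np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))
```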
Although the methods of bagging and random forests are some of the most widely used prediction methods, relatively little is known about their algorithmic convergence. In particular, there are not many theoretical guarantees for deciding when an ensemble is "large enough", so that its accuracy is close to that of an ideal infinite ensemble. Because bagging and random forests are randomized algorithms, the choice of ensemble size is closely related to the notion of "algorithmic variance" (i.e., the variance of prediction error due only to the training algorithm). In the present work, we propose a bootstrap method to estimate this variance for bagging, random forests, and related methods in the context of classification. To be specific, suppose the training dataset is fixed, and let the random variable ERR_t denote the prediction error of a randomized ensemble of size t. Working under a "first-order model" for randomized ensembles, we prove that the centered law of ERR_t can be consistently approximated via the proposed method as t → ∞. Meanwhile, the computational cost of the method is quite modest, by virtue of an extrapolation technique. As a consequence, the method offers a practical guideline for deciding when the algorithmic fluctuations of ERR_t are negligible.
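The following hedged scikit-learn sketch conveys the basic idea: train a bagged ensemble of size t once, then bootstrap over its base learners to approximate the spread of ERR_t across independent runs of the training algorithm. The dataset, ensemble size, and number of bootstrap replicates are assumptions, and the paper's extrapolation refinement is not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

t = 50                                                    # ensemble size of interest
ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=t,
                             random_state=0).fit(X_tr, y_tr)
# Per-tree votes on the test set (n_test x t matrix of predicted labels).
votes = np.stack([est.predict(X_te) for est in ensemble.estimators_], axis=1)

def ensemble_error(cols):
    # Majority vote over a subset of trees, then the misclassification rate.
    maj = (votes[:, cols].mean(axis=1) > 0.5).astype(int)
    return np.mean(maj != y_te)

# Bootstrap over the base learners: resample t trees with replacement.
boot_errors = [ensemble_error(rng.integers(0, t, size=t)) for _ in range(500)]
print("bootstrap estimate of sd(ERR_t):", np.std(boot_errors))
```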
ISBN (print): 9789955282839
We address randomized methods for convex optimization based on generating points uniformly distributed in a convex set. We estimate the rate of convergence for such methods and demonstrate the link with the center-of-gravity method. To implement this approach we exploit two modern Monte Carlo schemes for generating points which are approximately uniformly distributed in a given convex set. Both methods use a boundary oracle to find the intersection of a ray and the set. The first method is Hit-and-Run; the second is sometimes called Shake-and-Bake. Numerical simulation results look very promising.
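Below is a hedged Python sketch of the Hit-and-Run walk driven by a boundary oracle, here instantiated for a polytope {x : Cx <= e}; the approximately uniform samples can then be averaged to estimate the center of gravity, in the spirit of the approach above. The oracle construction, step count, and example polytope are illustrative assumptions (Shake-and-Bake is not sketched).

```python
import numpy as np

def polytope_boundary_oracle(C, e):
    """Boundary oracle for the polytope {x : Cx <= e}: given an interior point x
    and a direction d, return the interval [t_lo, t_hi] with x + t*d inside."""
    def oracle(x, d):
        num, den = e - C @ x, C @ d
        t_hi = np.min(num[den > 0] / den[den > 0]) if np.any(den > 0) else np.inf
        t_lo = np.max(num[den < 0] / den[den < 0]) if np.any(den < 0) else -np.inf
        return t_lo, t_hi
    return oracle

def hit_and_run(x0, oracle, steps=1000, rng=None):
    """Hedged sketch of the Hit-and-Run walk: draw a uniform random direction,
    intersect the line with the set via the boundary oracle, and jump to a
    uniform point on the resulting chord."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(steps):
        d = rng.standard_normal(x.shape)
        d /= np.linalg.norm(d)               # uniform direction on the sphere
        t_lo, t_hi = oracle(x, d)
        x = x + rng.uniform(t_lo, t_hi) * d  # uniform point on the chord
        samples.append(x.copy())
    return np.array(samples)

# Usage: approximately uniform points in the unit square, averaged to estimate
# its center of gravity (as a randomized cutting scheme would use them).
C = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
e = np.array([1., 0., 1., 0.])               # 0 <= x1 <= 1, 0 <= x2 <= 1
pts = hit_and_run([0.5, 0.5], polytope_boundary_oracle(C, e), steps=5000, rng=0)
print("estimated center of gravity:", pts.mean(axis=0))
```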