The fitness-level method, also called the method of f-based partitions, is an intuitive and widely used technique for the running time analysis of randomized search heuristics. It was originally defined to prove upper and lower bounds on the expected running time. Recently, upper tail bounds were added to the technique; however, these tail bounds only apply to running times that are at least twice as large as the expectation. We remove this restriction and supplement the fitness-level method with sharp tail bounds, including lower tails. As an exemplary application, we prove that the running time of randomized local search on ONEMAX is sharply concentrated around n ln n - 0.1159...n. (C) 2013 Elsevier B.V. All rights reserved.
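The concentration point quoted above is easy to probe empirically. The following is a minimal sketch of randomized local search (RLS) on ONEMAX under the standard setup (flip one uniformly chosen bit per step, accept iff fitness does not decrease); the function name and parameters are illustrative, not taken from the paper. The empirical mean hitting time should land near n ln n - 0.1159...n.

```python
import random

def rls_onemax(n, seed=None):
    """Randomized local search on ONEMAX: start from a uniform random
    bit string, flip one uniformly chosen bit per step, and accept the
    result iff the number of ones does not decrease.  Returns the
    number of steps until the all-ones string is reached."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    ones = sum(x)
    steps = 0
    while ones < n:
        i = rng.randrange(n)
        if x[i] == 0:      # flipping a zero bit strictly improves fitness
            x[i] = 1
            ones += 1
        # flipping a one bit would decrease fitness, so it is rejected
        steps += 1
    return steps
```

Averaging `rls_onemax(100, seed=s)` over a few hundred seeds gives values close to 100 ln 100 - 11.59 ≈ 449, consistent with the stated concentration.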
Integer multiplication, as one of the basic arithmetic functions, has been the focus of several complexity-theoretic investigations, and ordered binary decision diagrams (OBDDs) are among the most common dynamic data structures for Boolean functions. Only in 2008 was the question whether the deterministic OBDD complexity of the most significant bit of integer multiplication is exponential answered affirmatively. Since probabilistic methods have turned out to be useful in almost all areas of computer science, one may ask whether randomization can help to represent the most significant bit of integer multiplication in smaller size. Here, it is proved that the randomized OBDD complexity is also exponential. (C) 2010 Elsevier B.V. All rights reserved.
We present a randomized parallel algorithm that computes the greatest common divisor of two n-bit integers with probability 1 - o(1) in O(n log log n / log n) time using O(n^(6+epsilon)) processors, for any epsilon > 0, on the EREW PRAM model of parallel computation. The algorithm either gives a correct answer or reports failure. We believe this to be the first randomized sublinear-time algorithm on the EREW PRAM for this problem. (C) 2009 Elsevier B.V. All rights reserved.
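For readers unfamiliar with the underlying operation, here is a sequential binary (Stein's) GCD, a hedged illustration only: the paper's contribution is an EREW PRAM parallelization of gcd computation, not this particular routine.

```python
def binary_gcd(a, b):
    """Stein's binary GCD for non-negative integers: repeatedly strip
    common factors of two, then reduce by subtraction on odd operands.
    A sequential baseline for the operation the parallel algorithm
    computes; it is not the paper's algorithm."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:   # factor out powers of two common to both
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:         # make a odd
        a >>= 1
    while b:
        while b & 1 == 0:     # strip factors of two from b
            b >>= 1
        if a > b:
            a, b = b, a       # keep a <= b so the subtraction is safe
        b -= a
    return a << shift         # restore the common powers of two
```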
Several randomized algorithms make use of convolution to estimate the score vector of matches between a text string of length N and a pattern string of length M, i.e., the vector obtained when the pattern is slid along the text and the number of matches is counted for each position. These algorithms run in deterministic time O(kN log M), and find an unbiased estimator of the scores whose variance is (M - c)^2/k, where c is the actual score; here k is an adjustable parameter that provides a tradeoff between computation time and lower variance. This paper provides an algorithm that also runs in deterministic time O(kN log M) but achieves a lower variance of min(M/k, M - c)(M - c)/k. For all score values c less than M - (M/k), our variance is essentially a factor of k smaller than in previous work, and for M - (M/k) < c <= M it matches the previous work's variance. As in the previous work, our estimator is unbiased, we make no assumption about the probabilistic characteristics of the input or about the size of the alphabet, and our solution extends to string matching with classes, class complements, "never match" and "always match" symbols, to the weighted case, and to higher dimensions. (C) 2013 Elsevier B.V. All rights reserved.
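To make the "score vector" concrete, here is the exact deterministic decomposition these estimators approximate: one correlation per alphabet symbol, summed. This sketch uses a direct O(NM) correlation per symbol for clarity; the algorithms in the abstract replace the per-symbol pass with k FFT-based convolutions to reach O(kN log M). All names here are illustrative.

```python
def score_vector(text, pattern):
    """Exact score vector: scores[i] is the number of positions j with
    text[i+j] == pattern[j].  Computed as a sum over the alphabet of
    correlations between per-symbol indicator vectors -- the same
    decomposition that convolution-based estimators approximate."""
    n, m = len(text), len(pattern)
    alphabet = set(text) | set(pattern)
    scores = [0] * (n - m + 1)
    for s in alphabet:
        t_ind = [1 if ch == s else 0 for ch in text]
        p_ind = [1 if ch == s else 0 for ch in pattern]
        for i in range(n - m + 1):
            # correlation at offset i: both indicators are 1 exactly
            # where text and pattern agree on symbol s
            scores[i] += sum(t_ind[i + j] * p_ind[j] for j in range(m))
    return scores
```

For example, `score_vector("abracadabra", "abra")` reports full matches (score 4) at offsets 0 and 7 and partial-match counts elsewhere.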
Circle detection is fundamental in pattern recognition and computer vision. The randomized approach has received much attention for its computational benefit when compared with the Hough transform. In this paper, a multiple-evidence-based sampling strategy is proposed to speed up the randomized approach. Next, an efficient refinement strategy is proposed to improve the accuracy. Based on ten test images of different kinds, experimental results demonstrate the computation-saving and accuracy effects of plugging the proposed strategies into three existing circle detection methods. (C) 2011 Elsevier Ltd. All rights reserved.
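The basic randomized detection loop that such sampling strategies accelerate can be sketched as follows, assuming a generic sample-three-points, fit-circumcircle, count-inliers scheme; this is the common baseline, not the paper's multiple-evidence method, and all thresholds are illustrative.

```python
import math
import random

def circle_from_3(p1, p2, p3):
    """Circumcircle (center, radius) through three points, or None if
    the points are (near-)collinear."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def randomized_circle_detect(points, trials=200, tol=1.0,
                             min_inliers=20, seed=0):
    """Baseline randomized detection: repeatedly sample three edge
    points, hypothesize their circumcircle, and keep the hypothesis
    supported by the most inlier points."""
    rng = random.Random(seed)
    best, best_count = None, 0
    for _ in range(trials):
        fit = circle_from_3(*rng.sample(points, 3))
        if fit is None:
            continue
        (cx, cy), r = fit
        count = sum(abs(math.hypot(x - cx, y - cy) - r) <= tol
                    for x, y in points)
        if count >= min_inliers and count > best_count:
            best, best_count = fit, count
    return best
```

The sampling strategies in the abstract reduce how many of these trials are wasted on hypotheses with little supporting evidence.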
All known fast randomized Byzantine Agreement (BA) protocols have (rare) infinite runs. We present a method of combining a randomized BA protocol of a certain class with any deterministic BA protocol to obtain a randomized protocol that preserves the expected average complexity of the randomized protocol while guaranteeing termination in all runs. In particular, we obtain a randomized BA protocol with constant expected time, which always terminates within t + O(log t) rounds, where t = O(n) is the number of faulty processors.
In this letter, the problem of reducing the computational complexity of a recently developed data-driven predictive control scheme is considered. For this purpose, a randomized data compression technique is proposed, which makes the dimension of the decision variable independent of the recorded data size, thereby reducing the complexity of the online optimization problems in data-driven predictive control to that of classical model-based predictive control. The proposed method outperforms other competing complexity reduction schemes in benchmark tests, while guaranteeing similar control performance and stability properties.
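The letter does not spell out its compression scheme here, but a standard randomized primitive for making a decision variable's dimension independent of the recorded data size is a random projection of the data matrix. The following is a generic, hedged sketch of that primitive only; the function name, the Gaussian sketching matrix, and the dimensions are assumptions, not the letter's specific construction.

```python
import random

def random_sketch(data_matrix, target_dim, seed=0):
    """Right-multiply an n x T data matrix by a T x d random Gaussian
    sketching matrix, so that a downstream optimization parameterized
    over the columns works with d variables instead of T (the recorded
    data size).  Generic randomized compression, illustrative only."""
    rng = random.Random(seed)
    n = len(data_matrix)
    T = len(data_matrix[0])
    # entries scaled by 1/sqrt(d), the usual Johnson-Lindenstrauss scaling
    S = [[rng.gauss(0, 1 / target_dim ** 0.5) for _ in range(target_dim)]
         for _ in range(T)]
    return [[sum(data_matrix[i][t] * S[t][j] for t in range(T))
             for j in range(target_dim)]
            for i in range(n)]
```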
We design new deterministic and randomized algorithms for computational problems in free solvable groups. In particular, we prove that the word problem and the power problem can be solved in quasi-linear time and the conjugacy problem can be solved in quasi-quartic time by Monte Carlo type algorithms. (c) 2014 Elsevier Inc. All rights reserved.
When randomized ensembles such as bagging or random forests are used for binary classification, the prediction error of the ensemble tends to decrease and stabilize as the number of classifiers increases. However, the precise relationship between prediction error and ensemble size is unknown in practice. In the standard case when classifiers are aggregated by majority vote, the present work offers a way to quantify this convergence in terms of "algorithmic variance," i.e. the variance of prediction error due only to the randomized training algorithm. Specifically, we study a theoretical upper bound on this variance, and show that it is sharp - in the sense that it is attained by a specific family of randomized classifiers. Next, we address the problem of estimating the unknown value of the bound, which leads to a unique twist on the classical problem of non-parametric density estimation. In particular, we develop an estimator for the bound and show that its MSE matches optimal non-parametric rates under certain conditions. (Concurrent with this work, some closely related results have also been considered in Cannings and Samworth (2017) and Lopes (2019).) (C) 2019 Elsevier B.V. All rights reserved.
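The notion of "algorithmic variance" can be illustrated with a toy simulation, assuming a simplified model in which each base classifier independently mispredicts a test point with a fixed probability; this is a hedged sketch of the quantity being bounded, not the paper's estimator.

```python
import random

def ensemble_error(n_classifiers, error_probs, rng):
    """Majority-vote error rate of one randomized ensemble: base
    classifier votes on point j are wrong independently with
    probability error_probs[j]; the ensemble errs when more than
    half the votes are wrong."""
    errs = 0
    for p in error_probs:
        wrong = sum(rng.random() < p for _ in range(n_classifiers))
        errs += wrong > n_classifiers / 2
    return errs / len(error_probs)

def algorithmic_variance(n_classifiers, error_probs, reps=300, seed=0):
    """Variance of the ensemble's error rate across independent draws
    of the randomized training algorithm (here, of the votes)."""
    rng = random.Random(seed)
    samples = [ensemble_error(n_classifiers, error_probs, rng)
               for _ in range(reps)]
    mean = sum(samples) / reps
    return sum((s - mean) ** 2 for s in samples) / reps
```

In this toy model the variance shrinks rapidly as the ensemble grows, matching the qualitative convergence the paper quantifies.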
The use of randomization in online multiprocessor scheduling is studied. The problem of scheduling independent jobs on m machines online originates with Graham [16]. While the deterministic case of this problem has been studied extensively, little work has been done on the randomized case. For m = 2, a randomized 4/3-competitive algorithm was found by Bartal et al. A randomized algorithm for m ≥ 3 is presented. It achieves competitive ratios of 1.55665, 1.65888, 1.73376, 1.78295, and 1.81681 for m = 3, 4, 5, 6, 7, respectively. These competitive ratios are less than the best deterministic lower bound for m = 3, 4, 5 and less than the best known deterministic competitive ratio for m = 3, 4, 5, 6, 7. Two models of multiprocessor scheduling with rejection are studied. The first is the model of Bartal et al. Two randomized algorithms for this model are presented. The first algorithm performs well for small m, achieving competitive ratios of 3/2, (7 + √129)/10 < 1.83579, and (5 + 2√22)/7 < 2.05441 for m = 2, 3, 4, respectively. The second algorithm outperforms the first for m ≥ 5. It beats the deterministic algorithm of Bartal et al. for all m = 5, ..., 1000. It is conjectured that this is true for all m. The second model differs in that preemption is allowed. For this model, randomized algorithms are presented which outperform the best deterministic algorithm for small m.
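For context, the deterministic baseline that the randomized algorithms above improve on is Graham's online list scheduling, which is (2 - 1/m)-competitive; a minimal sketch follows. This is only the classical greedy rule, not any of the randomized algorithms in the abstract.

```python
def graham_list_schedule(jobs, m):
    """Graham's online list scheduling: assign each arriving job (with
    processing time jobs[i]) to the currently least-loaded of the m
    machines, and return the resulting makespan."""
    loads = [0] * m
    for p in jobs:
        i = loads.index(min(loads))   # least-loaded machine, ties to lowest index
        loads[i] += p
    return max(loads)
```

For example, `graham_list_schedule([2, 3, 4, 5, 6], 2)` yields makespan 12, while the offline optimum for this instance is 10 (split {2, 3, 5} / {4, 6}), illustrating the gap that both deterministic and randomized online algorithms try to narrow.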