This paper considers on-line and semi-on-line scheduling problems on m parallel machines with the objective of maximizing the minimum machine load. For the on-line version, we prove that the algorithm Random is an optimal randomized algorithm on two machines, and we derive a new randomized upper bound for general m machines that significantly improves the known upper bound. For the semi-on-line version with non-increasing job processing times, we show that the LS algorithm is an optimal deterministic algorithm for the two- and three-machine cases. We further present an optimal randomized algorithm, RLS, for the two-machine case.
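Assuming "LS" here denotes the classical greedy list-scheduling rule (each arriving job is placed on the machine with the currently smallest load), a minimal Python sketch of that rule for the machine-covering objective:

```python
import heapq

def ls_machine_covering(jobs, m):
    """Greedy LS rule: put each arriving job on the machine with the
    currently smallest load, trying to keep the minimum load high."""
    heap = [(0.0, i) for i in range(m)]   # (load, machine id) min-heap
    heapq.heapify(heap)
    assignment = []
    for p in jobs:                        # jobs arrive one by one
        load, i = heapq.heappop(heap)     # least loaded machine so far
        assignment.append(i)
        heapq.heappush(heap, (load + p, i))
    return min(load for load, _ in heap), assignment

# Semi-on-line version: jobs are known to arrive in non-increasing order.
cover, _ = ls_machine_covering(sorted([5, 3, 3, 2, 2, 1], reverse=True), m=2)
print(cover)   # minimum machine load achieved by LS (here 8)
```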
Given a digraph D = (V, A) and a set of k pairs of vertices in V, we are interested in finding, for each pair (x_i, y_i), a directed path connecting x_i to y_i such that the set of k paths so found is arc-disjoint. For arbitrary digraphs the problem is NP-complete, even for k = 2. We present a polynomial-time randomized algorithm for finding arc-disjoint paths in an r-regular expander digraph D. We show that if D has sufficiently strong expansion properties and the degree r is sufficiently large, then all sets of k = Ω(n / log n) pairs of vertices can be joined. This is within a constant factor of best possible.
The proliferation of video content on the Web makes similarity detection an indispensable tool in Web data management, searching, and navigation. In this paper, we propose a number of algorithms to efficiently measure video similarity. We model a video as a set of frames, which are represented as high-dimensional vectors in a feature space. Our goal is to measure ideal video similarity (IVS), defined as the percentage of clusters of similar frames shared between two video sequences. Since IVS is too complex to be deployed in large database applications, we approximate it with Voronoi video similarity (VVS), defined as the volume of the intersection between Voronoi cells of similar clusters. We propose a class of randomized algorithms to estimate VVS by first summarizing each video with a small set of its sampled frames, called the video signature (ViSig), and then calculating the distances between corresponding frames from the two ViSigs. By generating samples with a probability distribution that describes the video statistics, and ranking them based upon their likelihood of making an error in the estimation, we show analytically that ViSig can provide an unbiased estimate of IVS. Experimental results on a large dataset of Web video and a set of MPEG-7 test sequences with artificially generated similar versions are provided to demonstrate the retrieval performance of our proposed techniques.
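As a rough illustration of the signature idea (not the paper's exact sampling distribution or ranking step), the sketch below summarizes each video by the frames nearest a shared set of random seed vectors and compares corresponding signature frames; the seed count, feature dimension, and epsilon threshold are illustrative placeholders:

```python
import numpy as np

def visig(frames, seeds):
    """Signature: for each seed vector, keep the video frame nearest to
    it (a simplified seed-based frame-sampling rule)."""
    frames = np.asarray(frames)
    return np.array([frames[np.argmin(np.linalg.norm(frames - s, axis=1))]
                     for s in seeds])

def estimate_similarity(sig_x, sig_y, eps):
    """Fraction of corresponding signature frames within eps of each
    other, a rough proxy for the shared-frame-cluster percentage."""
    d = np.linalg.norm(sig_x - sig_y, axis=1)
    return float(np.mean(d <= eps))

rng = np.random.default_rng(0)
seeds = rng.uniform(0, 1, size=(10, 64))      # 10 seeds in a 64-d feature space
video_a = rng.uniform(0, 1, size=(500, 64))   # stand-in frame features
video_b = video_a + rng.normal(0, 0.01, size=video_a.shape)  # near-duplicate
print(estimate_similarity(visig(video_a, seeds), visig(video_b, seeds), eps=0.5))
```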
High performance applications involving large data sets require the efficient and flexible use of multiple disks. In an external memory machine with D parallel, independent disks, only one block can be accessed on each disk in one I/O step. This restriction leads to a load balancing problem that is perhaps the main inhibitor for the efficient adaptation of single-disk external memory algorithms to multiple disks. We solve this problem for arbitrary access patterns by randomly mapping blocks of a logical address space to the disks. We show that a shared buffer of O(D) blocks suffices to support efficient writing. The analysis uses the properties of negative association to handle dependencies between the random variables involved. This approach might be of independent interest for probabilistic analysis in general. If two randomly allocated copies of each block exist, N arbitrary blocks can be read within ⌈N/D⌉ + 1 I/O steps with high probability. The redundancy can be further reduced from 2 to 1 + 1/r for any integer r without a big impact on reading efficiency. From the point of view of external memory models, these results rehabilitate Aggarwal and Vitter's "single-disk multi-head" model [1] that allows access to D arbitrary blocks in each I/O step. This powerful model can be emulated on the physically more realistic independent disk model [2] with small constant overhead factors. Parallel disk external memory algorithms can therefore be developed in the multi-head model first. The emulation result can then be applied directly or further refinements can be added.
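A toy sketch of the reading side under the redundancy-2 scheme: every block gets two independently chosen random disks, and each request fetches the copy on the currently less loaded disk. The paper's ⌈N/D⌉ + 1 guarantee relies on an optimal (flow-style) assignment of requests to copies; the greedy choice below only approximates it:

```python
import random

def schedule_reads(blocks, D, rng=random.Random(0)):
    """Random duplicate allocation: each block lives on two independently
    chosen random disks; a read fetches the copy on the less loaded disk."""
    copies = {b: rng.sample(range(D), 2) for b in blocks}  # two copies per block
    load = [0] * D
    choice = {}
    for b in blocks:
        d0, d1 = copies[b]
        d = d0 if load[d0] <= load[d1] else d1             # less loaded copy
        choice[b] = d
        load[d] += 1
    # Each I/O step reads at most one block per disk, so this batch
    # needs max(load) steps; an optimal assignment gets ceil(N/D) + 1 w.h.p.
    return max(load), choice

steps, _ = schedule_reads(list(range(1000)), D=16)
print(steps)   # typically close to ceil(1000/16) = 63
```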
We consider the problem of scheduling a collection of dynamically arriving jobs with unknown execution times so as to minimize the average flow time. This is the classic CPU scheduling problem faced by time-sharing operating systems where preemption is allowed. It is easy to see that every algorithm that doesn't unnecessarily idle the processor is at worst n-competitive, where n is the number of jobs. Yet there was no known nonclairvoyant algorithm, deterministic or randomized, with a competitive ratio provably O(n^(1-ε)). In this article, we give a randomized nonclairvoyant algorithm, RMLF, that has competitive ratio O(log n log log n) against an oblivious adversary. RMLF is a slight variation of the multilevel feedback (MLF) algorithm used by the UNIX operating system, further justifying the adoption of this algorithm. It is known that every randomized nonclairvoyant algorithm is Ω(log n)-competitive, and that every deterministic nonclairvoyant algorithm is Ω(n^(1/3))-competitive.
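A discrete-time sketch of the multilevel-feedback skeleton behind RMLF: queue i grants a quantum of roughly 2^i, jobs that exhaust their quantum are demoted, and the lowest nonempty queue always runs. The random jitter on each quantum below merely stands in for RMLF's randomized quantum targets, whose exact distribution is specified in the article:

```python
import random
from collections import deque

def rmlf_avg_flow(jobs, rng=random.Random(0)):
    """Toy discrete-time (R)MLF simulation.  jobs: list of (release, size)
    with integer times.  Queue i grants a quantum of about 2**i."""
    quantum = lambda lvl: max(1, round(rng.uniform(0.75, 1.0) * 2 ** lvl))
    arrivals = sorted(jobs)
    queues = [deque()]
    t = nxt = done = 0
    flow = 0.0
    while done < len(jobs):
        while nxt < len(arrivals) and arrivals[nxt][0] <= t:
            r, p = arrivals[nxt]
            queues[0].append([p, quantum(0), r])   # [remaining, quantum, release]
            nxt += 1
        lvl = next((i for i, q in enumerate(queues) if q), None)
        if lvl is None:                            # idle until the next arrival
            t = arrivals[nxt][0]
            continue
        job = queues[lvl][0]                       # run lowest nonempty queue
        job[0] -= 1
        job[1] -= 1
        t += 1
        if job[0] == 0:                            # job finished
            queues[lvl].popleft()
            done += 1
            flow += t - job[2]
        elif job[1] == 0:                          # quantum exhausted: demote
            queues[lvl].popleft()
            if lvl + 1 == len(queues):
                queues.append(deque())
            job[1] = quantum(lvl + 1)
            queues[lvl + 1].append(job)
    return flow / len(jobs)

print(rmlf_avg_flow([(0, 8), (1, 1), (2, 2), (3, 1)]))
```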
A probabilistic strategy, early termination, enables different interpolation algorithms to adapt to the degree or the number of terms in the target polynomial when neither is supplied in the input. In addition to dense algorithms, we implement this strategy in sparse interpolation algorithms. Based on early termination, racing algorithms execute dense and sparse algorithms simultaneously. The racing algorithms can be embedded as the univariate interpolation substep within Zippel's multivariate method. In addition, we experimentally verify some heuristics of early termination, which make use of thresholds and post-verification. (C) 2003 Elsevier Ltd. All rights reserved.
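A minimal sketch of the early-termination idea in the dense (Newton) case, assuming exact rational arithmetic and a black-box polynomial: keep interpolating at fresh random points until eta consecutive leading Newton coefficients vanish, eta playing the role of the threshold that the post-verification heuristics tune:

```python
from fractions import Fraction
import random

def et_newton_interp(f, eta=2, rng=random.Random(1)):
    """Dense Newton interpolation with early termination: add random
    evaluation points until eta consecutive leading Newton coefficients
    vanish, then conclude (with high probability) that the degree of the
    black-box polynomial f has been reached."""
    xs, cs, zeros = [], [], 0
    while zeros < eta:
        x = Fraction(rng.randrange(1, 10**9))
        if x in xs:
            continue
        val, basis = Fraction(0), Fraction(1)
        for xi, ci in zip(xs, cs):          # evaluate current Newton form at x
            val += ci * basis
            basis *= x - xi
        c = (Fraction(f(x)) - val) / basis  # next Newton coefficient
        zeros = zeros + 1 if c == 0 else 0
        xs.append(x)
        cs.append(c)
    while cs and cs[-1] == 0:               # drop the zero "guard" terms
        xs.pop()
        cs.pop()
    return xs, cs                           # Newton nodes and coefficients

# Black box for 3*x**4 + x + 7: the degree is discovered, not supplied.
nodes, coeffs = et_newton_interp(lambda x: 3 * x**4 + x + 7)
print(len(coeffs) - 1)                      # prints 4
```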
ISBN (print): 9539676967
This paper presents a new approximate algorithm, the Nested Queue-Jumping Algorithm (NQJA), for the traveling salesman problem. The proposed algorithm combines ideas from heuristic algorithms, randomized algorithms, and local optimization. Numerical results show that for small-scale instances, using the Queue-Jumping Algorithm (QJA) directly obtains the known optimal solution with high probability. For large-scale instances, NQJA generates high-quality solutions compared to well-known heuristic methods. Moreover, the shortest tour for the China 144-city TSP found by NQJA is shorter than the previously best known tour. NQJA can thus be a very promising alternative for solving the TSP. Although NQJA is devised specifically for the TSP, its underlying ideas may offer guidance for other NP-hard combinatorial optimization problems.
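The NQJA procedure itself is not reproduced here; the sketch below only illustrates the general combination of randomization and local optimization that the abstract describes, using random restarts with plain 2-opt improvement:

```python
import math
import random

def tour_len(tour, d):
    """Length of a closed tour under distance matrix d."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def randomized_two_opt(d, restarts=20, rng=random.Random(0)):
    """Random-restart 2-opt: repeatedly start from a random tour, apply
    2-opt exchanges until no improvement remains, and keep the best."""
    n, best, best_len = len(d), None, math.inf
    for _ in range(restarts):
        tour = list(range(n))
        rng.shuffle(tour)                              # random initial tour
        improved = True
        while improved:
            improved = False
            for i in range(n - 1):
                for j in range(i + 2, n - (i == 0)):   # skip adjacent edges
                    a, b = tour[i], tour[i + 1]
                    c, e = tour[j], tour[(j + 1) % n]
                    if d[a][c] + d[b][e] < d[a][b] + d[c][e]:
                        tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                        improved = True
        L = tour_len(tour, d)
        if L < best_len:
            best, best_len = tour, L
    return best, best_len

# Symmetric toy instance; NQJA's queue-jumping moves are not modeled here.
d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
print(randomized_two_opt(d))
```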
We prove lower bounds on the competitive ratio of randomized algorithms for several on-line scheduling problems. The main result is a bound of e/(e - 1) for the on-line problem with the objective of minimizing the sum of completion times of jobs that arrive over time at their release times and are to be processed on a single machine. This lower bound shows that a randomized algorithm designed by Chekuri et al. (Proceedings of the Eighth ACM-SIAM Symposium on Discrete Algorithms, 1997, pp. 609-618) is a best possible randomized algorithm for this problem. (C) 2002 Elsevier Science B.V. All rights reserved.
We present a new polynomial-time randomized algorithm for discovering affine equalities involving variables in a program. The key idea of the algorithm is to execute a code fragment on a few random inputs, but in such a way that all paths are covered on each run. This makes it possible to rule out invalid relationships even with very few runs. The algorithm is based on two main techniques. First, both branches of a conditional are executed on each run, and at join points we perform an affine combination of the joining states. Second, in the branches of an equality conditional we adjust the data values on the fly to reflect the truth value of the guarding boolean expression. This increases the number of affine equalities that the analysis discovers. The algorithm is simpler to implement than alternative deterministic versions, has better computational complexity, and has an extremely small probability of error for even a small number of runs. This algorithm is an example of how randomization can provide a trade-off between the cost and complexity of program analysis, and a small probability of unsoundness.
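A small sketch of the join-point trick on a hypothetical two-branch fragment: both branch states are executed and merged by a random affine combination, so any affine equality that genuinely holds survives the merge, while a spurious one almost surely breaks. (The on-the-fly adjustment at equality guards is omitted here.)

```python
import random

rng = random.Random(42)

def affine_join(s1, s2):
    """Merge two program states (dicts var -> value) by a random affine
    combination w*s1 + (1-w)*s2: affine equalities holding in both states
    also hold in the merge; invalid ones almost surely do not."""
    w = rng.randrange(1, 2**31)
    return {v: w * s1[v] + (1 - w) * s2[v] for v in s1}

def run_fragment(x):
    """Hypothetical fragment
        if (*) { a = x + 1; b = 2*x } else { a = 2*x - 3; b = 4*x - 8 }
    with both branches executed and their states affinely joined."""
    s_then = {'x': x, 'a': x + 1, 'b': 2 * x}
    s_else = {'x': x, 'a': 2 * x - 3, 'b': 4 * x - 8}
    return affine_join(s_then, s_else)

# The relation b == 2*a - 2 holds in both branches, so it survives the
# random joins on every run; an invalid relation would quickly fail.
for _ in range(3):
    s = run_fragment(rng.randrange(10**6))
    print(s['b'] == 2 * s['a'] - 2)   # True on each run
```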
ISBN (print): 9781581136418
The problem of factoring a polynomial in a single or several variables over a finite field, the rational numbers or the complex numbers is one of the success stories in the discipline of symbolic computation. In the early 1960s implementors investigated the constructive methods known from classical algebra books, but--with the exception of Gauss's distinct degree factorization algorithm--found the algorithms quite inefficient in practice [16]. The contributions in algorithmic techniques that have been made over the next 40 years are truly a hallmark of symbolic computation. The early pioneers, Berlekamp, Musser, Wang, Weinberger, Zassenhaus and others, applied new ideas like randomization (even before the now famous algorithms for primality testing by Rabin and Strassen) and generic programming with coefficient domains as abstract data classes, and they introduced the powerful Hensel lifting lemma to computer algebra. We note that while de-randomization for integer primality testing has been accomplished recently [1], the same remains open for the problem of computing a root of a polynomial modulo a large prime [12, Research Problem 14.48]. Polynomial-time complexity for rational coefficients was established in the early 1980s by the now-famous lattice basis reduction algorithm of A. Lenstra, H. W. Lenstra, Jr., and L. Lovász. The case of many variables first became an application of the DeMillo and Lipton/Schwartz/Zippel lemma [30] and then triggered a fundamental generalization from the standard sparse (distributed) representation of polynomials to representation by straight-line and black box programs [11, 17, 19]. Effective versions of the Hilbert irreducibility theorem are needed for the probabilistic analysis, and these have serendipitously later also played a role in the PCP characterization of NP [2]. Unlike many other problems in commutative algebra and algebraic geometry, such as algebraic system solving, the polynomial factoring problem is of probabilistic polynomial-time complexity.