The paging problem is defined as follows: we are given a two-level memory system, in which one level is a fast memory, called cache, capable of holding k items, and the second level is an unbounded but slow memory. At each given time step, a request to an item is issued. Given a request to an item p, a miss occurs if p is not present in the fast memory. In response to a miss, we need to choose an item q in the cache and replace it by p. The choice of q needs to be made on-line, without the knowledge of future requests. The objective is to design a replacement strategy with a small number of misses. In this paper we use competitive analysis to study the performance of randomized on-line paging algorithms. Our goal is to show how the concept of work functions, used previously mostly for the analysis of deterministic algorithms, can also be applied, in a systematic fashion, to the randomized case. We present two results: we first show that the competitive ratio of the marking algorithm is exactly 2H_k - 1. Previously, it was known to be between H_k and 2H_k. Then we provide a new, H_k-competitive algorithm for paging. Our algorithm, as well as its analysis, is simpler than the known algorithm by McGeoch and Sleator. Another advantage of our algorithm is that it can be implemented with complexity bounds independent of the number of past requests: O(k^2 log k) memory and O(k^2) time per request. (C) 2000 Elsevier Science B.V. All rights reserved.
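The marking algorithm analyzed above admits a short sketch. The following is a minimal Python version of the general randomized marking scheme (not the paper's new H_k-competitive algorithm): a hit marks the requested page; on a miss, if every cached page is marked, a new phase begins and all marks are cleared, after which a uniformly random unmarked page is evicted.

```python
import random

class MarkingCache:
    """Sketch of the randomized marking algorithm for paging."""

    def __init__(self, k):
        self.k = k
        self.cache = set()   # pages currently in fast memory
        self.marked = set()  # pages marked in the current phase

    def request(self, p):
        """Serve a request for page p; return True on a miss."""
        if p in self.cache:
            self.marked.add(p)
            return False
        if len(self.cache) == self.k:
            # If every cached page is marked, a new phase begins.
            if self.marked == self.cache:
                self.marked = set()
            # Evict a uniformly random unmarked page.
            victim = random.choice(sorted(self.cache - self.marked))
            self.cache.remove(victim)
        self.cache.add(p)
        self.marked.add(p)
        return True
```

Since marked pages are never evicted within a phase, every page requested during a phase stays resident until the phase ends, which is the property the competitive analysis rests on.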
An efficient randomized online algorithm for the paging problem for cache size 2 is given, which is 3/2-competitive against an oblivious adversary. The algorithm keeps track of at most one page in slow memory at any time. A lower bound of 37/24 ≈ 1.5416 is given for the competitiveness of any trackless online algorithm for the same problem, i.e., an algorithm that keeps track of no page outside the cache. (C) 2000 Elsevier Science B.V. All rights reserved.
Splay trees are self-organizing binary search trees that were introduced by Sleator and Tarjan [J. ACM 32 (1985) 652-686]. In this paper we present a randomized variant of these trees. The new algorithm for reorganizing the tree is both simple and easy to implement. We prove that our randomized splaying scheme has the same asymptotic performance as the original deterministic scheme but improves constants in the expected running time. This is interesting in practice because the search time in splay trees is typically higher than the search time in skip lists and AVL-trees. We present a detailed experimental study of our algorithm. On request sequences generated by fixed probability distributions, we can achieve improvements of up to 25% over deterministic splaying. On request sequences that exhibit high locality of reference, the improvements are minor. (C) 2002 Elsevier Science B.V. All rights reserved.
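To illustrate the flavor of randomized self-adjustment, here is a sketch in which each access restructures the tree only with probability p, using simple move-to-root single rotations. Real splay trees, and the paper's scheme, use zig-zig/zig-zag double rotations, so this is an illustration of the idea, not the paper's algorithm.

```python
import random

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key = key
        self.left = self.right = None

def insert(root, key):
    """Plain BST insertion; returns the (possibly new) root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def move_to_root(root, key):
    """Bring `key` to the root by single rotations along the search path
    (a simplification of splaying, which uses double rotations)."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        root.left = move_to_root(root.left, key)
        if root.left.key == key:              # rotate right
            x, root.left = root.left, root.left.right
            x.right = root
            return x
        return root
    else:
        if root.right is None:
            return root
        root.right = move_to_root(root.right, key)
        if root.right.key == key:             # rotate left
            x, root.right = root.right, root.right.left
            x.left = root
            return x
        return root

def access(root, key, p=0.5):
    """Randomized access: restructure only with probability p."""
    if random.random() < p:
        return move_to_root(root, key)
    return root
```

Restructuring only a fraction of accesses is what saves constant factors in practice: rotations are pure overhead on a single access and pay off only through future locality.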
The mechanisms used to improve the reliability of distributed systems often limit performance and scalability. Focusing on one widely-used definition of reliability, we explore the origins of this phenomenon and conclude that it reflects a tradeoff arising deep within the typical protocol stack. Specifically, we suggest that protocol designs often disregard the high cost of infrequent events. When a distributed system is scaled, both the frequency and the overall cost of such events often grow with the size of the system. This triggers an O(n^2) phenomenon, which becomes visible above some threshold sizes. Our findings suggest that it would be more effective to construct large-scale reliable systems where, unlike traditional protocol stacks, lower layers use randomized mechanisms, with probabilistic guarantees, to overcome low-probability events. Reliability and other end-to-end properties are introduced closer to the application. We employ a back-of-the-envelope analysis to quantify this phenomenon for a class of strongly reliable multicast problems. We construct a non-traditional stack, as described above, that implements virtually synchronous multicast. Experimental results reveal that virtual synchrony over a non-traditional, probabilistic stack helps break through the scalability barrier faced by traditional implementations of the protocol. Copyright (C) 2002 John Wiley & Sons, Ltd.
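A toy simulation conveys the kind of randomized lower-layer mechanism suggested above: push gossip, in which every informed node forwards the message to a few peers chosen uniformly at random each round. The parameters here are illustrative, not taken from the paper.

```python
import random

def gossip_rounds(n, fanout, seed=0, max_rounds=1000):
    """Simulate push gossip among n nodes: each round, every informed node
    forwards the message to `fanout` peers chosen uniformly at random
    (duplicates and self-sends are allowed, as in simple push gossip).
    Returns (rounds elapsed, number of nodes informed)."""
    rng = random.Random(seed)
    informed = {0}          # node 0 originates the message
    rounds = 0
    while len(informed) < n and rounds < max_rounds:
        new = set()
        for _ in informed:
            for _ in range(fanout):
                new.add(rng.randrange(n))
        informed |= new
        rounds += 1
    return rounds, len(informed)
```

The informed set roughly multiplies each round, so dissemination completes in O(log n) rounds with high probability, and no single node's cost grows with n; this is the probabilistic guarantee traded against the deterministic delivery of a classical reliable multicast.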
We consider a special case of the weighted caching problem where the weight of every page is either 1 or some fixed number M > 1. We present a randomized algorithm which achieves a competitive ratio of O(log k), where k is the number of pages which can fit in the cache.
We present the first explicit, and currently simplest, randomized algorithm for two-process wait-free test-and-set. It is implemented with two 4-valued single writer single reader atomic variables. A test-and-set takes at most 11 expected elementary steps, while a reset takes exactly 1 elementary step. Based on a finite-state analysis, the proofs of correctness and expected length are compressed into one table.
We further develop the study of testing graph properties as initiated by Goldreich, Goldwasser and Ron. Loosely speaking, given an oracle access to a graph, we wish to distinguish the case when the graph has a pre-determined property from the case when it is "far" from having this property. Whereas they view graphs as represented by their adjacency matrix and measure the distance between graphs as a fraction of all possible vertex pairs, we view graphs as represented by bounded-length incidence lists and measure the distance between graphs as a fraction of the maximum possible number of edges. Thus, while the previous model is most appropriate for the study of dense graphs, our model is most appropriate for the study of bounded-degree graphs. In particular, we present randomized algorithms for testing whether an unknown bounded-degree graph is connected, k-connected (for k > 1), cycle-free and Eulerian. Our algorithms work in time polynomial in 1/ε, always accept the graph when it has the tested property, and reject with high probability if the graph is ε-far from having the property. For example, the 2-connectivity algorithm rejects (with high probability) any N-vertex d-degree graph for which more than ε·dN edges need to be added in order to make the graph 2-edge-connected. In addition we prove lower bounds of Ω(√N) on the query complexity of testing algorithms for the bipartite and expander properties.
Computing the Delaunay triangulation of n points usually requires Ω(n log n) operations, but in some special cases where additional knowledge is provided, faster algorithms can be designed. Given two sets of points, we prove that, if the Delaunay triangulation of all the points is known, the Delaunay triangulation of each set can be computed in randomized expected linear time.
The objective of this paper is twofold. First, the problem of generation of real random matrix samples with uniform distribution in structured (spectral) norm bounded sets is studied. This includes an analysis of the distribution of the singular values of uniformly distributed real matrices, and an efficient (i.e. polynomial-time) algorithm for their generation. Second, it is shown how the developed techniques may be used to solve in a probabilistic setting several hard problems involving systems subject to real structured uncertainty. (C) 2002 Elsevier Science Ltd. All rights reserved.
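A naive baseline for the first problem is rejection sampling from the enclosing cube, shown here for 2x2 real matrices. It yields exactly uniform samples on the spectral-norm ball, since the ball lies inside the cube, but its acceptance rate decays rapidly with the matrix dimensions, which is precisely what motivates a polynomial-time generator.

```python
import math
import random

def spectral_norm_2x2(A):
    """Largest singular value of a 2x2 matrix: the square root of the
    largest eigenvalue of A^T A, computed in closed form."""
    (a11, a12), (a21, a22) = A
    # Entries of the symmetric 2x2 matrix A^T A = [[p, q], [q, r]].
    p = a11 * a11 + a21 * a21
    q = a11 * a12 + a21 * a22
    r = a12 * a12 + a22 * a22
    lam_max = (p + r) / 2 + math.sqrt(((p - r) / 2) ** 2 + q * q)
    return math.sqrt(lam_max)

def sample_norm_ball_2x2(rng):
    """Uniform sample from {A in R^{2x2} : ||A||_2 <= 1} by rejection from
    the cube [-1, 1]^4.  Valid because |a_ij| <= ||A||_2 puts the ball
    inside the cube; inefficient in higher dimensions."""
    while True:
        A = ((rng.uniform(-1, 1), rng.uniform(-1, 1)),
             (rng.uniform(-1, 1), rng.uniform(-1, 1)))
        if spectral_norm_2x2(A) <= 1:
            return A
```

The paper's contribution can be read against this baseline: the same uniform distribution, but generated in time polynomial in the matrix dimensions.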
We introduce a new class of dynamic graph algorithms called quasi-fully dynamic algorithms, which are much more general than backtracking algorithms and are much simpler than fully dynamic algorithms. These algorithms are especially suitable for applications in which a certain core connected portion of the graph remains fixed, and fully dynamic updates occur on the remaining edges in the graph. We present very simple quasi-fully dynamic algorithms with O(log n) worst-case time per operation for edge connectivity and O(log n) amortized time per operation for cycle equivalence. The former is deterministic while the latter is Monte-Carlo-type randomized. For 2-vertex connectivity, we give a deterministic quasi-fully dynamic algorithm with O(log^3 n) amortized time per operation. The quasi-fully dynamic algorithm we present for cycle equivalence (which has several applications in optimizing compilers) is of special interest since the algorithm is quite simple, and no special-purpose incremental or backtracking algorithm is known for this problem.