ISBN (print): 9781581134858
This paper presents a tight lower bound on the time complexity of indulgent consensus algorithms, i.e., consensus algorithms that use unreliable failure detectors. We state and prove our tight lower bound in the unifying framework of round-by-round fault detectors. We show that any ◇P-based t-resilient consensus algorithm requires at least t + 2 rounds for a global decision, even in runs that are synchronous. We then prove the bound to be tight by exhibiting a new ◇P-based t-resilient consensus algorithm that reaches a global decision at round t + 2 in every synchronous run. Our new algorithm is in this sense significantly faster than the most efficient indulgent algorithm we knew of (which requires 2t + 2 rounds). We contrast our lower bound with the well-known t + 1 round tight lower bound on consensus for the synchronous model, pointing out the price of indulgence.
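The t + 1 round synchronous bound contrasted above comes from the classic flooding algorithm for crash faults. A toy sequential simulation of that round structure (a minimal sketch under a simplified crash model, not the failure-detector-based algorithm of this paper; all function names are hypothetical):

```python
# Synchronous flooding consensus under crash faults: each live process
# rebroadcasts the set of proposals it knows every round; after t + 1
# rounds all correct processes hold the same set and decide min().
def flooding_consensus(proposals, crash_round):
    """proposals: pid -> value; crash_round: pid -> round it crashes (None = correct).
    A process that crashes at round r sends nothing from round r onward."""
    t = sum(1 for r in crash_round.values() if r is not None)
    known = {p: {v} for p, v in proposals.items()}
    for rnd in range(1, t + 2):  # rounds 1 .. t+1
        msgs = [set(known[p]) for p in known
                if crash_round[p] is None or crash_round[p] > rnd]
        for p in known:
            if crash_round[p] is not None and crash_round[p] <= rnd:
                continue  # crashed processes take no further steps
            for s in msgs:
                known[p] |= s
    correct = [p for p in known if crash_round[p] is None]
    return {p: min(known[p]) for p in correct}
```

With at most t crashes there is at least one crash-free round among the t + 1, after which all correct processes hold identical sets, so their decisions agree.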
This paper shows how to implement a trusted network file system on an untrusted server. While cryptographic storage techniques exist that allow users to keep data secret from untrusted servers, this work concentrates on the detection of tampering attacks and stale data. Ideally, users of an untrusted storage server would immediately and unconditionally notice any misbehavior on the part of the server. This ideal is unfortunately not achievable. However, we define a notion of data integrity called fork consistency in which, if the server delays just one user from seeing even a single change by another, the two users will never again see one another's changes, a failure easily detectable with on-line communication. We give a practical protocol for a multi-user network file system called SUNDR, and prove that SUNDR offers fork consistency whether or not the server obeys the protocol.
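The on-line detection step can be illustrated with a toy hash-chain check: once a server forks two users' views, their operation histories are no longer prefixes of one another, which any out-of-band comparison exposes. This is only an illustrative sketch of the fork-detection idea, not SUNDR's actual protocol; all names are hypothetical.

```python
import hashlib

def chain(ops):
    """Hash-chain a list of operation strings; return the digest after each op."""
    h, out = b"", []
    for op in ops:
        h = hashlib.sha256(h + op.encode()).digest()
        out.append(h)
    return out

def consistent(hist_a, hist_b):
    """Two views are fork-free iff one hash chain is a prefix of the other."""
    a, b = chain(hist_a), chain(hist_b)
    short, long_ = (a, b) if len(a) <= len(b) else (b, a)
    return long_[:len(short)] == short
```

If user A's history is `["w1", "w2"]` and user B's is `["w1", "w2", "w3"]`, the views are consistent (A is merely behind); if B instead saw `["w1", "w3"]`, the chains diverge and the fork is detected.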
We present an N-process local-spin mutual exclusion algorithm, based on nonatomic reads and writes, in which each process performs Θ(log N) remote memory references to enter and exit its critical section. No atomic read/write algorithm with better asymptotic worst-case time complexity is currently known. This suggests that atomic memory is not fundamentally required if one is interested in worst-case time complexity. The same cannot be said if one is interested in fast-path or adaptive algorithms. We show that such algorithms fundamentally require memory accesses to be atomic. In particular, we show that for any N-process nonatomic algorithm, there exists a single-process execution in which the lone competing process executes Ω(log N/log log N) remote operations to enter its critical section. Moreover, these operations must access Ω(√log N/log log N) distinct variables.
We consider the problem of wait-free implementation of a multi-writer snapshot object with m ≥ 2 components shared by n > m processes. It is known that this can be done using m multi-writer registers. We give a matching lower bound, slightly improving the previous space lower bound. The main focus of the paper, however, is on time complexity. The best known upper bound on the number of steps a process has to take to perform one operation of the snapshot is O(n). When m is much smaller than n, an implementation whose time complexity is a function of m rather than n would be better. We show that this cannot be achieved for any space-optimal implementation: We prove that Ω(n) steps are required to perform a SCAN operation in the worst case, even if m = 2. This significantly improves previous Ω(min(m, n)) lower bounds. Our proof also yields insight into the structure of any space-optimal implementation, showing that processes simulating the snapshot operations must access the registers in a very constrained way.
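For context on the object being bounded, the classic snapshot SCAN is the "double collect": read all m components twice and accept the first collect if nothing changed in between. A toy sequential model of that technique (an illustrative sketch, not a wait-free or space-optimal implementation; class and method names are hypothetical):

```python
# Toy model of a multi-writer snapshot object with the "double collect" scan.
# Each register pairs a sequence number with a value so a scan can detect
# that some component was updated between its two collects.
class Snapshot:
    def __init__(self, m):
        self.reg = [(0, None)] * m  # (sequence number, value) per component

    def update(self, i, value):
        seq, _ = self.reg[i]
        self.reg[i] = (seq + 1, value)

    def scan(self):
        while True:
            first = list(self.reg)   # first collect
            second = list(self.reg)  # second collect
            if first == second:      # no interleaved update: clean snapshot
                return [v for _, v in first]
```

Under true concurrency this loop is not wait-free (a scan can be starved by updates), which is one reason bounded-step implementations such as the O(n) one cited above are nontrivial.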
Today, the utility of many replicated Internet services is limited by availability rather than raw performance. To better understand the effects of replica placement on availability, we propose the problem of minimal replication cost for availability. Let replication cost be the cost associated with replica deployment, dynamic replica creation and teardown at n candidate locations. Given client access patterns, replica failure patterns, network partition patterns, a required consistency level and a target level of availability, the minimal replication cost is the lower bound on a system's replication cost. Solving this problem also answers the dual question of optimal availability given a constraint on replication cost. We design the first algorithm we are aware of to solve the problem, through reduction to integer linear programming and enumeration of pruned serialization orders. Using practical faultloads and workloads, we demonstrate that the exponential complexity of our algorithm is tractable for practical problems with hundreds of candidate locations. The lower bound computed by our algorithm is tight, but the tightness can be sacrificed by a proposed optimization for large problems. We also show that with low replica creation and teardown costs, the bound is close to tight in practical problems even with the optimization.
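The flavor of the optimization can be shown with a heavily simplified brute force: pick the cheapest set of candidate sites whose availability meets a target, assuming independent site failures and ignoring consistency, partitions, and dynamic replica creation, all of which the paper's formulation handles. Function and parameter names are hypothetical.

```python
from itertools import combinations

def min_replication_cost(costs, fail_probs, target_avail):
    """Cheapest replica set with availability >= target, where availability
    is the probability at least one replica is up (independent failures).
    Returns None if no subset meets the target."""
    sites, best = range(len(costs)), None
    for k in range(1, len(costs) + 1):
        for subset in combinations(sites, k):
            unavail = 1.0
            for s in subset:
                unavail *= fail_probs[s]  # all chosen replicas down
            if 1.0 - unavail >= target_avail:
                cost = sum(costs[s] for s in subset)
                if best is None or cost < best:
                    best = cost
    return best
```

The exhaustive enumeration mirrors, in miniature, why the paper's exact algorithm has exponential worst-case complexity yet remains tractable when pruning keeps the search small.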
We consider the problem of designing an overlay network and routing mechanism that permits finding resources efficiently in a peer-to-peer system. We argue that many existing approaches to this problem can be modeled as the construction of a random graph embedded in a metric space whose points represent resource identifiers, where the probability of a connection between two nodes depends only on the distance between them in the metric space. We study the performance of a peer-to-peer system where nodes are embedded at grid points in a simple metric space: a one-dimensional real line. We prove upper and lower bounds on the message complexity of locating particular resources in such a system, under a variety of assumptions about failures of either nodes or the connections between them. Our lower bounds in particular show that the use of inverse power-law distributions in routing, as suggested by Kleinberg [5], is close to optimal. We also give heuristics to efficiently maintain a network supporting efficient routing as nodes enter and leave the system. Finally, we give some experimental results that suggest promising directions for future work.
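The model described above can be sketched concretely: nodes on a line keep their immediate neighbors plus one long-range contact drawn with probability proportional to the inverse of the distance, and routing greedily forwards to the neighbor closest to the target. This is a minimal simulation of that random-graph construction, not the paper's protocol; names are hypothetical.

```python
import random

def build_links(n, rng):
    """Each node links to its line neighbors plus one long-range contact
    chosen with probability proportional to 1/distance (inverse power law)."""
    links = {}
    for u in range(n):
        nbrs = {v for v in (u - 1, u + 1) if 0 <= v < n}
        cands = [v for v in range(n) if v != u]
        weights = [1 / abs(u - v) for v in cands]
        nbrs.add(rng.choices(cands, weights=weights)[0])
        links[u] = nbrs
    return links

def greedy_route(links, src, dst):
    """Forward to the known neighbor closest to the target; return hop count.
    The +/-1 line links guarantee strict progress, so this terminates."""
    hops, cur = 0, src
    while cur != dst:
        cur = min(links[cur], key=lambda v: abs(v - dst))
        hops += 1
    return hops
```

Because every node has both line neighbors, each hop strictly decreases the distance to the target, so the hop count never exceeds the line distance; the long-range links are what bring the expected cost down toward the polylogarithmic bounds discussed above.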
Finding a maximal or maximum matching in a graph is a well-understood problem for which efficient sequential algorithms exist. Applications of matchings in distributed settings make it desirable to find self-stabilizing asynchronous distributed solutions to these problems. We first present a self-stabilizing algorithm for finding a maximal matching in a general anonymous network under read/write atomicity with linear round complexity. This is followed by a self-stabilizing algorithm, with quadratic time complexity, for finding a maximum matching in a bipartite network under composite atomicity. These results represent significant progress in the area of distributed algorithms for matchings.
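As a point of reference for the sequential case mentioned above, a maximal matching (one that cannot be extended, though possibly not maximum) is obtainable by a single greedy pass. A minimal sketch, not the distributed self-stabilizing algorithms of the paper:

```python
def maximal_matching(edges):
    """Greedy maximal matching: keep each edge whose endpoints are both
    still unmatched. The result admits no further edge, but a different
    matching may contain more edges (maximal, not necessarily maximum)."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched |= {u, v}
    return matching
```

The distributed difficulty the paper addresses is recovering this invariant from an arbitrary starting state under read/write atomicity, where no process sees the whole edge list.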
Protecting agents from host attacks is a pressing security concern in networked environments supporting mobile agents. In this paper, we consider a black hole: a highly harmful host that disposes of visiting agents upon their arrival, leaving no observable trace of such a destruction. The task to identify the location of the harmful host is clearly dangerous for the searching agents. We study under what conditions and at what cost a team of autonomous asynchronous mobile agents can successfully accomplish this task; we are concerned with solutions that are generic (i.e., topology-independent). We study the size of the optimal solution (i.e., the minimum number of agents needed to locate the black hole), and the cost of the minimal solution (i.e., the number of moves performed by the agents executing a size-optimal solution protocol). We establish tight bounds on size and cost depending on the a priori knowledge the agents have about the network, and on the consistency of the local labellings. In particular, we prove that: with topological ignorance Δ + 1 agents are needed and suffice, and the cost is Θ(n²), where Δ is the maximal degree of a node and n is the number of nodes in the network; with topological ignorance but in the presence of sense of direction only two agents suffice and the cost is Θ(n²); and with complete topological knowledge only two agents suffice and the cost is Θ(n log n). All the upper-bound proofs are constructive.
The second edition of The Manual [23] begins: 'LaTeX is a system for typesetting documents. Its first widely available version, mysteriously numbered 2.09, appeared in 1985.' It is too early for a complete critical assessment of the impact of LaTeX 2.09, because its world-wide effects on many aspects of many cultures, not least scientific publication, remain strong after 15 years; that itself is significant in a technological world where a mere 15 months of fame can make and break an idea. Therefore this paper provides simply a review and evaluation of the relationship between TeX, LaTeX, and some of the major technical developments in the world of quality automated formatting since the publication of LaTeX 2.09 in 1985. It is neither definitive nor comprehensive, but I hope it is informative.