This paper handles the problem of designing a deterministic dictionary structure in a distributed system. The structure is required to be compact (namely, store only a single copy of each data item) and memory balance...
Complex programs are often required to display graceful degradation, reacting adaptively to changes in the environment. Under ideal circumstances, the program's behavior satisfies a set of application-dependent constraints. In the presence of events such as failures, timing anomalies, synchronization conflicts, or security breaches, certain constraints may become difficult or impossible to satisfy, and the application designer may choose to relax them as long as the resulting behavior is sufficiently "close" to the preferred behavior. This paper describes the relaxation lattice method, a new approach to specifying graceful degradation for a large class of programs. A relaxation lattice is a lattice of specifications parameterized by a set of constraints, where the stronger the set of constraints, the more restrictive the specification. While a program is able to satisfy its strongest set of constraints, it satisfies its preferred specification, but if changes to the environment force it to satisfy a weaker set, then it will permit additional "weakly consistent" computations which are undesired but tolerated. The use of relaxation lattices is illustrated by specifications for programs that tolerate 1) faults, such as site crashes and network partitions, 2) timing anomalies, such as attempting to read a value "too soon" after it was written, 3) synchronization conflicts, such as choosing the oldest "unlocked" item from a queue, and 4) security breaches, such as acquiring unauthorized capabilities. A preliminary version of this paper appeared in the proceedings of the Sixth ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing [17].
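The synchronization-conflict example from the abstract (choosing the oldest "unlocked" item from a queue) can be sketched as a small relaxation lattice: the dequeue specification is parameterized by a constraint set, and dropping a constraint admits additional, tolerated behaviors. All names here are illustrative, not from the paper.

```python
# Hypothetical sketch of a relaxation lattice: a stronger (larger)
# constraint set yields a more restrictive specification.

class RelaxedQueue:
    """A queue whose dequeue spec degrades gracefully: under the full
    constraint set it must return the oldest item; if 'strict_fifo' is
    dropped (e.g. the oldest item is locked), the weaker specification
    tolerates returning the oldest *unlocked* item instead."""

    def __init__(self):
        self.items = []          # (value, locked) pairs, oldest first

    def enqueue(self, value, locked=False):
        self.items.append((value, locked))

    def dequeue(self, constraints):
        for i, (value, locked) in enumerate(self.items):
            if not locked:
                if i == 0 or "strict_fifo" not in constraints:
                    del self.items[i]
                    return value
                # oldest item is locked and strict FIFO is still required
                raise RuntimeError("blocked: cannot satisfy strict_fifo")
        raise RuntimeError("no unlocked item available")

q = RelaxedQueue()
q.enqueue("a", locked=True)
q.enqueue("b")
# Preferred (strongest) spec: must take the oldest item, so it blocks.
# Relaxed spec: the oldest unlocked item is a tolerated computation.
print(q.dequeue(constraints=set()))   # prints b
```

The lattice structure comes from set inclusion on constraint sets: every behavior allowed under `{"strict_fifo"}` is also allowed under the empty set, but not vice versa.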
Interaction is a fundamental notion in computation. In many cases, important properties are revealed by considering the interactive aspects of problems. In this work, we examine the power and limitations of interactio...
Idle workstations in a network represent a significant computing potential. In particular, their processing power can be used by parallel-distributed programs that treat the network as a loosely-coupled multiprocessor...
This conference proceedings contains 27 papers. The main subjects are shared memory multiprocessors, achievement of causal consistency, semantics of distributed services, storage management, message-passing systems, clock synchronization algorithms, high speed network control, analysis of communication protocols, reasoning about probabilistic algorithms, and totally asynchronous systems.
ISBN (print): 9780897914048
We describe the control protocols of the PARIS experimental network. This high bandwidth network for integrated communication (data, voice, video) is currently operational as a laboratory prototype. It will also be deployed within the AURORA Testbed that is part of the NSF/DARPA Gigabit Networking program. The high bandwidth dictates the need for specialized hardware to support faster packet handling and control protocols. A new network control architecture is presented which exploits the specialized hardware in order to support the expected real time needs of future traffic. In particular, since control information can be distributed quickly, decisions can be made based upon more complete and accurate information. In some respects, this has the effect of having the benefits of centralized control (e.g. easier bandwidth resource allocation to connections), while retaining the fault-tolerance and scalability of a distributed architecture.
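The claimed benefit of fast control-information distribution can be illustrated with a toy admission-control sketch (not the PARIS protocol itself): each node keeps a near-complete replicated view of link utilizations and decides locally, approximating centralized bandwidth allocation. Topology, capacities, and names are invented for illustration.

```python
# Illustrative sketch: admission decisions from a replicated, up-to-date
# view of link utilization (assumed to be kept accurate by fast broadcast
# of control updates, as the abstract describes).

LINK_CAPACITY = {("A", "B"): 100, ("B", "C"): 100}   # Mb/s, assumed topology
utilization = {link: 0 for link in LINK_CAPACITY}    # replicated at each node

def admit(route, bandwidth):
    """Admit a connection only if every link on its route has spare
    capacity; on success, reserve the bandwidth (a real system would then
    broadcast the update so other nodes' copies stay accurate)."""
    if all(utilization[l] + bandwidth <= LINK_CAPACITY[l] for l in route):
        for l in route:
            utilization[l] += bandwidth
        return True
    return False

assert admit([("A", "B"), ("B", "C")], 60)
assert not admit([("A", "B")], 50)   # only 40 Mb/s left on A-B
assert admit([("B", "C")], 40)
```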
ISBN (print): 9780897914048
Processes in concurrent logic programs communicate and synchronize using shared single-assignment variables. Communication is performed via unification operations, while synchronization is performed through input matching. The more expressive concurrent logic languages employ atomic unification as the basic communication primitive. Atomic unification can be thought of as an atomic transaction that either fails with no trace or succeeds in writing on all of the necessary writable variables occurring in the unification instance. This paper presents a distributed variable server algorithm that can be used as the key component in a distributed implementation of concurrent logic languages with atomic unification. The variable server provides an abstraction in which processes can act as if all of the shared variables are local. It thus allows programs to be written for a shared memory model and executed in a system in which memory is distributed. We rigorously specify the requirements the variable server must fulfill, present an algorithm satisfying the specification, and prove its correctness with respect to the specification. The algorithm has been implemented for the language Flat Concurrent Prolog (FCP) and is incorporated in the distributed version of the Logix system.
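The all-or-nothing semantics of atomic unification described above can be sketched as a transaction over single-assignment variables: either every writable variable in the instance is written, or the attempt fails with no trace. The details here are invented for illustration; the paper's distributed variable-server algorithm is far more involved.

```python
# Hypothetical sketch of atomic unification as an all-or-nothing
# transaction over single-assignment logic variables.

import threading

class Var:
    """A single-assignment logic variable."""
    def __init__(self):
        self.value = None
        self.bound = False
        self.lock = threading.Lock()

def atomic_unify(bindings):
    """Bind every (var, value) pair atomically: either all writable
    variables are written, or the attempt fails with no trace."""
    ordered = sorted(bindings, key=id)    # fixed lock order avoids deadlock
    acquired = []
    try:
        for v in ordered:
            v.lock.acquire()
            acquired.append(v)
        # fail, with no writes, if any already-bound variable conflicts
        for v, val in bindings.items():
            if v.bound and v.value != val:
                return False
        for v, val in bindings.items():
            v.value, v.bound = val, True
        return True
    finally:
        for v in acquired:
            v.lock.release()

x, y = Var(), Var()
assert atomic_unify({x: 1, y: 2})
assert not atomic_unify({x: 3, y: 4})   # x already bound to 1: no trace left
assert y.value == 2
```

In the paper's setting the variables are distributed; the variable server's job is to make this shared-memory picture hold even though the variables live on different machines.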
The main contribution of this paper is the Cancelback Protocol, an extension of the Time Warp mechanism that handles storage management. It includes both fossil collection, the recovery of storage for messages and states that can never again influence the computation, and cancelback, the recovery of storage assigned to messages and states at times so far in the future that their memory would be better used for more immediate purposes. It guarantees that Time Warp is optimal in its storage requirements when run in shared memory, i.e. Time Warp will successfully complete a simulation using no more space than it would take to execute the same simulation with the sequential event list algorithm. This is better by a factor of two than the only previously published result. Without this protocol (or equivalent) Time Warp's behavior can be unstable; hence it should be considered an essential part of the Time Warp mechanism, rather than simply a refinement. In addition we also prove that asynchronous conservative algorithms, including all of the Chandy-Misra-Bryant (CMB) mechanisms, are not optimal; they cannot necessarily execute a simulation in the same amount of space as a sequential execution. In some cases a simulation requiring space n+k when executed sequentially might require O(nk) space when executed on n processors by CMB.
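The two reclamation rules named in the abstract can be sketched in a few lines (illustrative names only, not the paper's protocol): fossil collection discards items older than Global Virtual Time (GVT), while cancelback evicts the farthest-future item under memory pressure.

```python
# Toy sketch of the two storage-reclamation rules described above.
# Items are (timestamp, payload) pairs held by a Time Warp process.

def fossil_collect(items, gvt):
    """Discard anything strictly earlier than GVT: such messages and
    states can never again influence the computation."""
    return [(t, p) for (t, p) in items if t >= gvt]

def cancelback(items):
    """Under memory pressure, remove and return (send back) the item
    farthest in the future, freeing storage for more immediate work."""
    victim = max(items, key=lambda tp: tp[0])
    items.remove(victim)
    return victim

buf = [(5, "s5"), (12, "m12"), (40, "m40"), (3, "s3")]
buf = fossil_collect(buf, gvt=5)        # drops the item at timestamp 3
assert buf == [(5, "s5"), (12, "m12"), (40, "m40")]
assert cancelback(buf) == (40, "m40")   # farthest-future item is reclaimed
```

The optimality claim is that, in shared memory, applying these rules lets Time Warp finish within the space a sequential event-list execution would use.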
ISBN (print): 089791404X
To provide high availability for services such as mail or bulletin boards, data must be replicated. One way to guarantee consistency of replicated data is to force service operations to occur in the same order at all sites, but this approach is expensive. In this paper, we propose lazy replication as a way to preserve consistency by exploiting the semantics of the service's operations to relax the constraints on ordering. Three kinds of operations are supported: operations for which the clients define the required order dynamically during the execution, operations for which the service defines the order, and operations that must be globally ordered with respect to both client ordered and service ordered operations. The method performs well in terms of response time, amount of stored state, number of messages, and availability. It is especially well suited to applications in which most operations require only the client-defined order.
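The client-defined ordering case, which the abstract says covers most operations, can be sketched as dependency-gated lazy application at a replica: each operation names the operations it depends on and is applied only once those have been applied. The API below is invented for illustration.

```python
# Minimal sketch of lazy replication's client-defined ordering: a replica
# buffers an operation until all of its declared dependencies have been
# applied, then executes it. Names and structure are illustrative.

from collections import deque

class Replica:
    def __init__(self):
        self.applied = set()    # ids of operations already executed
        self.pending = deque()  # operations waiting on dependencies
        self.log = []           # execution order at this replica

    def receive(self, op_id, deps, payload):
        self.pending.append((op_id, frozenset(deps), payload))
        self._drain()

    def _drain(self):
        progress = True
        while progress:
            progress = False
            for op in list(self.pending):
                op_id, deps, payload = op
                if deps <= self.applied:
                    self.pending.remove(op)
                    self.log.append(payload)
                    self.applied.add(op_id)
                    progress = True

r = Replica()
r.receive("b", deps={"a"}, payload="deliver mail")   # waits for "a"
r.receive("a", deps=set(), payload="create mailbox")
assert r.log == ["create mailbox", "deliver mail"]
```

Service-ordered and globally ordered operations would need extra coordination among replicas; only the cheap, common client-ordered case is shown here.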
Programming nonshared memory systems is more difficult than programming shared memory systems, since there is no support for shared data structures. Current programming languages for distributed memory architectures f...