ISBN: (Print) 9780897919524
This paper describes a new scheme for guaranteeing that transactions in a client/server system observe consistent state while they are running. The scheme is presented in conjunction with an optimistic concurrency control algorithm, but it could also be used to prevent read-only transactions from conflicting with read/write transactions in a multi-version system. The scheme is lazy both about the consistency it provides for running transactions and in the way it generates the consistency information. The paper presents results of simulation experiments showing that the cost of the scheme is negligible. The scheme uses multipart timestamps to inform nodes about information they need to know. Today the utility of such schemes is limited because timestamp size is proportional to system size, and therefore the schemes do not scale to very large systems. We show how to solve this problem. Our multipart timestamps are based on real rather than logical clocks; we assume clocks in the system are loosely synchronized. Clocks allow us to keep multipart timestamps small with minimal impact on performance: we remove old information that is likely to be known while retaining recent information. Only performance, not correctness, is affected if clocks get out of sync.
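As a rough illustration of the truncation idea, here is a minimal Python sketch (the names and the 60-second threshold are our own assumptions, not from the paper): each multipart timestamp maps node ids to real-clock times, merging takes the elementwise maximum, and old entries are dropped on the assumption that other nodes already know about them.

```python
import time

TRUNCATE_AFTER = 60.0  # seconds; entries older than this are assumed known

class MultipartTimestamp:
    """Hypothetical clock-based multipart timestamp: node id -> real time."""

    def __init__(self, entries=None):
        self.entries = dict(entries or {})

    def merge(self, other):
        """Combine knowledge on message receipt: elementwise maximum."""
        for node, t in other.entries.items():
            if t > self.entries.get(node, 0.0):
                self.entries[node] = t

    def truncate(self, now=None):
        """Drop old entries that other nodes have likely seen already.
        Only performance, not correctness, depends on this heuristic."""
        now = time.time() if now is None else now
        self.entries = {n: t for n, t in self.entries.items()
                        if now - t < TRUNCATE_AFTER}

    def dominates(self, other):
        """True if self reflects at least everything other reflects."""
        return all(self.entries.get(n, 0.0) >= t
                   for n, t in other.entries.items())

a = MultipartTimestamp({"srv1": 100.0, "srv2": 40.0})
a.merge(MultipartTimestamp({"srv2": 90.0}))
a.truncate(now=155.0)   # srv2's entry (65s old) is dropped as likely known
print(a.entries)        # {'srv1': 100.0}
```

Truncation keeps the timestamp size roughly proportional to the number of recently active nodes rather than to system size, which is what lets such a scheme scale.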
Lack of object code compatibility in VLIW architectures is a severe limit to their adoption as a general-purpose computing paradigm. Previous approaches include hardware and software techniques, both of which have drawbacks. Hardware techniques add to the complexity of the architecture, whereas software techniques require multiple executables. This paper presents a technique called Dynamic Rescheduling that applies software techniques dynamically, using intervention by the OS: at each first-time page fault, the page of code is rescheduled for the new generation, if required. Results are presented to demonstrate the viability of the technique using the Illinois IMPACT compiler and the TINKER architectural framework. For the machine models and the workloads used in this study, the performance of the rescheduled code compares well with that of code natively scheduled for a machine. A subset of programs in the workload faces a large number of first-time page faults, so their rescheduling overhead is high relative to their total execution time; such programs are called high-overhead programs. Caching of translated pages across multiple invocations of a program, using a persistent rescheduled-page cache (PRC), is discussed as a way to reduce this overhead. For the workload used in this evaluation, a PRC of between 512 and 1024 pages with an overhead-based page replacement policy would be effective in reducing the overhead.
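The PRC interaction can be sketched as follows (a toy Python model of our own; the page-fault hook, cost accounting, and eviction details are assumptions rather than the TINKER implementation): on a first-time page fault the OS either reuses a cached rescheduled page or reschedules and caches it, and an overhead-based replacement policy evicts the page that was cheapest to reschedule, since it is also cheapest to reproduce.

```python
import heapq

PRC_CAPACITY = 1024  # the paper found 512-1024 pages effective

class PRC:
    """Persistent rescheduled-page cache with overhead-based replacement."""

    def __init__(self, capacity=PRC_CAPACITY):
        self.capacity = capacity
        self.pages = {}   # page id -> rescheduled code
        self.costs = []   # min-heap of (rescheduling cost, page id)

    def lookup(self, page_id):
        return self.pages.get(page_id)

    def insert(self, page_id, code, cost):
        if len(self.pages) >= self.capacity:
            # Overhead-based replacement: evict the cheapest-to-redo page.
            _, victim = heapq.heappop(self.costs)
            self.pages.pop(victim, None)
        self.pages[page_id] = code
        heapq.heappush(self.costs, (cost, page_id))

def on_first_page_fault(page_id, old_code, prc, reschedule):
    """OS intervention point at a first-time page fault: reuse a cached
    translation if one survives from an earlier invocation, otherwise
    reschedule the page for the new machine generation and cache it."""
    cached = prc.lookup(page_id)
    if cached is not None:
        return cached
    new_code, cost = reschedule(old_code)  # compiler-style rescheduling pass
    prc.insert(page_id, new_code, cost)
    return new_code

prc = PRC(capacity=2)
code = on_first_page_fault(7, b"old", prc, lambda c: (b"new", 3))
```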
Computer modeling provides a tool to assess the contribution of sub-cellular structure and physiology to normal and pathological cardiac conduction. Previous simulations have been too computationally demanding to model macroscopic tissue sections while retaining the sub-cellular resolution of interest. A new method for numerical simulation of cardiac electrical activity was developed which allows larger and more complex models to be examined. Sub-cellular resolution and physiology are incorporated into the cable model of cardiac tissue, which assumes a network of passive resistances linking ionic and capacitive membrane elements. The method adapts the model for three-dimensional (3D) simulations in high-performance parallel or vector computing environments. Our data demonstrate the numerical convergence of the technique and its validation against experiments and previous computer models.
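To make the cable-model setup concrete, the following self-contained Python sketch steps a 1D cable equation with an explicit finite-difference scheme; a FitzHugh-Nagumo-style ionic current stands in for the paper's detailed sub-cellular physiology, and all parameter values are illustrative assumptions of ours.

```python
import numpy as np

# Dimensionless, illustrative parameters (our assumptions).
nx, dt, dx, D, Cm = 200, 0.1, 1.0, 1.0, 1.0   # dt*D/dx**2 = 0.1: stable

def i_ion(v, w, a=0.13, eps=0.01, b=0.5):
    """FitzHugh-Nagumo-style stand-in for detailed membrane physiology:
    a cubic excitation current plus a slow recovery variable w."""
    return v * (v - a) * (v - 1.0) + w, eps * (v - b * w)

v = np.zeros(nx)
w = np.zeros(nx)
v[:10] = 1.0                       # stimulate one end of the fiber

for _ in range(2000):
    lap = np.zeros(nx)             # discrete Laplacian (crude sealed ends)
    lap[1:-1] = v[2:] - 2.0 * v[1:-1] + v[:-2]
    cur, dw = i_ion(v, w)
    # Passive resistive coupling between ionic/capacitive membrane patches.
    v += dt * (D * lap / dx**2 - cur) / Cm
    w += dt * dw

print(f"peak V = {v.max():.2f} at node {int(v.argmax())}")
```

Refining dx while observing an unchanged propagating waveform is the kind of numerical-convergence check the abstract refers to.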
Consensus protocols are used in a variety of reliable distributed systems, including both safety-critical and business-critical applications. The correctness of a consensus protocol is usually shown by making assumptions about the environment in which it executes and then proving properties about the protocol. But proofs about a protocol's behavior are only as good as the assumptions made to obtain them, and violation of these assumptions can lead to unpredicted and serious consequences. We present a new approach for the probabilistic verification of synchronous round-based consensus protocols. In doing so, we make stochastic assumptions about the environment in which a protocol operates and derive probabilities of proper and improper behavior. We can thus account for the violation of assumptions made in traditional proof techniques. To obtain the desired probabilities, the approach enumerates the states that can be reached during an execution of the protocol and computes the probability of achieving the desired properties for a given fault and network environment. We illustrate the use of this approach by evaluating a simple consensus protocol operating in a realistic environment that includes performance, omission, and crash failures.
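The enumeration idea can be illustrated with a deliberately tiny example (ours, far simpler than the protocols the paper targets): a coordinator broadcasts a value in one synchronous round, each message is omitted independently with some probability, and we enumerate all omission patterns to compute the probability that every receiver decides the coordinator's value.

```python
from itertools import product

def p_agreement(n, p_omit, value=1, default=0):
    """Enumerate every omission pattern of one broadcast round and sum the
    probability of the patterns in which all n receivers decide `value`."""
    total = 0.0
    for pattern in product([True, False], repeat=n):   # True = omitted
        prob = 1.0
        decisions = []
        for omitted in pattern:
            prob *= p_omit if omitted else (1.0 - p_omit)
            decisions.append(default if omitted else value)
        if all(d == value for d in decisions):
            total += prob
    return total

print(p_agreement(n=5, p_omit=0.01))   # = (1 - 0.01)**5 ~= 0.951
```

A real evaluation would enumerate multi-round protocol states and mix omission with crash and performance (late-message) failures, but the structure is the same: weight each reachable state by its probability and sum over the states satisfying the property.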
Clusters of workstations, connected by a fast network, are emerging as a viable architecture for building high-throughput fault-tolerant servers. This type of architecture is more scalable and more cost-effective than a tightly coupled multiprocessor and may achieve comparable throughput. We explore several combinations of fault-tolerance (FT) and load-balancing (LB) schemes and compare their impact on the maximum throughput achievable by the system and on its survivability. In particular, we show that the FT scheme affects the throughput of the system, while the LB scheme affects the system's ability to overcome failures. We study the scalability of the different schemes under different loads and failure conditions. Our simulations take into consideration the overhead of each scheme, the network contention, and the resource loads.
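A toy Python model of our own (not the paper's simulator) shows the interaction being studied: requests are dispatched by a least-loaded LB policy, and when a node crashes the FT scheme shifts its queued work to a backup, so the LB choice determines how well the cluster absorbs the extra load.

```python
N_NODES, N_REQUESTS, FAIL_AT = 4, 10_000, 5_000
loads = [0] * N_NODES
alive = [True] * N_NODES

def least_loaded():
    """LB scheme: pick the least-loaded live node."""
    return min((i for i in range(N_NODES) if alive[i]),
               key=lambda i: loads[i])

for req in range(N_REQUESTS):
    if req == FAIL_AT:              # crash node 0 mid-run
        alive[0] = False
        backup = least_loaded()     # FT scheme: a backup absorbs its work
        loads[backup] += loads[0]
        loads[0] = 0
    loads[least_loaded()] += 1      # dispatch one unit of work

print("per-node load on survivors:",
      [loads[i] for i in range(N_NODES) if alive[i]])
```

Swapping in a round-robin dispatcher or a replicated-state FT scheme, and charging each its overhead, is the kind of comparison the abstract describes.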
The large design space of modern computer architectures calls for performance modelling tools that facilitate the evaluation of different alternatives. In this paper we give an overview of the Mermaid multicomputer simulation environment. This environment allows the evaluation of a wide range of architectural design tradeoffs while delivering reasonable simulation performance. To achieve this, simulation takes place at the level of abstract machine instructions rather than at the level of real instructions. Moreover, a less detailed mode of simulation is also provided, so when accuracy is not the primary objective, this mode can yield high simulation efficiency. As a consequence, Mermaid makes both fast prototyping and accurate evaluation of multicomputer architectures feasible.
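The two-level idea can be sketched in a few lines of Python (our own toy, inspired by the abstract-instruction approach rather than Mermaid's actual trace format): the application emits coarse operation events, a fast mode charges fixed average costs, and a detailed mode adds a toy cache model, trading simulation speed for accuracy.

```python
LINE, SETS = 32, 256                      # toy direct-mapped cache geometry
COST = {"alu": 1, "load": 2, "store": 2}  # cycles per abstract instruction
MISS_PENALTY = 20

def simulate(trace, detailed=True):
    """Charge cycles for a stream of abstract (operation, address) events.
    The detailed mode models a cache; the fast mode skips it."""
    cycles, tags = 0, [None] * SETS
    for op, addr in trace:
        cycles += COST.get(op, 1)
        if detailed and op in ("load", "store"):
            idx = (addr // LINE) % SETS
            tag = addr // (LINE * SETS)
            if tags[idx] != tag:          # miss: fetch the line
                cycles += MISS_PENALTY
                tags[idx] = tag
    return cycles

trace = [("load", 64 * i) for i in range(1000)] + [("alu", 0)] * 500
print("detailed:", simulate(trace), "fast:", simulate(trace, detailed=False))
```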
The goal of the RaPiD (Reconfigurable Pipelined Datapath) architecture is to provide high-performance configurable computing for a range of computationally intensive applications that demand special-purpose hardware. This is accomplished by mapping the computation onto a deep pipeline built from a configurable array of coarse-grained computational units. A key feature of RaPiD is the combination of static and dynamic control. While the underlying computational pipelines are configured statically, a limited amount of dynamic control is provided, which greatly increases the range and capability of applications that can be mapped to RaPiD. This paper illustrates this mapping and configuration for several important applications, including an FIR filter, a 2-D DCT, motion estimation, and parametric curve generation; it also shows how static and dynamic control are used to perform complex computations.
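A conceptual Python sketch of the static/dynamic split, using the FIR filter example named in the abstract (the phase structure and stage model are our own simplification of the architecture): the chain of multiply-accumulate stages is the statically configured datapath, while a small amount of dynamic control switches the stages between a coefficient-loading phase and a compute phase.

```python
def fir_pipeline(samples, coeffs):
    n = len(coeffs)
    taps = [0.0] * n                 # the statically configured MAC stages
    # Dynamic control, phase 1: stream coefficients into the stages.
    for i, c in enumerate(coeffs):
        taps[i] = c
    # Dynamic control, phase 2: stream samples through the pipeline.
    window = [0.0] * n               # shift register linking the stages
    out = []
    for x in samples:
        window = [x] + window[:-1]
        acc = 0.0
        for value, coeff in zip(window, taps):
            acc += value * coeff     # each MAC stage adds its product
        out.append(acc)
    return out

print(fir_pipeline([1, 0, 0, 0, 2], [0.5, 0.25]))
# -> [0.5, 0.25, 0.0, 0.0, 1.0]
```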