ISBN (print): 9780780390379
We have developed an instant environment for protein sequence profile scanning by re-mastering the Knoppix distribution as a high-throughput computing edition. This is an integrated environment with bioinformatics applications, a parallel file system, and a scheduler. It allows biology researchers to run scanning applications immediately on their computational pool without changing any system configuration.
ISBN (print): 0769521320
Much research has been devoted to designing concurrency control algorithms for real-time database systems that not only satisfy consistency requirements but also meet transaction timing constraints as far as possible. Optimistic concurrency control protocols have the attractive properties of being non-blocking and deadlock-free, but they suffer from late conflict detection and transaction restarts. Although dynamic adjustment of serialization order (DASO) reduces the number of transaction restarts in real-time database systems, some unnecessary restarts remain. In this paper, we first propose a new method called dynamic adjustment of execution order (DAEO) and a new optimistic concurrency control algorithm based on it, which reduces the number of unnecessary restarts to nearly zero and outperforms previous algorithms; we then discuss the experiments and their results.
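The late-conflict-detection problem the abstract refers to can be seen in a minimal backward-validation sketch. This is plain OCC, not the paper's DAEO algorithm; the `Txn` class, the timestamp scheme, and the restart rule are illustrative assumptions:

```python
class Txn:
    # Minimal optimistic transaction: reads and writes are only
    # tracked during execution; conflicts are detected at commit
    # time (hence "late conflict detection").
    def __init__(self, start_ts):
        self.start_ts = start_ts
        self.read_set = set()
        self.write_set = set()

def backward_validate(txn, committed):
    # committed: list of (commit_ts, write_set) for finished txns.
    # Plain OCC restarts `txn` whenever a transaction that committed
    # after `txn` started wrote an item that `txn` read. Adjusting
    # the serialization/execution order (DASO/DAEO) aims to commit
    # in many of these cases instead of restarting.
    for commit_ts, wset in committed:
        if commit_ts > txn.start_ts and wset & txn.read_set:
            return False  # conflict detected -> restart under plain OCC
    return True
```

In this sketch a transaction that started before a conflicting commit is always restarted; the paper's contribution is precisely about classifying which of these restarts are unnecessary.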
ISBN (print): 0769521320
As the first geographically distributed supercomputer on the Top 500 list, the AVIDD facility of Indiana University ranked 50th in June 2003, achieving 1.169 teraflops on the LINPACK benchmark. In this paper, our work on improving LINPACK performance is reported, and the impact of the math kernel, the LINPACK problem size, and network tuning is analyzed based on the LINPACK performance model.
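The problem-size tuning mentioned above is commonly driven by a simple memory-capacity estimate: the N x N double-precision matrix should fill most of aggregate memory without swapping. A hedged sketch (the 80% fill factor and the function name are assumptions, not figures from the paper):

```python
import math

def hpl_problem_size(nodes, mem_per_node_gib, fill=0.8):
    # LINPACK/HPL stores an N x N matrix of 8-byte doubles spread
    # over all nodes. Choose N so the matrix occupies roughly
    # `fill` of the aggregate memory: 8 * N^2 = fill * total_bytes.
    total_bytes = nodes * mem_per_node_gib * 2**30
    return int(math.sqrt(fill * total_bytes / 8))
```

Larger N generally raises the achieved fraction of peak (more computation per communication), which is why problem size appears alongside the math kernel and the network as a first-order tuning knob.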
ISBN (print): 0769521320
Data mining is an application that is commonly executed on massively parallel systems, often clusters with hundreds of processors. With a disk-based data store, however, the data must first be delivered to the processors before effective mining can take place. Here, we describe the prototype of an experimental system that moves processing closer to where the data resides, on the disk, and exploits massive parallelism via reconfigurable hardware to perform the computation. The performance of the prototype is also reported.
ISBN (print): 0769521320
In this paper a fast algorithm for solving a large system with an essentially Toeplitz five-band coefficient matrix is presented. The first two and last two rows are influenced by boundary conditions. The five-band core of this matrix is factored as the product of tridiagonal matrices so that the linear system can be solved more efficiently. An error term for the approximate solution is presented, following the work of Yan and Chung [12]. An algorithm is developed for solving the two systems and is tested on two multiprocessor machines with different architectures.
Cluster computing has become a valid alternative for high-performance computing. To fully exploit the computing power of these environments, one must utilize high-performance network and protocol technologies, since p...
ISBN (print): 0769521320
Within the trend of object-based distributed computing, we present the design and implementation of a numerical simulation of electromagnetic wave propagation. A sequential Java design and implementation is presented first. A distributed and parallel version is then derived from it using an active object pattern. In addition, benchmarks are presented for this non-embarrassingly-parallel application. A first contribution of this paper is the sequential object-oriented design, which proved to be very modular and extensible; the classes and abstractions are designed to allow both element- and volume-type methods, valid on structured, unstructured, or hybrid meshes. The performance of this highly modular version proved to be in the same range as that of a Fortran version. It is also shown how smoothly the sequential version can be distributed, keeping the same structure and object abstractions while allowing larger data sizes to be handled. Finally, benchmarks on up to 64 processors compare the performance of the sequential and parallel versions, putting the results in perspective with a comparable Fortran version.
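The active object pattern mentioned above turns method invocations into messages served by a dedicated worker thread, with futures carrying results back, so callers are decoupled from where and when the computation runs. A minimal sketch (the original work is in Java; the class and method names here are assumptions):

```python
import queue
import threading
from concurrent.futures import Future

class ActiveObject:
    # Calls are enqueued as (function, args, future) messages and
    # executed one at a time by a single worker thread; the caller
    # gets a Future immediately instead of blocking on the work.
    def __init__(self):
        self._q = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            fn, args, fut = self._q.get()
            try:
                fut.set_result(fn(*args))
            except Exception as exc:
                fut.set_exception(exc)

    def call(self, fn, *args):
        fut = Future()
        self._q.put((fn, args, fut))
        return fut
```

In a distributed setting the same interface lets the queue be fed by remote invocations, which is what makes the sequential-to-parallel transition smooth: the object's public methods do not change.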
ISBN (print): 0769521320
Real-time systems are finding use in complex and dynamic environments such as cruise controllers, life support systems, and nuclear reactors. These systems have separate components that sense, control, and stabilize the environment towards achieving the mission or target. These cooperating components synchronize, compute, and control themselves locally, or rely on a centralized component to do so. Distributed computing techniques improve the overall performance and reliability of large real-time systems with spread-out components. In this paper, we propose and evaluate three distributed dispatching algorithms for Partially Clairvoyant schedules. For a job set of size n, the algorithms have dispatch times of O(1) per job. In the first algorithm, one processor executes all the jobs while the other processors compute the dispatch functions; this scenario simplifies the design and suits situations where one processor controls all the devices. In the other algorithms, all processors execute the jobs assigned to them and compute the dispatch functions in a defined order, a plausible scenario in distributed control. Owing to the unavailability of benchmarks, we create various test cases to evaluate the algorithms.
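A drastically simplified, single-processor view of O(1)-per-job dispatching (an illustrative assumption, not one of the paper's three distributed algorithms): each job's start time is derived only from the previous job's observed finish time, so the work per dispatch is constant regardless of the job-set size n:

```python
def dispatch(jobs, exec_times):
    # jobs: list of (release, deadline) pairs in execution order.
    # exec_times: actual execution times, only observed at run time
    # (the "partially clairvoyant" aspect: the dispatcher reacts to
    # them rather than knowing them in advance).
    starts = []
    prev_finish = 0.0
    for (release, deadline), e in zip(jobs, exec_times):
        s = max(release, prev_finish)  # O(1): uses only prev_finish
        if s + e > deadline:
            raise RuntimeError("deadline miss: schedule infeasible")
        starts.append(s)
        prev_finish = s + e
    return starts
```

The paper's algorithms distribute both the execution of the jobs and the evaluation of the dispatch functions across processors while preserving this constant per-job cost.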
ISBN (print): 0769521320
The complexity of real-time distributed applications is steadily increasing. A well-known technique for managing such complexity is to decompose the whole system into quasi-independent distributed subsystems. Inter-subsystem communication, when necessary, is performed via gateway nodes that filter incoming and outgoing traffic. For real-time systems, this architecture poses additional design challenges, since both intra- and inter-network message exchanges with real-time constraints must be considered. In the work carried out so far, the FTT communication paradigm has been provided with tools for supporting flexible real-time communication on isolated networks. This work presents a first approach to incorporating multi-segment support into the FTT protocol family. In particular, two approaches are presented, analyzed, and compared, both of which break end-to-end deadlines into parameters that are local to each of the interconnected networks.
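One simple way to break an end-to-end deadline into per-network local parameters, as the abstract describes, is proportional decomposition: each segment receives a share of the deadline proportional to its base transmission cost. This sketch is an illustrative assumption, not one of the paper's two approaches:

```python
def split_deadline(end_to_end_deadline, segment_costs):
    # segment_costs: per-segment base transmission times for the
    # message (e.g. frame length / segment bandwidth, plus gateway
    # forwarding latency). Each segment's local deadline is its
    # proportional share of the end-to-end budget, so the shares
    # always sum back to the original deadline.
    total = sum(segment_costs)
    return [end_to_end_deadline * c / total for c in segment_costs]
```

A gateway node can then admit or reject a multi-segment stream by checking each local deadline against that segment's own schedulability analysis, without reasoning about the other segments.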
ISBN (print): 0769521320
Distributed shared memory (DSM) is one of the main abstractions for implementing data-centric information exchange among a set of processes. Ensuring causal consistency means that all operations executed at each process comply with a cause-effect relation. This paper first provides an optimality criterion for a protocol P that enforces causal consistency on a DSM. This criterion addresses the number of write operations delayed by P (write delay optimality). We then present a protocol that is optimal with respect to write delay optimality and show that previous protocols in the literature are not optimal with respect to this criterion.
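Causal consistency is classically enforced by delaying remote writes until their causal predecessors have been applied, e.g. using vector clocks. The standard sketch below illustrates the write-delay cost that the paper's criterion measures; it is the textbook delivery condition, not the paper's optimal protocol:

```python
class Replica:
    # One DSM replica among n processes. Remote writes are buffered
    # until the vector-clock delivery condition holds, i.e. until
    # every write that causally precedes them has been applied.
    def __init__(self, pid, n):
        self.pid = pid
        self.vc = [0] * n     # writes seen per process
        self.store = {}
        self.buffer = []      # delayed (causally early) writes

    def local_write(self, key, val):
        self.vc[self.pid] += 1
        self.store[key] = val
        return (self.pid, list(self.vc), key, val)  # message to send

    def receive(self, msg):
        self.buffer.append(msg)
        self._drain()

    def _deliverable(self, sender, vc):
        # Next write from `sender`, and no unseen writes from others.
        return (vc[sender] == self.vc[sender] + 1 and
                all(vc[k] <= self.vc[k]
                    for k in range(len(vc)) if k != sender))

    def _drain(self):
        progress = True
        while progress:
            progress = False
            for msg in list(self.buffer):
                sender, vc, key, val = msg
                if self._deliverable(sender, vc):
                    self.store[key] = val
                    self.vc[sender] = vc[sender]
                    self.buffer.remove(msg)
                    progress = True
```

Every message held in `buffer` is a delayed write; a protocol that is write-delay optimal in the paper's sense delays a write only when doing otherwise would violate the cause-effect relation.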