This paper proposes an architecture for designing fault-tolerant distributed object systems. The proposed architecture attempts to bring advances in client-server, remote procedure call, reliable group communication, ...
ISBN (print): 3540631380
The proceedings contain 23 papers. The special focus in this conference is on Discrete Algorithms, Programming Environments and Implementations. The topics include: parallel mesh generation; efficient massively parallel quicksort; practical parallel list ranking; on computing all maximal cliques distributedly; a probabilistic model for best-first search B&B algorithms; programming irregular parallel applications in Cilk; a variant of the biconjugate gradient method suitable for massively parallel computing; efficient implementation of the improved quasi-minimal residual method on massively distributed memory computers; programming with shared data abstractions; supporting run-time parallelization of DO-ACROSS loops on general networks of workstations; engineering diffusive load balancing algorithms using experiments; comparative study of static scheduling with task duplication for distributed systems; a new approximation algorithm for the register allocation problem; improving cache performance through tiling and data alignment; a support for non-uniform parallel loops and its application to a flame simulation code; performance optimization of combined variable-cost computations and I/O; parallel software caches; communication-efficient parallel searching; parallel sparse Cholesky factorization and unstructured graph partitioning for sparse linear system solving.
ISBN (print): 0818677554
In this paper, we describe distributed algorithms for combinational fault simulation assuming the classical stuck-at fault model. Our algorithms have been implemented on a network of Sun workstations under the Parallel Virtual Machine (PVM) environment. Two techniques are used for subdividing work among processors: test set partition and fault set partition. The sequential algorithm for fault simulation, used on individual nodes of the network, is based on a novel path compression technique proposed in this paper. We describe experimental results on a number of ISCAS '85 benchmark circuits.
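The fault-set partition idea above can be sketched in a few lines: each "processor" simulates the full test set against its own subset of the fault list, and the detected sets are merged. The toy two-gate circuit, net names, and the serial loop standing in for PVM workers below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of fault-set partitioning for stuck-at fault simulation.
# Toy circuit: outputs c = AND(a, b) and d = OR(a, b); the serial
# "worker" loop stands in for PVM processes on separate workstations.

from itertools import product

def simulate(inputs, fault=None):
    """Evaluate the circuit; `fault` is (net, stuck_value) or None."""
    def val(net, computed):
        if fault and fault[0] == net:
            return fault[1]          # stuck-at value overrides the net
        return computed
    a = val("a", inputs["a"])
    b = val("b", inputs["b"])
    c = val("c", a & b)
    d = val("d", a | b)
    return (c, d)

# All single stuck-at faults and an exhaustive test set.
faults = [(net, v) for net in ("a", "b", "c", "d") for v in (0, 1)]
tests = [dict(zip("ab", bits)) for bits in product((0, 1), repeat=2)]

def worker(fault_chunk):
    """One 'processor': simulate its fault subset against every test."""
    detected = set()
    for f in fault_chunk:
        for t in tests:
            if simulate(t, fault=f) != simulate(t):
                detected.add(f)
                break                # fault dropped once detected
    return detected

# Fault-set partition: split the fault list across two "processors",
# then merge the per-processor detection results.
chunks = [faults[::2], faults[1::2]]
all_detected = set().union(*(worker(c) for c in chunks))
coverage = len(all_detected) / len(faults)
print(f"fault coverage: {coverage:.0%}")   # prints: fault coverage: 100%
```

Test-set partition is the dual arrangement: every processor holds the whole fault list but only a slice of the test vectors, which trades duplicated fault bookkeeping for cheaper fault-list distribution.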
The k shortest loopless paths problem is a significant combinatorial problem which arises in many contexts. When the size of the network is very large, exact algorithms fail to find the best solution in a reasonable time. The aim of this paper is to propose efficient parallel algorithms that obtain a good approximation of the solution to the k shortest loopless paths problem between two arbitrary nodes when the network size is large. The heuristic used is known in the literature as simulated annealing. Preliminary tests have been conducted to evaluate the validity of the proposed algorithms. The quality of the results obtained provides a significant basis for further experimentation.
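A minimal sequential sketch of the idea: simulated annealing over the space of loopless paths, keeping the k best distinct paths seen. The graph, the re-routing neighbourhood move, and the cooling schedule below are illustrative choices under stated assumptions, not the authors' exact (and parallel) algorithm.

```python
# Simulated annealing heuristic for the k shortest loopless paths
# problem, on a toy weighted digraph (illustrative, not the paper's).

import math
import random

random.seed(1)

# weighted directed graph: node -> {neighbour: edge weight}
G = {
    "s": {"a": 1, "b": 4},
    "a": {"b": 1, "c": 5, "t": 8},
    "b": {"c": 2, "t": 6},
    "c": {"t": 1},
    "t": {},
}

def cost(path):
    return sum(G[u][v] for u, v in zip(path, path[1:]))

def random_walk(start, target, forbidden):
    """Loopless random walk from start to target; None on a dead end."""
    path, seen = [start], set(forbidden) | {start}
    while path[-1] != target:
        nxt = [v for v in G[path[-1]] if v not in seen]
        if not nxt:
            return None
        v = random.choice(nxt)
        path.append(v)
        seen.add(v)
    return path

def neighbour(path):
    """Keep a random prefix of the path, re-route the rest looplessly."""
    i = random.randrange(len(path) - 1)
    tail = random_walk(path[i], path[-1], path[:i])
    return path[:i] + tail if tail else path

def anneal(s, t, k, iters=2000, T0=5.0, alpha=0.995):
    cur = random_walk(s, t, [])
    best = {tuple(cur): cost(cur)}        # k best distinct paths seen
    T = T0
    for _ in range(iters):
        cand = neighbour(cur)
        delta = cost(cand) - cost(cur)
        # Metropolis acceptance: always downhill, sometimes uphill.
        if delta <= 0 or random.random() < math.exp(-delta / T):
            cur = cand
        best[tuple(cur)] = cost(cur)
        T *= alpha                        # geometric cooling
    return sorted(best.items(), key=lambda kv: kv[1])[:k]

top = anneal("s", "t", k=3)
for path, c in top:
    print(" -> ".join(path), c)
```

A parallel variant along the paper's lines could run independent annealing chains on separate processors and merge their k-best pools, since the chains share no state.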
The N-way sharing of DB2 databases, which became possible with the general release of DB2 Version 4, is one of the architectural foundations required to fully exploit Parallel Sysplex. Implementing this architecture is a complex, multifaceted process. Fundamental yet incremental improvements are required, such as those provided by the new Type II index facility in DB2 Version 4. The Type II index facility opens the door to other DB2 Version 4 enhancements, including parallel query processing, improved partition independence, row-level locking, and read-through locks. However, as you might expect, there are some tradeoffs.
ISBN (print): 9780780342293
From the Publisher: The ICA3PP-97 proceedings comprise a well-defined set of papers in the area of parallel processing. Specific topics covered include: basic issues of algorithms and architectures for parallel processing; parallel processing prospects; routing in parallel computer systems; special-purpose parallel architectures; operating environments; scheduling; parallelisation and parallelising computers; computing on clusters of workstations; parallel algorithms; parallel applications; parallel algorithms and architectures for neural programs; databases and parallel processing. Held in Melbourne, Australia, this important conference brought together developers and researchers from universities, industry and government to advance the level of knowledge of parallel and distributed systems and processing.
Distributed computing systems frequently contain large numbers of idle workstations. Batch job scheduling systems exist that assign jobs to idle workstations and control job execution so that interactive users are impacted as little as possible. However, when workstations are used for interactive as well as batch loads, compromises have to be made. Either interactive users are favored, reducing the CPU cycles available for batch job processing, or batch jobs are prioritized, which may disturb interactive users. By analyzing the usage records of an interactively used workstation cluster of more than 100 machines, we quantify the loss of available CPU cycles for batch job processing against the collisions with interactive users. We observed many unused resources even during working hours. Because the resources available for batch job processing in this environment change dynamically and cannot be predicted exactly, current scheduling algorithms often deliver poor job turnaround times. We have developed a novel batch job scheduling algorithm that considers the usage history of individual workstations and delivers better job turnaround times. A newly built tool allows the performance of different scheduling algorithms to be evaluated under the same environmental conditions.
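One way to picture history-based placement: score each workstation by the probability, estimated from its usage records, that it stays idle for the job's expected runtime, and place the job on the highest-scoring machine. The machine names, hourly idle fractions, and independence assumption below are made-up illustrations, not the paper's algorithm.

```python
# Sketch of usage-history-based batch job placement.
# idle_history[m][h] = fraction of past days machine m was idle at hour h
# (values here are invented for illustration).
idle_history = {
    "ws01": [0.9] * 8 + [0.3] * 10 + [0.8] * 6,  # busy in working hours
    "ws02": [0.95] * 24,                         # almost always idle
    "ws03": [0.6] * 24,                          # lightly but evenly used
}

def idle_probability(machine, start_hour, runtime_hours):
    """Estimate P(machine stays idle for the whole job), treating hours
    as independent -- a deliberate simplification for the sketch."""
    p = 1.0
    for h in range(start_hour, start_hour + runtime_hours):
        p *= idle_history[machine][h % 24]
    return p

def place(job_runtime, start_hour, machines):
    """Pick the machine with the best predicted chance of an
    uninterrupted run (i.e., no collision with interactive users)."""
    return max(machines,
               key=lambda m: idle_probability(m, start_hour, job_runtime))

best = place(job_runtime=4, start_hour=9, machines=list(idle_history))
print(best)
```

Placing a 4-hour job at 09:00 favors ws02 here: ws01's history predicts interactive use during working hours, so starting the job there would likely either disturb a user or get the job preempted.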
Rapid advances in high performance computing (HPC) and the Internet are heralding a paradigm shift to network-based scientific software servers, libraries, repositories and problem solving environments. According to t...
The integration of the stiff, tightly coupled ordinary differential equations that describe pollutant chemical reactions is the most computationally intensive task of photochemical models, requiring at least 70% of the total CPU time. As a consequence of the local nature of these equations, the integration can be performed very efficiently on a SIMD architecture. In this work we present the porting of the QSSA (Quasi Steady State Approximation) chemical solver of the CALGRID photochemical model to the SIMD massively parallel platform Quadrics/APE100, which offers, in the QH4 configuration, a peak performance of 25 Gflops.
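The QSSA idea the solver is named after can be shown on a toy reaction chain: for A -k1-> B -k2-> C with k2 much larger than k1, the fast intermediate B is set to its algebraic steady-state value instead of being integrated, which removes the stiffness from the remaining explicit step. The rate constants and step size below are illustrative, unrelated to CALGRID's actual chemistry.

```python
# Toy Quasi Steady State Approximation (QSSA) for A -k1-> B -k2-> C.
# Exact system: dA/dt = -k1*A, dB/dt = k1*A - k2*B, dC/dt = k2*B.
# With k2 >> k1 the B equation is stiff; QSSA replaces it with
# dB/dt ~ 0, i.e. B = k1*A / k2, so a plain explicit step suffices.

k1, k2 = 1.0, 1000.0        # slow production, fast consumption of B
dt, steps = 0.01, 500       # step size far larger than 1/k2 = 1 ms

A, C = 1.0, 0.0             # initial concentrations
for _ in range(steps):
    B = k1 * A / k2         # QSSA: fast species at algebraic steady state
    A += dt * (-k1 * A)     # slow species integrated with explicit Euler
    C += dt * (k2 * B)      #   = dt * k1 * A under the QSSA

print(f"A={A:.4f}  B={B:.2e}  C={C:.4f}  mass={A + B + C:.4f}")
```

Note that a naive explicit integrator would need dt well below 1/k2 to stay stable, while the QSSA step above uses a step 10x larger than that limit; the per-grid-cell locality of such updates is what makes the solver a good fit for a SIMD machine.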
ISBN (print): 0780343182
The MetaCenter 3-year project was launched in 1996 as a part of the TEN-34 CZ activities of the Czech Republic. Its main goal is to create a MetaComputer whose nodes will be computers at the largest computing centers of the Czech Republic. The project will investigate ATM network bandwidth allocation protocols to support unhindered execution of parallel tasks communicating via the ATM backbone. Effects of network latency will also be investigated. A personalized computing environment may be moved freely with researchers, increasing both their mobility and the efficiency of computing-resource utilization. The metacomputing paradigm also increases efficient use of computers and installed software.