Several hard problems have to be addressed in order to parallelize image analysis algorithms. Indeed, at the region level, these algorithms handle irregular (and sometimes strongly dynamic) data structures. Moreover, they often lead to an unbalanced amount of computation, which is nearly impossible to foresee offline. This paper focuses on the parallelization of the ANET image analysis programming environment. Thanks to graph-related data structures and efficient computing primitives, ANET allows rapid prototyping of image algorithms. In return, however, these primitives are difficult to parallelize. We present a solution for powerful implicit parallelization of the ANET environment, without any change to the application programming interface. The ANET API is summarized and illustrated with some examples. Several parallelization experiments are reported. The solution we propose is detailed, and results are given on complete image analysis applications. ANET appears to be a powerful environment, both for its expressiveness, which allows rapid prototyping, and for its implicit parallelization, which yields good computation times.
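To make the region-level irregularity concrete, here is a minimal C sketch of a generic region adjacency graph and one traversal over it. This is only an illustration of the kind of graph-related structure involved, with hypothetical type and field names; it is not ANET's actual API or primitives.

```c
/* Hypothetical region-level structure: every region keeps a variable-length
 * list of neighbouring regions, so the graph is irregular and may change
 * (split/merge) during the analysis. Not ANET's API, just an illustration. */
typedef struct Region {
    int             id;
    double          feature;      /* e.g. mean grey level of the region   */
    int             n_neighbors;  /* degree varies from region to region  */
    struct Region **neighbors;    /* adjacency list, reallocated on merge */
} Region;

/* A region-level primitive: propagate the maximum feature value over the
 * adjacency graph. The work per region depends on its (unpredictable)
 * degree, which is why static load balancing is hard. */
void propagate_max(Region *regions, int n_regions) {
    for (int i = 0; i < n_regions; ++i) {
        for (int j = 0; j < regions[i].n_neighbors; ++j) {
            Region *nb = regions[i].neighbors[j];
            if (nb->feature > regions[i].feature)
                regions[i].feature = nb->feature;
        }
    }
}
```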
View-Oriented parallel programming is based on Distributed Shared Memory, which is friendly and easy for programmers to use. It requires the programmer to divide shared data into views according to the memory access pattern of the parallel algorithm. One of the advantages of this programming style is that it offers the performance potential for the underlying Distributed Shared Memory system to optimize consistency maintenance. It also allows the programmer to participate in performance optimization of a program through wise partitioning of the shared data into views. In this paper, we compare the performance of View-Oriented parallel programming against the Message Passing Interface. Our experimental results demonstrate a performance gap between View-Oriented parallel programming and the Message Passing Interface. The contributing overheads behind the performance gap are discussed and analyzed, which sheds much light on further performance improvement of View-Oriented parallel programming. Key Words: Distributed Shared Memory, View-based Consistency, View-Oriented parallel programming, Cluster Computing, Message Passing Interface
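As a rough illustration of the programming style, the following C sketch assumes a DSM runtime that exposes per-view acquire/release primitives. The names vo_acquire_view, vo_release_view, vo_barrier, vo_view_data and vo_view_size are hypothetical placeholders, not the actual View-Oriented API.

```c
/* A minimal sketch of the view-oriented style, assuming a DSM runtime with
 * acquire/release primitives per view. All vo_* names are hypothetical. */
extern void    vo_acquire_view(int view_id);  /* gain access, make view consistent */
extern void    vo_release_view(int view_id);  /* publish updates made to the view  */
extern void    vo_barrier(void);
extern double *vo_view_data(int view_id);     /* data belonging to one view        */
extern int     vo_view_size(int view_id);

/* Each process works mostly on its own view, so the DSM system only has to
 * maintain consistency for the views a process actually acquires. */
void scale_my_view(int my_view, double factor) {
    vo_acquire_view(my_view);
    double *v = vo_view_data(my_view);
    for (int i = 0; i < vo_view_size(my_view); ++i)
        v[i] *= factor;
    vo_release_view(my_view);
    vo_barrier();  /* synchronize with processes updating other views */
}
```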
ISBN: (Print) 0769516688
This paper presents an environment for supporting parallel/distributed programming using Java with RMI and RMI-IIOP (CORBA). The environment implements the notion of Shared Objects (SO) and Distributed Shared Objects (DSO), and introduces Active Shared Mobile Objects (ASMO). The Environment of Shared Objects for Web (ESOW) provides an environment for passive objects (lists, queues and stacks) and active objects (processes). This environment simulates a Distributed Shared Memory (DSM) with security, good performance (load balancing), reliability (fault tolerance) and transparency. It makes it possible for computers to share two "abundant" resources on the Web: memory and processor. The Web is then seen as a pool of processors for sharing. We present a test with evolution strategies to solve multimodal functions with many local minima. We also confirm that other tests have validated the use of ESOW in metacomputing.
The authors discuss methods for expressing and tuning the performance of parallel programs, using two programming models in the same program: distributed and shared memory. Such methods are important both for anyone who uses large parallel machines to run parallel programs and for those who study combinations of the two programming models.
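A common concrete realization of this combination is MPI across nodes with OpenMP threads inside each node; the minimal C sketch below assumes that pairing for illustration, since the paper discusses the two models in general terms.

```c
/* Hybrid sketch: MPI distributes blocks of work across processes (distributed
 * memory), OpenMP threads share each process's block (shared memory). */
#include <mpi.h>
#include <stdio.h>

#define N_LOCAL 1000000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    static double block[N_LOCAL];
    double local_sum = 0.0, global_sum = 0.0;

    /* Shared-memory model: threads on one node cooperate on the local block. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < N_LOCAL; ++i) {
        block[i] = (double)(rank + 1);
        local_sum += block[i];
    }

    /* Distributed-memory model: processes combine their partial results. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}
```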
This paper presents an environment for supporting parallel/distributed programming using Java with RMI and RMI-IIOP (CORBA). The environment implements the notion of shared objects (SO) and distributed shared objects (DSO), and introduces active shared mobile objects (ASMO). The Environment of Shared Objects for Web (ESOW) provides an environment for passive objects (lists, queues and stacks) and active objects (processes). This environment simulates a distributed shared memory (DSM) with security, good performance (load balancing), reliability (fault tolerance) and transparency. It makes it possible for computers to share two "abundant" resources on the Web: memory and processor. The Web is then seen as a pool of processors for sharing. We present a test with evolution strategies to solve multimodal functions with many local minima. We also confirm that other tests have validated the use of ESOW in metacomputing.
A design pattern is a description of a high-quality solution to a frequently occurring problem in some domain. A pattern language is a collection of design patterns that are carefully organized to embody a design meth...
This paper presents the parallel implementation of a boundary element code for the solution of 2D elastostatic problems using linear elements. The original code is described in detail in a reference text in the area [Boundary element techniques: theory and applications in engineering, 1984]. The Fortran code is reviewed and rewritten to run on shared and distributed memory systems using standard and portable libraries: OpenMP, LAPACK and ScaLAPACK. The implementation process provides guidelines for developing parallel applications of the Boundary Element Method, applicable to many science and engineering problems. Numerical experiments on an SGI Origin 2000 show the effectiveness of the proposed approach. (C) 2004 Elsevier Ltd. All rights reserved.
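The original implementation is in Fortran; as a rough illustration of the shared-memory part of the pattern, the C sketch below parallelizes the dense influence-matrix assembly with OpenMP and solves the resulting system with LAPACK. The functions influence_coefficient and boundary_value are hypothetical placeholders for the problem-specific boundary-integral terms, not code from the paper.

```c
/* Sketch of the typical BEM pattern: O(n^2) assembly of a dense influence
 * matrix, parallelized with OpenMP, followed by a dense LAPACK solve. */
#include <lapacke.h>

double influence_coefficient(int i, int j);   /* boundary-integral kernel (placeholder) */
double boundary_value(int i);                 /* prescribed boundary data (placeholder)  */

int solve_bem_system(int n, double *H, double *rhs, lapack_int *ipiv) {
    /* Each matrix row is independent, so the assembly parallelizes trivially. */
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j)
            H[i * n + j] = influence_coefficient(i, j);
        rhs[i] = boundary_value(i);
    }
    /* Dense LU solve via LAPACK; ScaLAPACK's pdgesv plays the analogous role
     * on distributed-memory systems. */
    return LAPACKE_dgesv(LAPACK_ROW_MAJOR, n, 1, H, n, ipiv, rhs, 1);
}
```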
Large-scale parallelized distributed computing has been implemented in the message passing interface (MPI) environment to numerically solve eight reaction-diffusion equations representing the anatomy and treatment of breast cancer. The numerical algorithm is perturbed functional iteration (PFI), which is completely matrix-free. Fully distributed computation across multiple processors has been implemented on a large scale by porting the serial PFI code to the MPI environment. The implementation technique is general and can be applied to any serial code. This has been validated by comparing the computed results from the serial code with those from the MPI version of the parallel code.
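As a generic illustration of how a serial reaction-diffusion solver can be distributed with MPI, the C sketch below uses a 1D block decomposition with halo exchange. This is an assumed, simplified scheme for illustration only; it is not the authors' PFI code, and the kinetics term is a placeholder.

```c
/* Generic distribution of a reaction-diffusion update: each rank owns a block
 * of grid points plus two ghost cells exchanged with its neighbours. */
#include <mpi.h>

void exchange_halos(double *u, int n_local, int rank, int size) {
    MPI_Status st;
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
    /* Send rightmost interior point right, receive left neighbour's point
     * into our left ghost cell, and vice versa. */
    MPI_Sendrecv(&u[n_local], 1, MPI_DOUBLE, right, 0,
                 &u[0],       1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &st);
    MPI_Sendrecv(&u[1],           1, MPI_DOUBLE, left,  1,
                 &u[n_local + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &st);
}

/* One explicit update of u_t = D u_xx + f(u); u holds n_local interior points
 * plus ghost cells at u[0] and u[n_local + 1]. */
void update(const double *u, double *u_new, int n_local,
            double D, double dt, double dx) {
    for (int i = 1; i <= n_local; ++i) {
        double diffusion = D * (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (dx * dx);
        double reaction  = u[i] * (1.0 - u[i]);   /* placeholder kinetics */
        u_new[i] = u[i] + dt * (diffusion + reaction);
    }
}
```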