In the 1990s the Message Passing Interface Forum defined MPI bindings for Fortran, C, and C++. With the success of MPI, these relatively conservative languages have continued to dominate the parallel computing community. There are compelling arguments in favour of more modern languages like Java, including portability, better runtime error checking, modularity, and multi-threading. But these arguments have not converted many HPC programmers, perhaps due to the scarcity of full-scale scientific Java codes and the lack of evidence for performance competitive with C or Fortran. This paper attempts to redress this situation by porting two scientific applications to Java. Both applications are parallelized using our thread-safe Java messaging system, MPJ Express. The first application is Gadget-2, a massively parallel structure-formation code for cosmological simulations. The second uses the finite-difference time-domain (FDTD) method for simulations in computational electromagnetics. We evaluate and compare the performance of the Java and C versions of these two applications, and demonstrate that the Java codes can achieve performance comparable to legacy applications written in conventional HPC languages. Copyright (C) 2009 John Wiley & Sons, Ltd.
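The abstract names the finite-difference time-domain method as the second application's numerical kernel. As an illustrative sketch only, not the paper's code, a minimal 1D free-space FDTD update in normalized units (Courant number 0.5, soft Gaussian source; all names here are hypothetical) looks like:

```java
// Illustrative 1D FDTD sketch in free space, normalized units.
// Not the paper's implementation; class and method names are invented.
public class Fdtd1D {

    // Runs the leapfrog update for `steps` time steps over `cells` grid cells
    // and returns the final electric field Ez.
    static double[] run(int cells, int steps) {
        double[] ez = new double[cells]; // electric field
        double[] hy = new double[cells]; // magnetic field

        for (int t = 0; t < steps; t++) {
            // Update Ez from the curl of Hy (coefficient 0.5 = Courant number).
            for (int k = 1; k < cells; k++) {
                ez[k] += 0.5 * (hy[k - 1] - hy[k]);
            }
            // Soft Gaussian pulse injected at the centre of the grid.
            ez[cells / 2] += Math.exp(-Math.pow((t - 30) / 10.0, 2));
            // Update Hy from the curl of Ez, half a step later.
            for (int k = 0; k < cells - 1; k++) {
                hy[k] += 0.5 * (ez[k] - ez[k + 1]);
            }
        }
        return ez;
    }

    public static void main(String[] args) {
        double[] ez = Fdtd1D.run(200, 120);
        // After 120 steps the pulse has propagated outward from the centre.
        System.out.printf("Ez at centre: %.4f%n", ez[100]);
    }
}
```

In a parallel port of this kind, the grid would be decomposed across processes and the halo cells at each boundary exchanged every time step, which is where an MPI-style library such as MPJ Express comes in.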
In the past years, multi-core processors and clusters of multi-core processors have emerged as promising approaches to meet the growing demand for computing performance. They deliver scalable performance, albeit at the cost of tedious and complex parallel programming. Due to a lack of high-level abstractions, developers of parallel applications have to deal with low-level details such as coordinating threads or synchronizing processes. Thus, parallel programming remains a difficult and error-prone task. To shield the programmer from these low-level details, algorithmic skeletons have been proposed. They encapsulate typical parallel programming patterns and have emerged as an efficient and scalable approach to simplifying the development of parallel applications. In this paper, we present a Java binding of our skeleton library Muesli. We point out strengths and weaknesses of Java with respect to parallel and distributed computing. A matrix-multiplication benchmark demonstrates that Java generics deliver poor performance, so the Java implementation is unable to compete with the C++ implementation in terms of performance.
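One commonly cited reason Java generics can cost performance in numerical code, and a plausible reading of the benchmark result above, is that generics only work over reference types, so every primitive element must be boxed. A hedged illustration (not the paper's benchmark; the class and method names are invented):

```java
// Illustrative sketch: generic code over List<Double> boxes every element,
// while a primitive double[] stores values flat. Not the paper's benchmark.
import java.util.ArrayList;
import java.util.List;

public class BoxingDemo {

    // Generic sum: each element is a boxed Double object,
    // unboxed on every loop iteration.
    static double sumGeneric(List<Double> xs) {
        double s = 0.0;
        for (Double x : xs) s += x;
        return s;
    }

    // Primitive sum: a flat double[] with no per-element objects.
    static double sumPrimitive(double[] xs) {
        double s = 0.0;
        for (double x : xs) s += x;
        return s;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        List<Double> boxed = new ArrayList<>(n);
        double[] raw = new double[n];
        for (int i = 0; i < n; i++) { boxed.add(1.0); raw[i] = 1.0; }

        long t0 = System.nanoTime();
        double a = sumGeneric(boxed);
        long t1 = System.nanoTime();
        double b = sumPrimitive(raw);
        long t2 = System.nanoTime();

        System.out.printf("generic sum   = %.0f (%d us)%n", a, (t1 - t0) / 1000);
        System.out.printf("primitive sum = %.0f (%d us)%n", b, (t2 - t1) / 1000);
    }
}
```

Both loops compute the same result, but the generic version pays for object headers, pointer indirection, and unboxing on every element, overheads that C++ templates avoid because they are instantiated per type.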