NASA Technical Reports Server (NTRS) 20000068916: Portable Parallel Programming for the Dynamic Load Balancing of Unstructured Grid Applications by NASA Technical Reports Server (NTRS); published by
ISBN: (Print) 0769505007
In this paper, we present a new, powerful method for parallel program representation called the Data Driven Graph (DDG). DDG retains all the advantages of the classical Directed Acyclic Graph (DAG) and adds much more: a simple definition, flexibility, and the ability to represent loops and dynamically created tasks. With DDG, scheduling becomes an efficient tool for increasing the performance of parallel systems. DDG is not only a parallel program model; it also initiates a new parallel programming style that allows programmers to write parallel programs with minimal difficulty. We also present our parallel program development tool with support for DDG and scheduling.
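The paper's own DDG formalism is not reproduced in this abstract; as a hedged illustration of the general idea of data-driven execution, the sketch below runs a task as soon as all of its input data items exist. All names (Task, run_data_driven) and the three-task example are invented for illustration, not the paper's API.

```python
# Minimal, hypothetical sketch of data-driven task execution:
# a task becomes ready as soon as all of its input data items exist.

class Task:
    def __init__(self, name, inputs, outputs, fn):
        self.name, self.inputs, self.outputs, self.fn = name, inputs, outputs, fn

def run_data_driven(tasks, initial_data):
    data = dict(initial_data)          # data items produced so far
    pending = list(tasks)
    order = []                         # execution order, for inspection
    while pending:
        ready = [t for t in pending if all(i in data for i in t.inputs)]
        if not ready:
            raise RuntimeError("deadlock: no task is ready")
        for t in ready:
            results = t.fn(*(data[i] for i in t.inputs))
            for out, val in zip(t.outputs, results):
                data[out] = val
            order.append(t.name)
            pending.remove(t)
    return order, data

tasks = [
    Task("sum",  ["a", "b"], ["s"], lambda a, b: (a + b,)),
    Task("prod", ["a", "b"], ["p"], lambda a, b: (a * b,)),
    Task("diff", ["s", "p"], ["d"], lambda s, p: (s - p,)),
]
order, data = run_data_driven(tasks, {"a": 3, "b": 4})
print(order)        # "diff" fires only after "sum" and "prod" produce s and p
print(data["d"])    # (3 + 4) - (3 * 4) = -5
```

Unlike a static DAG, the ready set here is recomputed from the available data on every pass, which is what would let dynamically created tasks join the graph at run time.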
This paper investigates the sample weighting effect on Genetic Parallel Programming (GPP), which evolves parallel programs to solve training samples captured directly from a real-world system. The distribution of th...
NASA Technical Reports Server (NTRS) 20030025348: Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming by NASA Technical Reports Server (NTRS); published by
This paper presents a research-project-based methodology for teaching parallel programming to master's students in a High Performance Computing program. The requirements for completing a master's degree state that all students should be able to develop computer simulation programs using parallel and distributed computing technologies, regardless of their background and their preferences for in-depth study of high- or low-level programming, administration, and information security. Creating computer simulations based on high-performance computing is impossible without experience in solving such key issues of low-level parallel programming as data flow management, synchronization, load balancing, and fault tolerance. We believe that the best way to explore these issues is the phased implementation of the appropriate algorithms in an application, followed by computational experiments. Therefore, as the main tool for practical study, we offer the implementation of special project tasks. While developing the course tasks, we drew not only on our experience teaching parallel programming to undergraduate and graduate students but also on existing practice in the development of distributed computing systems. In addition to the classic tasks, students explored pairing algorithms, load balancing, and fault tolerance by implementing them in distributed applications and testing them in computational experiments. Our experience has shown that this approach to teaching parallel programming, which includes modeling and simulations, enabled students to proceed gradually from classic tasks to the implementation of full-scale research projects.
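One of the course's key issues, dynamic load balancing, can be sketched in a few lines: workers pull tasks from a shared queue, so a faster worker automatically takes more work. The task set (squaring numbers) and worker count are made up for the example; this is a generic classroom pattern, not the course's actual assignment.

```python
# Dynamic load balancing with a shared work queue: workers pull tasks
# as they finish, so no static assignment of tasks to workers is needed.

import queue
import threading

def balance(tasks, n_workers):
    work = queue.Queue()
    for t in tasks:
        work.put(t)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                t = work.get_nowait()
            except queue.Empty:
                return                 # queue drained: worker exits
            r = t * t                  # stand-in for a real computation
            with lock:                 # synchronize access to shared results
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

print(sorted(balance(range(10), 3)))   # squares 0..81, order normalized
```

The completion order is nondeterministic, which is exactly the kind of behavior the computational experiments described above would let students observe.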
This paper provides a review of contemporary methodologies and APIs for parallel programming, with representative technologies selected in terms of target system type (shared memory, distributed, and hybrid), communication patterns (one-sided and two-sided), and programming abstraction level. We analyze representatives in terms of many aspects including programming model, languages, supported platforms, license, optimization goals, ease of programming, debugging, deployment, portability, level of parallelism, constructs enabling parallelism and synchronization, features introduced in recent versions indicating trends, support for hybridity in parallel execution, and disadvantages. Such detailed analysis has led us to the identification of trends in high-performance computing and of the challenges to be addressed in the near future. It can help to shape future versions of programming standards, select technologies best matching programmers' needs, and avoid potential difficulties while using high-performance computing systems.
The aim of this study is to present an approach to introducing pipeline and parallel computing using a model of the multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to include. At the same time, the topic is among the most motivating, due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. The framework supports an educational platform for a constructivist learning process, enabling learners to experiment with the provided programming models, acquire competences in modern scientific research and computational thinking, and capture the relevant technical knowledge. It also provides an integral platform for a simultaneous and comparative introduction to pipelining and parallel computing. The C programming language was chosen for developing the programming models, with the Message Passing Interface (MPI) and OpenMP as the parallelization tools.
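The multiphase-queueing view of a pipeline can be sketched directly: each stage is a thread consuming from its input queue and feeding the next stage's queue. The stage functions and the None-sentinel shutdown protocol below are illustrative assumptions (the paper's models are in C with MPI/OpenMP).

```python
# A software pipeline modeled as a multiphase queueing system:
# stage i reads from queue i and writes to queue i+1.

import queue
import threading

def pipeline(items, stages):
    qs = [queue.Queue() for _ in range(len(stages) + 1)]

    def stage(fn, q_in, q_out):
        while True:
            x = q_in.get()
            if x is None:              # sentinel: propagate shutdown downstream
                q_out.put(None)
                return
            q_out.put(fn(x))

    threads = [threading.Thread(target=stage, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stages)]
    for th in threads:
        th.start()
    for x in items:
        qs[0].put(x)                   # feed the first phase
    qs[0].put(None)
    out = []
    while (y := qs[-1].get()) is not None:
        out.append(y)                  # drain the last phase
    for th in threads:
        th.join()
    return out

# three phases: scale, shift, format
print(pipeline([1, 2, 3], [lambda x: x * 10, lambda x: x + 1, str]))
```

Because every phase runs concurrently on its own queue, item k+1 can occupy stage 1 while item k is still in stage 2, which is the throughput argument pipelining rests on.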
The Resource Leveling Problem (RLP) is solved by heuristic, meta-heuristic, and mathematical methods. However, these methods cannot guarantee an exact solution for large problems. In this study, the number of feasible schedules that can be obtained by delaying the non-critical activities, without violating the precedence relationships or extending the project completion time, is computed. All of the feasible schedules, which together define the search domain, are enumerated, and a guaranteed optimum solution for the RLP is obtained by a method different from the existing ones. An exponential equation relating the search domain to the number of activities on the serial path is derived, and the insolvability of a large RLP in reasonable time by one central processing unit is verified. The problem is partitioned into equal sizes by parallel programming, so that each partition contains the same number of enumerations. Four RLP instances, the largest with 36 activities, are solved by exhaustive enumeration within reasonable solution time, proving that the proposed method is applicable. Exact solutions of larger problems can also be obtained by the proposed method if the problem is partitioned into smaller sizes.
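The enumeration idea can be made concrete: each non-critical activity may be delayed by 0 up to its total float, the search domain is the Cartesian product of those delay ranges, and that product can be split into equal-size slices for independent parallel enumeration. The float values below are invented for illustration; the paper's instances and resource objective are not reproduced.

```python
# Search domain of an RLP as the product of per-activity delay ranges,
# partitioned into nearly equal slices that parallel workers could enumerate.

from itertools import islice, product

floats = [2, 1, 3]                      # hypothetical total float per activity
domain_size = 1
for f in floats:
    domain_size *= f + 1                # (2+1) * (1+1) * (3+1) = 24 schedules

def partition(n_parts):
    """Split the delay-tuple enumeration into n_parts nearly equal slices."""
    combos = product(*(range(f + 1) for f in floats))
    size, rem = divmod(domain_size, n_parts)
    parts, it = [], iter(combos)
    for i in range(n_parts):
        parts.append(list(islice(it, size + (1 if i < rem else 0))))
    return parts

parts = partition(4)
print(domain_size)                      # 24
print([len(p) for p in parts])          # [6, 6, 6, 6]
```

The exponential growth the abstract mentions is visible here: the domain is a product of (float + 1) factors, one per activity on the serial path, which is why partitioning across processors becomes necessary.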
Java's support for parallel and distributed processing makes the language attractive for metacomputing applications, such as parallel applications that run on geographically distributed (wide-area) systems. To obtain actual experience with a Java-centric approach to metacomputing, we have built and used a high-performance wide-area Java system called Manta. Manta implements the Java Remote Method Invocation (RMI) model using different communication protocols (active messages and TCP/IP) for different networks. The paper shows how wide-area parallel applications can be expressed and optimized using Java RMI. It also presents performance results of several applications on a wide-area system consisting of four Myrinet-based clusters connected by ATM WANs. Finally, we discuss alternative programming models, namely object replication, JavaSpaces, and MPI for Java. Copyright (C) 2000 John Wiley & Sons, Ltd.
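Manta's RMI internals are not described in this abstract; as a language-neutral stand-in for the remote-method-invocation pattern itself, the sketch below uses Python's stdlib XML-RPC: a client proxy invokes a method on an object served by another thread. The port choice and the `add` method are invented for the example, and this says nothing about Manta's actual protocols.

```python
# RMI-style remote call: the client holds a proxy and invokes a method
# that actually executes in the server, with arguments and the result
# marshalled across the connection.

import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]         # OS-assigned free port
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)                # the call crosses the network boundary
print(result)                           # 5
server.shutdown()
```

The marshalling cost visible even in this toy (serializing arguments, a round trip, deserializing the result) is exactly what systems like Manta attack by swapping in faster protocols per network.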