There are many paradigms available to address the unique and complex problems introduced with parallel programming. These complexities have implications for computer science education as ubiquitous multi-core computers drive the need for programmers to understand parallelism. One major obstacle to student learning of parallel programming is that there is very little human factors evidence comparing the different techniques to one another, so there is no clear direction on which techniques should be taught and how. We performed a randomized controlled trial using 88 university-level computer science student participants performing three identical tasks to examine the question of whether or not there are measurable differences in programming performance between two paradigms for concurrent programming: threads compared to process-oriented programming based on Communicating Sequential Processes. We measured both time on task and programming accuracy using an automated token accuracy map (TAM) technique. Our results showed trade-offs between the paradigms using both metrics and the TAMs provided further insight about specific areas of difficulty in comprehension.
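To make the contrast between the two paradigms concrete, here is a minimal sketch (a hypothetical illustration, not the study's actual tasks) of the same summation job written in shared-state threading style versus a CSP-inspired style where workers communicate only through channels:

```python
# Shared-state threading versus CSP-style message passing: two ways to
# sum a list in parallel. In the first, workers mutate a shared total
# under a lock; in the second, workers share nothing and send results
# over a queue, the only point of contact.
import threading
import queue

def sum_with_threads(numbers, n_workers=4):
    """Shared-state style: workers update a common total under a lock."""
    total = 0
    lock = threading.Lock()
    chunks = [numbers[i::n_workers] for i in range(n_workers)]

    def worker(chunk):
        nonlocal total
        s = sum(chunk)
        with lock:                      # explicit synchronisation required
            total += s

    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads: t.start()
    for t in threads: t.join()
    return total

def sum_with_channels(numbers, n_workers=4):
    """CSP-style: no shared mutable state; results flow over a channel."""
    results = queue.Queue()             # the channel between processes
    chunks = [numbers[i::n_workers] for i in range(n_workers)]

    def worker(chunk):
        results.put(sum(chunk))         # send, never share

    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads: t.start()
    for t in threads: t.join()
    return sum(results.get() for _ in range(n_workers))
```

The lock-based version requires the programmer to reason about which state is shared; the channel-based version confines each worker's state, which is the property the process-oriented condition in the study builds on.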
An effective data-parallel programming environment will use a variety of tools that support the development of efficient data-parallel programs while insulating the programmer from the intricacies of the explicitly parallel code.
This paper presents EasyPAP, an easy-to-use programming environment designed to help students learn parallel programming. EasyPAP features a wide range of 2D computation kernels that the students are invited to parallelize using Pthreads, OpenMP, OpenCL or MPI. Execution of kernels can be interactively visualized, and powerful monitoring tools allow students to observe both the scheduling of computations and the assignment of 2D tiles to threads/processes. By focusing on algorithms and data distribution, students can experiment with diverse code variants and tune multiple parameters, resulting in richer problem exploration and faster progress towards efficient solutions. We present selected lab assignments which illustrate how EasyPAP improves the way students explore parallel programming.
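The tile-to-thread assignment that EasyPAP's monitoring tools visualise can be sketched in miniature (a hypothetical Python analogue; EasyPAP itself targets C with Pthreads/OpenMP/OpenCL/MPI):

```python
# A 2D kernel applied tile by tile, with tiles handed out to a pool of
# worker threads. Each worker computes the pixels of its assigned tile,
# mirroring the 2D-tile decomposition students explore in EasyPAP.
from concurrent.futures import ThreadPoolExecutor

def run_kernel_tiled(image, kernel, tile_size, n_threads=4):
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]

    def do_tile(y0, x0):
        # Compute one tile; tiles at the border may be smaller.
        for y in range(y0, min(y0 + tile_size, h)):
            for x in range(x0, min(x0 + tile_size, w)):
                out[y][x] = kernel(image, y, x)

    tiles = [(y, x) for y in range(0, h, tile_size)
                    for x in range(0, w, tile_size)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(lambda t: do_tile(*t), tiles))
    return out

# Example kernel: invert an 8-bit pixel.
invert = lambda img, y, x: 255 - img[y][x]
```

Varying `tile_size` and `n_threads` here corresponds to the parameter tuning the abstract describes: tile granularity trades scheduling overhead against load balance.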
The GAMMA parallel programming model is based on the multiset data structure: a succession of chemical reactions consumes the elements of the multiset and produces new elements according to specific rules. This paper extends the GAMMA model to a probabilistic version, called the P-GAMMA model, to realise evolutionary computations, namely probabilistic, classifier, bucket-brigade learning and genetic algorithms. We also explain how to support evolutionary computations through randomized choices and concurrent transformations on persistent, globally accessible tuple spaces using query processing and transaction mechanisms.
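The GAMMA reaction loop can be illustrated with a tiny sequential simulation (a hypothetical sketch, not the paper's P-GAMMA implementation): elements of the multiset "react" pairwise, with the reactants consumed and the products added back, until no reaction condition holds.

```python
# GAMMA-style computation: repeatedly pick a pair of elements satisfying
# the reaction condition (a nondeterministic choice, here randomized as
# in the probabilistic extension), consume them, and add the products.
# The rule (x, y) -> (x + y) reduces the multiset to the sum of its
# elements.
import random

def gamma(multiset, condition, action):
    ms = list(multiset)
    while True:
        pairs = [(i, j) for i in range(len(ms)) for j in range(len(ms))
                 if i != j and condition(ms[i], ms[j])]
        if not pairs:                   # stable state: no reaction applies
            return ms
        i, j = random.choice(pairs)     # randomized choice of reactants
        x, y = ms[i], ms[j]
        for k in sorted((i, j), reverse=True):
            del ms[k]                   # consume the reactants
        ms.extend(action(x, y))         # produce the new elements

# Sum: any two elements react and are replaced by their sum.
result = gamma([1, 2, 3, 4], lambda x, y: True, lambda x, y: [x + y])
```

The order in which reactions fire is left unspecified, which is what makes the model naturally parallel: independent reactions could proceed concurrently.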
This article focuses on the effect of both process topology and load balancing on various programming models for SMP clusters and iterative algorithms. More specifically, we consider nested loop algorithms with constant flow dependencies that can be parallelized on SMP clusters with the aid of the tiling transformation. We investigate three parallel programming models: a popular monolithic message-passing implementation, as well as two hybrid ones that employ both message passing and multi-threading. We conclude that the selection of an appropriate mapping topology for the mesh of processes has a significant effect on overall performance, and provide an algorithm for specifying such an efficient topology according to the iteration space and data dependencies of the algorithm. We also propose static load-balancing techniques for distributing the computation between threads, which mitigate the disadvantage of the master thread assuming all inter-process communication due to limitations often imposed by the message-passing library. Both improvements are implemented as compile-time optimizations and are experimentally evaluated. An overall comparison of the above parallel programming styles on SMP clusters, based on micro-kernel experimental evaluation, is also provided.
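The static load-balancing idea can be sketched as follows (an assumed illustration, not the article's code): since the master thread must also handle all inter-process communication, it is statically assigned a smaller share of the tiles than the pure compute threads.

```python
# Static load balancing for a hybrid MPI + threads model: thread 0 (the
# master) carries the communication load, so it receives a reduced share
# of the computation tiles at compile time.
def balance_tiles(n_tiles, n_threads, master_share=0.5):
    """Split n_tiles among n_threads; the master's weight is scaled down
    by master_share to compensate for its communication duties."""
    weights = [master_share] + [1.0] * (n_threads - 1)
    total = sum(weights)
    counts = [int(n_tiles * w / total) for w in weights]
    counts[-1] += n_tiles - sum(counts)   # hand any remainder to the last thread
    return counts
```

Choosing `master_share` amounts to estimating the ratio of communication time to computation time per tile; a value of 0.5 assumes the master spends roughly half its time communicating.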
In this paper, we describe an undergraduate parallel programming course based upon networked workstations. The course is offered on the North Carolina Research and Education Network (NC-REN), a private telecommunications network which interconnects universities in North Carolina and provides multiway, face-to-face video and audio communications. Course materials are described and made available in a new textbook. Topics are divided into basic techniques and applications. In addition, extensive home page materials are described.
SKiPPER is a Skeleton-based parallel programming EnviRonment under development since 1996 at the LASMEA Laboratory, Blaise-Pascal University, France. The main goal of the project was to demonstrate the applicability of skeleton-based parallel programming techniques to the fast prototyping of reactive vision applications. This paper deals with the special features embedded in the latest version of the project: algorithmic skeleton nesting capabilities and a fully dynamic operating model. Through the case study of a complete and realistic image processing application, for which we point out the requirement for skeleton nesting, we present the operating model of this feature. The work described here is one of the few reported experiments showing the application of skeleton nesting facilities to the parallelisation of a realistic application, especially in the area of image processing. The application we have chosen is an appearance-based 3D face-tracking algorithm.
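Skeleton nesting can be illustrated with a minimal sketch (hypothetical; SKiPPER's actual skeletons target vision applications specified in CAML): skeletons are higher-order patterns, so a "farm" can appear as one stage of a "pipeline".

```python
# Two classic algorithmic skeletons composed by nesting: a farm (apply a
# worker to every item of a batch in parallel) used as a stage inside a
# pipeline (feed each stage's output into the next).
from concurrent.futures import ThreadPoolExecutor

def farm(worker, n_workers=4):
    """Skeleton: data-parallel application of `worker` to a batch."""
    def run(batch):
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            return list(pool.map(worker, batch))
    return run

def pipeline(*stages):
    """Skeleton: sequential composition of stages."""
    def run(data):
        for stage in stages:
            data = stage(data)
        return data
    return run

# Nesting: the farm skeleton is itself one stage of the pipeline.
process = pipeline(farm(lambda x: x * x), sum)
```

Because each skeleton is an ordinary function from data to data, nesting is just composition; the scheduling concerns that make nesting hard in a real runtime (as the paper discusses) are hidden here inside the thread pool.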
This paper describes an experience of designing and implementing a portal that provides students enrolled in an undergraduate parallel programming course with transparent remote access to supercomputing facilities. As these facilities are heterogeneous, located at different sites, and belong to different institutions, grid computing technologies have been used to overcome these issues. The result is a grid portal, based on a modular and easily extensible software architecture, that provides a uniform and user-friendly interface for students to work on their programming laboratory assignments.
The ParaScope parallel programming environment, developed to support scientific programming of shared-memory multiprocessors, includes a collection of tools that use global program analysis to help users develop and debug parallel programs. This paper focuses on ParaScope's compilation system, its parallel program editor, and its parallel debugging system. The compilation system extends the traditional single-procedure compiler by providing a mechanism for managing the compilation of complete programs. Thus, ParaScope can support both traditional single-procedure optimization and optimization across procedure boundaries. The ParaScope editor brings both compiler analysis and user expertise to bear on program parallelization. It assists the knowledgeable user by displaying and managing analysis and by providing a variety of interactive program transformations that are effective in exposing parallelism. The debugging system detects and reports timing-dependent errors, called data races, in execution of parallel programs. The system combines static analysis, program instrumentation, and run-time reporting to provide a mechanical system for isolating errors in parallel program executions. Finally, we describe a new project to extend ParaScope to support programming in Fortran D, a machine-independent parallel programming language intended for use with both distributed-memory and shared-memory parallel computers.
This paper describes Stardust, an environment for parallel programming on networks of heterogeneous machines. Stardust runs on distributed memory multicomputers and networks of workstations. Applications using Stardust can communicate both through message-passing and through distributed shared memory. Stardust includes a mechanism for application reconfiguration. This mechanism is used to balance the load of the machines hosting the application, as well as for tolerating machine restarts (anticipated or not). At reconfiguration time, application processes can migrate between heterogeneous machines and the number of application processes can vary (increase or decrease) depending on the available resources. Stardust is currently implemented on a heterogeneous system including an Intel Paragon running Mach/OSF1 and a set of Pentiums running Chorus/classiX. The paper details the design and implementation of Stardust, as well as its performance.