Concurrency and parallelism have long been viewed as important, but somewhat distinct concepts. While concurrency is extensively used to amortize latency (for example, in web- and database-servers, user interfaces, etc.), parallelism is traditionally used to enhance performance through execution on multiple functional units. Motivated by an evolving application mix and trends in hardware architecture, there has been a push toward integrating traditional programming models for concurrency and parallelism. Use of conventional threads APIs (POSIX, OpenMP) with messaging libraries (MPI), however, leads to significant programmability concerns, owing primarily to their disparate programming models. In this paper, we describe a novel API and associated runtime for concurrent programming, called MPI Threads (MPIT), which provides a portable and reliable abstraction of low-level threading facilities. We describe various design decisions in MPIT, their underlying motivation, and associated semantics. We provide performance measurements for our prototype implementation to quantify overheads associated with various operations. Finally, we discuss two real-world use cases: an asynchronous message queue and a parallel information retrieval system. We demonstrate that MPIT provides a versatile, low overhead programming model that can be leveraged to program large parallel ensembles.
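The asynchronous message queue mentioned as a use case can be pictured with a short sketch. The names below (MessageQueue, push, pop) are invented for illustration and are not the MPIT API; the sketch only shows, with standard C++ threads, the kind of blocking queue such a runtime would sit beneath.

```cpp
// Hypothetical sketch of an asynchronous message queue; this is NOT the MPIT
// API, only a standard C++ illustration of the use case described above.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

template <typename T>
class MessageQueue {            // invented name
public:
    void push(T msg) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(msg));
        }
        ready_.notify_one();    // wake one waiting consumer
    }
    T pop() {                   // blocks until a message is available
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [this] { return !queue_.empty(); });
        T msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }
private:
    std::mutex mutex_;
    std::condition_variable ready_;
    std::queue<T> queue_;
};

int main() {
    MessageQueue<std::string> q;
    std::thread consumer([&q] {
        for (int i = 0; i < 3; ++i)
            std::cout << "received: " << q.pop() << '\n';
    });
    q.push("a");
    q.push("b");
    q.push("quit");
    consumer.join();
}
```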
Concurrent programs are now an integral part of many software applications, since they can increase application performance by parallelizing computations. One approach to realizing concurrency is multithreaded programming. However, such systems are structurally complex, both in controlling parallelism (e.g., thread synchronization and resource control) and in the interactions between their components. Their design can therefore be difficult and their implementation error-prone, especially when the system at hand is large and complex. A Domain-Specific Modeling Language (DSML), one of the Model-Driven Development (MDD) approaches, tackles this problem. Since DSMLs provide a higher level of abstraction, they can reduce the complexity of concurrent programs. By raising the abstraction level and generating artifacts automatically, development productivity (in both the design and implementation phases) is increased, and efficiency is improved by reducing the likelihood of errors. Thus, in this paper a DSML for concurrent programs, called DSML4CP, is proposed to work at a higher level of abstraction than code. To this end, the concepts of concurrent programs and their relationships are captured in a metamodel, which provides the context for defining the abstract syntax and concrete syntax of DSML4CP. The new language is supported by a graphical modeling tool that can visualize different instance models for domain problems. To make the expressions of the language precise, static semantic checks are realized in the form of constraints. Finally, architectural code generation is carried out via model transformation rules using templates of concurrent programs. To increase the level of the DSML's leverage and to demonstrate the general support of concurrent programming by the D
C++ and Concurrent C are both upward-compatible supersets of C that provide data abstraction and parallel programming facilities, respectively. Although data abstraction facilities are important for writing concurrent programs, we did not provide data abstraction facilities in Concurrent C because we did not want to duplicate the C++ research effort. Instead, we decided that we would eventually integrate C++ and Concurrent C facilities to produce a language with both data abstraction and parallel programming facilities, namely, Concurrent C++. Data abstraction and parallel programming facilities are orthogonal. Despite this, the merger of Concurrent C and C++ raised several integration issues. In this paper, we will give introductions to C++ and Concurrent C, give two examples illustrating the advantages of using data abstraction facilities in concurrent programs, and discuss issues in integrating C++ and Concurrent C to produce Concurrent C++.
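The argument that data abstraction pays off in concurrent programs can be illustrated with a present-day C++ analogue (the paper's Concurrent C++ notation differs; the bounded buffer below is not taken from it): the buffer and its synchronization live behind one interface, so callers cannot bypass the locking protocol.

```cpp
// Modern C++ analogue of the data-abstraction argument, not Concurrent C++
// notation: the buffer and its synchronization are hidden behind one class.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class BoundedBuffer {
public:
    explicit BoundedBuffer(std::size_t capacity) : capacity_(capacity) {}

    void put(int item) {                       // blocks while the buffer is full
        std::unique_lock<std::mutex> lock(mutex_);
        not_full_.wait(lock, [this] { return items_.size() < capacity_; });
        items_.push(item);
        not_empty_.notify_one();
    }

    int get() {                                // blocks while the buffer is empty
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !items_.empty(); });
        int item = items_.front();
        items_.pop();
        not_full_.notify_one();
        return item;
    }

private:
    std::mutex mutex_;                         // callers never see the lock
    std::condition_variable not_full_, not_empty_;
    std::queue<int> items_;
    const std::size_t capacity_;
};

int main() {
    BoundedBuffer buffer(4);
    std::thread producer([&] { for (int i = 0; i < 8; ++i) buffer.put(i); });
    std::thread consumer([&] { for (int i = 0; i < 8; ++i) std::cout << buffer.get() << ' '; });
    producer.join();
    consumer.join();
    std::cout << '\n';
}
```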
This paper demonstrates the use of the SR concurrent programming language for discrete event simulation. SR provides a rich collection of synchronization mechanisms, whose use can lead to programs that are simpler and more efficient than those constrained to employ only one synchronization mechanism. Several SR solutions to a simulation problem are presented and contrasted with an Ada solution to the same problem. The paper also introduces a technique that exploits asynchronous message passing to program concise solutions to several problems involving lists. In the context of the simulation problem, this technique is used to manage the event list and the list of blocked processes. The technique can also be applied to several other concurrent programming problems. The results of this paper should be of interest both to programmers using concurrent programming languages and to language designers.
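The list-management technique can be sketched outside SR as well; the code below is a loose analogue in standard C++ rather than SR's asynchronous message passing, with invented names. A single manager thread owns the event list, and every insertion or query reaches it as a message, so the list itself needs no locking discipline of its own.

```cpp
// Illustrative sketch only (standard C++ threads, not SR): a single manager
// thread owns the event list, and all access arrives as asynchronous messages.
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct Event { double time; std::string name; };
struct ByTime { bool operator()(const Event& a, const Event& b) const { return a.time > b.time; } };
using EventList = std::priority_queue<Event, std::vector<Event>, ByTime>;
using Request = std::function<void(EventList&)>;

int main() {
    std::mutex m;
    std::condition_variable cv;
    std::queue<Request> inbox;           // the manager's asynchronous mailbox
    bool done = false;

    std::thread manager([&] {
        EventList events;                // only this thread ever touches it
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return !inbox.empty() || done; });
            if (inbox.empty()) break;    // shut down once drained
            Request req = std::move(inbox.front());
            inbox.pop();
            lock.unlock();
            req(events);
        }
    });

    auto send = [&](Request req) {       // any thread may post a request
        { std::lock_guard<std::mutex> lock(m); inbox.push(std::move(req)); }
        cv.notify_one();
    };

    send([](EventList& ev) { ev.push({2.5, "arrival"}); });
    send([](EventList& ev) { ev.push({1.0, "departure"}); });
    send([](EventList& ev) {
        std::cout << "next event: " << ev.top().name << " at t=" << ev.top().time << '\n';
    });

    { std::lock_guard<std::mutex> lock(m); done = true; }
    cv.notify_one();
    manager.join();
}
```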
Fine-grained locks are frequently used to mitigate lock contention in multithreaded programs running on shared-memory multicore processors. However, a concurrent program based on fine-grained locks is hard to write, especially for beginners in a concurrent programming course. Helping participants learn fine-grained locking has therefore become increasingly important and urgent. To this end, this paper presents a novel refactoring-based approach to enhance the learning effectiveness of fine-grained locks. Two refactoring tools are introduced to provide illustrative examples for participants by automatically converting original coarse-grained locks into fine-grained ones. Learning effectiveness and limitations are discussed when the refactoring tools are applied. We evaluate students' outcomes with two benchmarks and compare their performance in Fall 2018 with that in Fall 2019. We also conduct experiments on students' outcomes by dividing them into two groups (A and B) in a controlled classroom, where participants in group A learn fine-grained locks with the help of the refactoring tools while those in group B do not have access to these tools. Evaluation of the results after teaching with the refactoring-based approach reveals a significant improvement in the students' learning.
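The coarse-to-fine refactoring that such tools automate can be shown in miniature (the hash-table classes below are invented for illustration and are not the paper's benchmarks): a single table-wide mutex is split into one mutex per bucket, so threads that touch different buckets no longer contend.

```cpp
// Illustration of the coarse-to-fine lock refactoring idea (names invented).
#include <array>
#include <list>
#include <mutex>
#include <utility>

// Coarse-grained: one lock serializes every operation on the whole table.
class CoarseTable {
public:
    void put(int key, int value) {
        std::lock_guard<std::mutex> lock(mutex_);        // table-wide lock
        buckets_[bucket(key)].push_back({key, value});
    }
private:
    static std::size_t bucket(int key) { return static_cast<std::size_t>(key) % 16; }
    std::mutex mutex_;
    std::array<std::list<std::pair<int, int>>, 16> buckets_;
};

// Fine-grained: one lock per bucket, so disjoint keys proceed in parallel.
class FineTable {
public:
    void put(int key, int value) {
        std::size_t b = bucket(key);
        std::lock_guard<std::mutex> lock(locks_[b]);     // per-bucket lock
        buckets_[b].push_back({key, value});
    }
private:
    static std::size_t bucket(int key) { return static_cast<std::size_t>(key) % 16; }
    std::array<std::mutex, 16> locks_;
    std::array<std::list<std::pair<int, int>>, 16> buckets_;
};
```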
Concurrent programming can be applied to the problem of computer graphic simulation of radiation treatment of tumors (radiation treatment planning). Running several tasks or programs simultaneously on behalf of a single user provides a big improvement over the traditional sequential approach, in which editing a treatment plan and computing and displaying dose distributions are separate operations that must be invoked by explicit commands. With our system, the user sees isodose contours being updated automatically and continuously as the plan is edited; this greatly facilitates plan optimization. The complexity of parallel processing has resulted in a "conventional wisdom" which discourages this technique. The usual approach is to have parallel processes share a common global data structure, which makes interaction hard to control and discourages modularity and data abstraction. We have developed an alternative approach based on message streams which instead enhances modularity and data abstraction while still providing the advantages of parallel processing. The system is very reliable and is used routinely in a practical clinical environment.
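The message-stream organization can be sketched with standard C++ threads; the stage names below are illustrative and not taken from the clinical system. The plan editor sends plan revisions downstream to the dose module instead of mutating a shared global structure, so each stage keeps its state private.

```cpp
// Sketch of the message-stream organization (illustrative names, standard C++):
// the editor sends plan revisions downstream instead of sharing global state.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>

template <typename T>
class Stream {                                  // a simple one-way message stream
public:
    void send(T value) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(value)); }
        cv_.notify_one();
    }
    void close() {
        { std::lock_guard<std::mutex> lock(m_); closed_ = true; }
        cv_.notify_one();
    }
    std::optional<T> receive() {                // empty optional => stream closed
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty() || closed_; });
        if (q_.empty()) return std::nullopt;
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
    bool closed_ = false;
};

int main() {
    Stream<int> edits;                          // plan revisions flow downstream
    std::thread doseModule([&] {                // recomputes on every revision
        while (auto revision = edits.receive())
            std::cout << "recomputing isodose contours for plan revision "
                      << *revision << '\n';
    });
    for (int revision = 1; revision <= 3; ++revision)
        edits.send(revision);                   // the editor never touches dose state
    edits.close();
    doseModule.join();
}
```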
In Pettorossi and Skowron (1983) a recursive-equations language is introduced. Its operational semantics is specified by means of computing agents which communicate and exchange messages. Those communications are, so to speak, zero-order, in the sense that the exchanged messages are values of a data structure, possibly defined by the programmer.
In this paper we extend that approach and also consider ‘higher-order’ communications by allowing the exchange of agents’ behaviours, i.e. sets of computations, among computing agents. This extension leads to a new programming methodology which makes use of proofs about computing agents’ behaviours and their related strategies.
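A rough analogue of such higher-order communication in a mainstream language (nothing below is the recursive-equations notation of the paper) is to make the message itself a behaviour, i.e. a function, so the receiving agent executes computations it never contained:

```cpp
// Illustrative analogue only: the behaviour (a closure) is itself the message.
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

int main() {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<int(int)>> behaviours;   // higher-order messages
    bool done = false;

    std::thread agent([&] {
        int state = 1;                                // the agent's local value
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return !behaviours.empty() || done; });
            if (behaviours.empty()) break;
            auto behaviour = std::move(behaviours.front());
            behaviours.pop();
            lock.unlock();
            state = behaviour(state);                 // run the received behaviour
            std::cout << "state is now " << state << '\n';
        }
    });

    auto send = [&](std::function<int(int)> f) {
        { std::lock_guard<std::mutex> lock(m); behaviours.push(std::move(f)); }
        cv.notify_one();
    };
    send([](int x) { return x + 41; });               // the computation itself travels
    send([](int x) { return x * 2; });
    { std::lock_guard<std::mutex> lock(m); done = true; }
    cv.notify_one();
    agent.join();
}
```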
C is a well-known language for systems programming in UNIX systems. Its concepts favor efficiency over safety and, therefore, an extension of C for concurrent programming also has to focus on an efficient implementation rather than on very safe programming concepts. We present processes, modified ports, and modified signals as concepts for extending C. These concepts are defined close to hardware structures such as mailboxes and interrupts and can therefore be implemented efficiently. On the other hand, we show that many classical concepts of concurrent programming can be simulated by ports and signals, so these primitives are sufficiently powerful.
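The claim that classical primitives can be simulated by ports is easy to illustrate, although the sketch below uses standard C++ in place of the proposed C extension, so the Mailbox and TokenSemaphore names are invented: a semaphore is just a mailbox of tokens, with P as receive and V as send, and a mutex is the binary case initialized with a single token.

```cpp
// Illustrative sketch (invented names, standard C++ rather than the C extension):
// a semaphore simulated by a mailbox of tokens, P() = receive, V() = send.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

struct Token {};

class Mailbox {                                  // stands in for a "port"
public:
    void send(Token t) {
        { std::lock_guard<std::mutex> lock(m_); messages_.push(t); }
        cv_.notify_one();
    }
    Token receive() {                            // blocks until a message arrives
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !messages_.empty(); });
        Token t = messages_.front();
        messages_.pop();
        return t;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Token> messages_;
};

class TokenSemaphore {                           // classical semaphore via the mailbox
public:
    explicit TokenSemaphore(int initial) {
        for (int i = 0; i < initial; ++i) box_.send(Token{});
    }
    void P() { box_.receive(); }                 // acquire: take a token
    void V() { box_.send(Token{}); }             // release: return a token
private:
    Mailbox box_;
};

int main() {
    TokenSemaphore lock(1);                      // binary case acts as a mutex
    int counter = 0;
    auto worker = [&] {
        for (int i = 0; i < 1000; ++i) { lock.P(); ++counter; lock.V(); }
    };
    std::thread a(worker), b(worker);
    a.join(); b.join();
    std::cout << "counter = " << counter << '\n';  // always 2000
}
```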
SINA is an object-oriented language for distributed and concurrent programming. The primary focus of this paper is on the object-oriented concurrent programming mechanisms of SINA and their implementation. This paper presents the SINA constructs for concurrent programming and inter-object communication, some illustrative examples and a message-based implementation model for SINA that we have used in our current impleme
The MODEL approach, which allows human designers to have a nonprocedural implementation-independent view of a concurrent system, is described. Designers specify the problem to be solved by representing design concepts by variables and composing equations that define the variables. They partition the overall problem into modules that are candidates for being computed concurrently, each module consisting of a subset of equations. The translation from the specification into a respective computation by an object computer architecture is performed by language processors, and this methodology supports independence in specifying and testing individual modules. Automatic implementation of a specified system is performed by the MODEL system on two levels. On the global level, the Configurator accepts as input a graph of the network of subsystems, modules, files, and their interconnections. On the local level, the Compiler accepts as input an individual module specification. The overall methodology is described through the dining philosophers example, which represents resource allocation.
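For reference, the dining philosophers problem used as the running example can be rendered in a conventional threaded style (ordinary C++ below, not the MODEL equational specification): each philosopher acquires its two forks in ascending index order, a standard way to rule out the circular wait that causes deadlock in this resource-allocation problem.

```cpp
// Conventional threaded dining philosophers (not MODEL notation): forks are
// acquired in ascending index order to rule out circular wait, hence deadlock.
#include <algorithm>
#include <array>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    constexpr int kPhilosophers = 5;
    std::array<std::mutex, kPhilosophers> forks;
    std::mutex io;                                        // keeps output readable

    std::vector<std::thread> philosophers;
    for (int i = 0; i < kPhilosophers; ++i) {
        philosophers.emplace_back([&, i] {
            int first = std::min(i, (i + 1) % kPhilosophers);   // lower-indexed fork
            int second = std::max(i, (i + 1) % kPhilosophers);  // higher-indexed fork
            for (int meal = 0; meal < 3; ++meal) {
                std::lock_guard<std::mutex> lo(forks[first]);   // always lower first
                std::lock_guard<std::mutex> hi(forks[second]);  // then higher
                std::lock_guard<std::mutex> out(io);
                std::cout << "philosopher " << i << " eats meal " << meal << '\n';
            }
        });
    }
    for (auto& p : philosophers) p.join();
}
```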