ISBN:
(Print) 0780335295
This paper describes the DOSMOS(1) parallel programming environment. Based on a DSM layer, this system has been specially designed to ensure scalability and efficiency. Several novel features are introduced, such as the grouping of processes, the possibility of mixing message-passing (PVM) code and DSM code, the definition of optimized weak consistency protocols, and the integration of monitoring facilities. First experiments on networks of workstations show the effectiveness of these features.
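As a rough illustration of the hybrid style the abstract describes, the sketch below mixes standard PVM 3 message-passing calls with placeholder DSM primitives. The dsm_* functions are hypothetical stand-ins, not the actual DOSMOS interface, and the message tag is arbitrary.

```c
/* Illustrative sketch only: the dsm_* names below are hypothetical placeholders,
 * not the DOSMOS API; the PVM calls are standard PVM 3 routines. */
#include <stdio.h>
#include <pvm3.h>

/* Placeholder DSM primitives: local stubs standing in for a real DSM layer. */
static int shared_counters[64];                          /* stand-in for a DSM-mapped shared object */
static void dsm_acquire(const char *obj) { (void)obj; }  /* would begin a consistent access */
static void dsm_release(const char *obj) { (void)obj; }  /* would publish updates (weak consistency) */
static int *dsm_map(const char *obj, int n) { (void)obj; (void)n; return shared_counters; }

int main(void)
{
    int mytid = pvm_mytid();                 /* enroll this process in the PVM virtual machine */
    int *counters = dsm_map("counters", 64); /* obtain a view of a shared object */

    /* DSM-style update: other processes would see it only after the release */
    dsm_acquire("counters");
    counters[mytid % 64] += 1;
    dsm_release("counters");

    /* message-passing style notification back to the spawning task, if any */
    int parent = pvm_parent();
    if (parent != PvmNoParent) {
        int done = 1;
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&done, 1, 1);
        pvm_send(parent, 99);                /* msgtag 99 chosen arbitrarily */
    }

    pvm_exit();
    return 0;
}
```

The point of the sketch is only the structure the abstract hints at: shared-object accesses bracketed by acquire/release operations, as a weak consistency protocol requires, coexisting in the same program with explicit PVM sends.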
Run-time work distribution in parallel programming systems is usually accomplished through the use of dynamic scheduling heuristics. Their sensitivity to run-time information such as global work-load, task granularity, data dependencies, and locality of information, among others, is essential when trying to optimize performance. Adaptive schedulers that base their decisions on feedback from the system are therefore of special importance. We have developed and used a general-purpose parallel programming system, the pSystem, which has also served as a test-bed environment on which we have experimented with and studied the performance of distinct scheduling heuristics. Currently, we have two versions of the system: one based on Unix processes, and the other on Solaris threads. Threads (particularly user-level threads) are usually associated with low execution overheads, since they require minimal interaction with the operating system kernel. This suggests that lower-grain parallelism may be more effectively exploited with a thread-based parallel programming system. Performance analysis of both implementations over a set of well-known benchmarks, with various schedulers, shows that threads scale better under higher system loads and/or when the granularity of the tasks being executed is below a given threshold value. This paper starts with a description of the design and implementation of the pSystem computational model, followed by a detailed description of several experiments and the analysis of their results. (C) 1997 John Wiley & Sons, Ltd.
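The following minimal sketch, which is not the pSystem API, illustrates the granularity-threshold idea discussed above: tasks estimated below an assumed threshold run inline, while coarser tasks are handed to a POSIX thread. The threshold value, the load figures, and all function names are illustrative assumptions.

```c
/* Hypothetical sketch of a feedback-sensitive spawn decision (not the pSystem scheduler). */
#include <pthread.h>
#include <stdio.h>

#define GRAIN_THRESHOLD 1000   /* assumed cut-off, in abstract "work units" */

typedef struct {
    long work_units;           /* estimated task granularity */
} task_t;

static void *run_task(void *arg)
{
    task_t *t = (task_t *)arg;
    printf("executed task of %ld units\n", t->work_units);  /* stand-in for real work */
    return NULL;
}

/* Spawn only when the task is coarse enough and the current load leaves room for it. */
static void schedule(task_t *t, int active_workers, int max_workers)
{
    if (t->work_units < GRAIN_THRESHOLD || active_workers >= max_workers) {
        run_task(t);                          /* inline: spawning would cost more than it saves */
    } else {
        pthread_t tid;
        if (pthread_create(&tid, NULL, run_task, t) == 0)
            pthread_join(tid, NULL);          /* joined immediately only to keep the sketch self-contained */
        else
            run_task(t);                      /* fall back to inline execution on failure */
    }
}

int main(void)
{
    task_t small = { 100 }, large = { 50000 };
    schedule(&small, 0, 4);
    schedule(&large, 0, 4);
    return 0;
}
```

The inline-execution branch reflects the trade-off the abstract measures: when task granularity drops below some threshold, even a low thread-creation overhead can outweigh the benefit of off-loading the work, which is why granularity and system load are fed back into the scheduling decision.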