Collaboration is a key factor in successful knowledge management. Recently, wikis have become a popular solution for distributed and collaborative knowledge management. However, most wikis do not appropriately support...
ISBN (Print): 9781424464432
The lag of parallel programming models and languages behind the advance of heterogeneous many-core processors has left a gap between the computational capability of modern systems and the ability of applications to exploit them. Emerging programming models, such as CUDA and OpenCL, force developers to explicitly partition applications into components (kernels) and assign them to accelerators in order to utilize them effectively. An accelerator is a processor with a different ISA and micro-architecture than the main CPU. These static partitioning schemes are effective when targeting a system with only a single accelerator. However, they are not robust to changes in the number of accelerators or the performance characteristics of future generations of accelerators. In previous work, we presented the Harmony execution model for computing on heterogeneous systems with several CPUs and accelerators. In this paper, we extend Harmony to target systems with multiple accelerators using control speculation to expose parallelism. We refer to this technique as Kernel Level Speculation (KLS). We argue that dynamic parallelization techniques such as KLS are sufficient to scale applications across several accelerators based on the intuition that there will be fewer distinct accelerators than cores within each accelerator. In this paper, we use a complete prototype of the Harmony runtime that we developed to explore the design decisions and trade-offs in the implementation of KLS. We show that KLS improves parallelism to a sufficient degree while retaining a sequential programming model. We accomplish this by demonstrating good scaling of KLS on a highly heterogeneous system with three distinct accelerator types and ten processors.
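The abstract leaves the mechanics of KLS to the paper itself, but a minimal sketch can convey the idea of control speculation at kernel granularity. The Python toy below is our own construction, not Harmony's API (names such as run_with_speculation are ours): it launches the predicted side of a branch on a second worker while the branch-deciding kernel is still running, commits the speculative result on a correct prediction, and discards it otherwise.

# Toy illustration of kernel-level control speculation (not Harmony's API).
# A "kernel" here is just a pure Python function on an immutable state dict.
from concurrent.futures import ThreadPoolExecutor

def branch_kernel(state):      # decides which side of the branch is taken
    return state["x"] % 2 == 0

def then_kernel(state):        # predicted path
    return {**state, "y": state["x"] * 2}

def else_kernel(state):        # fallback path
    return {**state, "y": state["x"] + 1}

def run_with_speculation(state, predict_taken=True):
    with ThreadPoolExecutor(max_workers=2) as pool:    # two "accelerators"
        cond_future = pool.submit(branch_kernel, state)
        spec_kernel = then_kernel if predict_taken else else_kernel
        spec_future = pool.submit(spec_kernel, state)  # runs in parallel
        taken = cond_future.result()
        if taken == predict_taken:
            return spec_future.result()                # commit the speculation
        spec_future.result()                           # misprediction: discard
        return (else_kernel if predict_taken else then_kernel)(state)

print(run_with_speculation({"x": 4}))   # prediction correct: speculative result committed
print(run_with_speculation({"x": 3}))   # misprediction: fallback path re-executed

Because the kernels are side-effect free and return fresh state, rollback here is simply dropping the speculative result; a real runtime would instead have to buffer or checkpoint accelerator memory.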
ISBN (Print): 1902956966
The article offers an original approach called Controller Agent for Constraints Satisfaction (CACS). The approach combines a multi-agent architecture with constraint solvers in a unified framework that expresses the major features of the Swarm Intelligence approach and replaces the traditional stochastic adaptation of a swarm of autonomous agents with constraint-driven adaptation. We describe the major theoretical, methodological, and software engineering principles for composing constraints and agents within a single multi-agent system, as well as the application of our approach to modelling a particular logistics problem.
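As a hedged illustration of what constraint-driven adaptation can look like in place of stochastic swarm updates, consider this toy Python sketch (our own construction, not CACS itself): each agent owns one variable of a small all-different constraint problem and repeatedly moves to the value that minimises its own conflicts, a min-conflicts rule standing in for the controller-driven adaptation.

# Hypothetical sketch of constraint-driven agent adaptation (names ours, not CACS's).
import random

DOMAIN = range(4)

def conflicts(agent, value, assignment):
    # Number of other agents whose current value collides with `value`.
    return sum(1 for other, v in assignment.items()
               if other != agent and v == value)

def solve(num_agents=4, max_rounds=100, seed=0):
    rng = random.Random(seed)
    assignment = {a: rng.choice(list(DOMAIN)) for a in range(num_agents)}
    for _ in range(max_rounds):
        violated = [a for a in assignment
                    if conflicts(a, assignment[a], assignment) > 0]
        if not violated:
            return assignment                 # all constraints satisfied
        agent = rng.choice(violated)          # the "controller" picks an agent
        assignment[agent] = min(DOMAIN,
                                key=lambda v: conflicts(agent, v, assignment))
    return None

print(solve())   # prints a conflict-free assignment (exact values depend on the seed)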
Service Level Agreements (SLAs) play an important role in guaranteeing successful collaborations among autonomous entities in Internet-based Virtual Computing Environment (iVCE). However, traditional static and predef...
Cloud computing is a way of computing, via the Internet, that broadly shares computer resources instead of using software or storage on a local PC. Cloud computing is an outgrowth of the ease-of-access to remote compu...
ISBN (Print): 9781424469871
Cloud computing is the "new hot" topic in IT. It combines the maturity of Web technologies (networking, APIs, the semantic Web 2.0, and languages, protocols, and standards such as WSDL, SOAP, REST, WS-BPEL, WS-CDL, IPSEC, etc.), the robustness of geographically distributed computing paradigms (Network, Internet, and Grid computing), and self-management capabilities (Autonomic computing), with the capacity to manage quality of service by monitoring, metering, quantifying, and billing computing resources and costs (Utility computing). Together these have made it possible and cost-effective for businesses, small and large, to host data and application centers completely virtually... in the Cloud. Our idea of the Cloud proposes a new dimension of computing in which everyone, from single users to communities and enterprises, can, on the one hand, share resources and services in a transparent way and, on the other hand, access and use such resources and services adaptively to their requirements. Such an enhanced concept of the Cloud, enriching the original one with Volunteer computing and interoperability challenges, has been proposed and synthesized in Cloud@Home. The complex infrastructure implementing Cloud@Home has to be supported by an adequate distributed middleware able to manage it. In order to develop such complex distributed software, in this paper we apply software engineering principles such as rigor, separation of concerns, and modularity. Our idea is, starting from a software engineering approach, to identify and separate concerns and tasks, and then to provide both the software middleware architecture and the hardware infrastructure following the hw/sw co-design technique widely used in embedded systems. In this way we want to primarily identify and specify the Cloud@Home middleware architecture and its deployment into a feasible infrastructure; secondly, we want to propose the development process we follow, based on hardware/software co-design, in distributed computing contexts, de…
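To make the separation-of-concerns principle concrete, here is a purely illustrative Python sketch; none of these class names come from Cloud@Home. Heterogeneous resource providers, whether volunteer PCs or commercial clouds, sit behind one small interface, so the scheduling concern stays independent of the provisioning concern.

# Illustrative sketch only: separating middleware concerns behind small interfaces.
from abc import ABC, abstractmethod

class ResourceProvider(ABC):
    # Anything that can donate compute: a volunteer PC or a commercial cloud.
    @abstractmethod
    def capacity(self) -> int: ...
    @abstractmethod
    def run(self, task: str) -> str: ...

class VolunteerNode(ResourceProvider):
    def __init__(self, cores: int): self.cores = cores
    def capacity(self) -> int: return self.cores
    def run(self, task: str) -> str: return f"volunteer ran {task}"

class CommercialCloud(ResourceProvider):
    def capacity(self) -> int: return 128
    def run(self, task: str) -> str: return f"cloud ran {task}"

class Scheduler:
    # The placement policy knows only the interface, not the provider type.
    def __init__(self, providers): self.providers = providers
    def submit(self, task: str) -> str:
        best = max(self.providers, key=lambda p: p.capacity())
        return best.run(task)

sched = Scheduler([VolunteerNode(4), CommercialCloud()])
print(sched.submit("render-frame-42"))   # -> "cloud ran render-frame-42"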
ISBN (Print): 9789532900217
Distributed embedded systems usually have strict quality requirements that need to be verified, and the allocation of software to hardware needs to be considered throughout the whole development process. In this paper, we present the use of simulation in analyzing the development process of embedded systems using the CARMA principle, which combines two paradigms: Component-Based Software Engineering and Model-Driven Development. In the development process, two types of parallel hardware modeling are distinguished: modeling of a virtual hardware structure and modeling of the physical structure. Two kinds of verification activities are introduced: milestone verification, which integrates the product requirements into the process through frequent analyses and measurements of the development artifacts, and exploratory analysis, which is informal and carried out by individual developers, similar to debugging. Based on the process model, we have constructed a queuing network model, used in simulations to further explore the characteristics of the development model. Initial simulations indicate that by increasing the amount of analysis and verification in a project, more errors are found with the same amount of effort and time. There are also dependencies on project risks and on the strength of the analysis tool support.
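The queuing network model itself is in the paper; as a rough stand-in, the Monte Carlo toy below (the model structure and every number are our assumptions, not the paper's) reproduces the qualitative claim: with a fixed effort budget, shifting a larger share of it to verification finds more of the injected errors.

# Toy Monte Carlo version of the paper's claim; all parameters are invented.
import random

def simulate(verification_share, budget=1000.0, seed=1):
    rng = random.Random(seed)
    dev_effort = budget * (1 - verification_share)
    ver_effort = budget * verification_share
    errors = int(dev_effort * 0.05)            # ~1 injected error per 20 effort units
    reviews = int(ver_effort / 2.0)            # one review costs 2 effort units
    found, remaining = 0, errors
    for _ in range(reviews):
        if remaining and rng.random() < 0.10:  # 10% chance a review finds a bug
            found += 1
            remaining -= 1
    return errors, found

for share in (0.1, 0.2, 0.3, 0.4):
    injected, found = simulate(share)
    print(f"verification share {share:.0%}: {found}/{injected} errors found")

With these assumed rates, both the count and the fraction of errors found grow with the verification share, matching the simulation result the abstract reports.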
The number of embedded systems in our daily lives that are distributed, hidden, and ubiquitous continues to increase. Many of them are safety-critical. To provide additional or better functionalities, they are becomin...
There is building interest in using FPGAs as accelerators for high-performance computing, but existing systems for programming them are so far inadequate. In this paper we propose a soft processor programming model an...
Dynamic Binary Translation (DBT) is widely used, but it suffers from substantial overhead. Several methods have been taken to improve its performance, such as linking/chaining, building superblocks according to profiling, and/...
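Although the abstract is truncated here, the linking/chaining idea it names can be sketched with a toy dispatcher (our own toy "ISA" and names, not a real DBT engine): once a translated block has run, it caches a direct link to its successor, so later executions skip the costly dispatcher lookup.

# Toy dispatcher illustrating block linking/chaining in a DBT-style loop.
# Each "guest block" computes a new state and names its successor block.
PROGRAM = {
    "entry": lambda st: (st + 1, "loop"),
    "loop":  lambda st: (st * 2, "loop" if st < 8 else "exit"),
    "exit":  lambda st: (st, None),
}

class TranslatedBlock:
    def __init__(self, name):
        self.name = name
        self.links = {}                    # chained successors, filled lazily

    def execute(self, state):
        return PROGRAM[self.name](state)   # the "translated" body

def run(entry, state):
    cache, dispatches = {}, 0
    block_name, block = entry, None
    while block_name is not None:
        if block is not None and block_name in block.links:
            block = block.links[block_name]        # chained: skip the dispatcher
        else:
            dispatches += 1                        # slow path: full lookup
            nxt = cache.setdefault(block_name, TranslatedBlock(block_name))
            if block is not None:
                block.links[block_name] = nxt      # patch in the chain
            block = nxt
        state, block_name = block.execute(state)
    return state, dispatches

print(run("entry", 0))   # -> (16, 4): six block executions, only four dispatcher lookups

The self-loop is chained after its first dispatch, which is exactly where a real translator saves the most: hot loops stop bouncing through the dispatcher on every iteration.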