ISBN: (Print) 9781728112466
Static program analysis is widely used in various application areas to solve many practical problems. Although researchers have made significant achievements in static analysis, it remains highly challenging to perform sophisticated interprocedural analysis on large-scale modern software. The underlying reason is that interprocedural analysis for large-scale modern software is highly computation- and memory-intensive, leading to poor scalability. We aim to tackle the scalability problem by proposing a novel big data solution for sophisticated static analysis. Specifically, we propose a data-parallel algorithm and a join-process-filter computation model for CFL-reachability based interprocedural analysis and develop an efficient distributed static analysis engine in the cloud, called BigSpa. Our experiments validated that BigSpa running on a cluster scales well, performing precise interprocedural analyses on millions of lines of code and running an order of magnitude or more faster than existing state-of-the-art analysis tools.
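The core computation the abstract refers to can be illustrated with a minimal CFL-reachability worklist fixpoint. This is a generic sketch, not BigSpa's actual data-parallel implementation; the grammar (unary productions `X -> a` and binary productions `Z -> X Y`) and the edge encoding are assumptions for illustration. The "join" step of the join-process-filter model corresponds to combining adjacent edges under a binary production.

```python
# Illustrative CFL-reachability worklist sketch (not BigSpa's actual
# algorithm). Edges are (src, label, dst); unary maps a terminal label
# to a nonterminal; binary maps (X, Y) to Z for productions Z -> X Y.
from collections import deque

def cfl_reachability(edges, unary, binary):
    closure = set(edges)
    work = deque(edges)
    while work:
        u, x, v = work.popleft()
        new = set()
        if x in unary:                      # apply X -> a
            new.add((u, unary[x], v))
        for a, y, b in list(closure):       # join with adjacent edges
            if b == u and (y, x) in binary:
                new.add((a, binary[(y, x)], v))
            if a == v and (x, y) in binary:
                new.add((u, binary[(x, y)], b))
        for e in new:                       # filter out known edges
            if e not in closure:
                closure.add(e)
                work.append(e)
    return closure
```

For example, with the hypothetical grammar `S -> A B`, `A -> a`, `B -> b`, the path 1 -a-> 2 -b-> 3 yields the summary edge (1, 'S', 3).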
ISBN: (Print) 9780769549712
We consider the closed nesting and checkpointing model for transactions in fault-tolerant distributed transactional memory (DTM). The closed nesting model allows inner-nested transactions to be aborted (in the event of a transactional conflict) without aborting the parent transaction, while checkpointing allows transactions to roll back to a previous execution state, potentially improving concurrency over flat nesting. We consider a quorum-based replicated model for fault-tolerant DTM and present algorithms to support closed nesting and checkpointing. The algorithms use incremental validation to avoid communication overhead on commit, and ensure 1-copy equivalence. Our experimental studies, using a Java DTM implementation of the algorithms on micro and macro benchmarks, reveal the conditions under which they improve transactional throughput over flat nesting, as well as their relative advantages and disadvantages.
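The checkpointing idea above can be sketched with a write log and savepoints: on a conflict, the transaction discards only the writes made since the last checkpoint rather than aborting entirely. This is a minimal single-node sketch under assumed semantics; it omits the paper's quorum replication, incremental validation, and 1-copy equivalence machinery.

```python
# Minimal checkpointing-transaction sketch (illustrative; not the
# paper's quorum-based DTM algorithm). Writes are buffered in a log;
# checkpoints mark log positions that rollback can return to.
class Transaction:
    def __init__(self, store):
        self.store = store        # shared backing store (a dict here)
        self.writes = []          # ordered (key, value) write log
        self.checkpoints = []     # saved indices into the write log

    def write(self, key, value):
        self.writes.append((key, value))

    def checkpoint(self):
        self.checkpoints.append(len(self.writes))

    def rollback(self):
        """Undo back to the last checkpoint; full abort if none."""
        mark = self.checkpoints.pop() if self.checkpoints else 0
        del self.writes[mark:]

    def commit(self):
        for key, value in self.writes:
            self.store[key] = value
```

On conflict, a flat-nested transaction would restart from scratch; here only the post-checkpoint suffix of work is repeated.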
ISBN: (Print) 9781509035052
Discrete Event Simulation is a widely used technique for modeling and analyzing complex systems in many fields of science and engineering. The increasingly large size of simulation models poses a serious computational challenge, since the time needed to run a simulation can be prohibitively large. For this reason, Parallel and Distributed Simulation techniques have been proposed to take advantage of the multiple execution units found in multicore processors, clusters of workstations, and HPC systems. The current generation of HPC systems includes hundreds of thousands of computing nodes and a vast amount of ancillary components. Despite improvements in manufacturing processes, failures of some components are frequent, and the situation will get worse as larger systems are built. In this paper we describe FT-GAIA, a software-based fault-tolerant extension of the GAIA/ARTÌS parallel simulation middleware. FT-GAIA transparently replicates simulation entities and distributes them on multiple execution nodes. This allows the simulation to tolerate crash failures of computing nodes; furthermore, FT-GAIA offers some protection against Byzantine failures, since synchronization messages are replicated as well, so that the receiving entity can identify and discard corrupted messages. We provide an experimental evaluation of FT-GAIA on a running prototype. Results show that a high degree of fault tolerance can be achieved, at the cost of a moderate increase in the computational load of the execution units.
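The Byzantine-protection mechanism described, replicated messages letting the receiver discard corrupted copies, amounts to majority voting over the replicas' payloads. A hedged sketch, assuming replicas send identical payloads when correct (the function name and interface are hypothetical, not FT-GAIA's API):

```python
# Majority voting over replicated message copies (illustrative sketch
# of the idea, not FT-GAIA's implementation). A corrupted minority of
# copies is outvoted; without a strict majority, no value is accepted.
from collections import Counter

def vote(copies):
    """Return the payload sent by a strict majority of replicas,
    or None if no strict majority exists."""
    if not copies:
        return None
    payload, count = Counter(copies).most_common(1)[0]
    return payload if count > len(copies) // 2 else None
```

With 2f+1 replicas, up to f corrupted copies can be tolerated per message.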
ISBN: (Print) 9781424416936
Technical advances are enabling a pervasive computational ecosystem that integrates computing infrastructures with embedded sensors and actuators, and are giving rise to a new paradigm for monitoring, understanding, and managing natural and engineered systems - one that is information/data-driven. This research investigates programming systems for sensor-driven applications. It addresses abstractions and runtime mechanisms for integrating sensor systems with computational models for scientific processes, as well as for in-network data processing, e.g., aggregation, adaptive interpolation and assimilation. The current status of this research, as well as initial results are presented.
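The in-network data processing the abstract mentions, aggregation in particular, can be sketched as combining partial aggregates up a routing tree, so each node forwards one summary instead of every raw sample. This is a generic illustration of in-network aggregation, not the paper's programming system; the (sum, count) encoding is an assumption chosen so that a mean can be computed at the sink.

```python
# Illustrative in-network aggregation sketch: each sensor node merges
# its local readings with the aggregates received from its children
# and forwards a single (sum, count) pair toward the sink.
def aggregate(readings, children_aggs):
    """Combine local readings with child (sum, count) aggregates."""
    s = float(sum(readings))
    c = len(readings)
    for child_sum, child_count in children_aggs:
        s += child_sum
        c += child_count
    return (s, c)
```

At the sink, the mean over the whole network is simply sum/count, with communication proportional to the tree's size rather than the sample count.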
ISBN: (Print) 9781509036011
The next generation of distributed systems is expected to enable new, powerful applications based on system collaboration for the dynamic integration of functionalities. This requires a certain level of autonomy for self-managing systems to change their effective and deterministic behavior during operation. In many application domains, however, collaboration processes for new higher-level functionalities are safety-critical, and an appropriate safety assurance approach is still missing. To ensure that the current operational situation based on an adapted system behavior is safe, we propose a safety evaluation with dynamic safety contracts between the involved parties. The approach is based on continuous monitoring, sharing, and calculation of safety-related quality characteristics of systems at runtime. We demonstrate the feasibility of our approach with a use case from the automotive domain.
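The contract-evaluation step can be sketched as a runtime check: a collaboration is permitted only while every monitored quality characteristic satisfies its contracted bound. The characteristic names and integrity levels below are hypothetical placeholders, not the paper's actual contract format.

```python
# Illustrative dynamic-safety-contract check (format hypothetical).
# A contract maps each safety-related characteristic to the minimum
# level the collaboration partner must currently guarantee.
def contract_satisfied(runtime_values, contract):
    """True iff every contracted characteristic meets its minimum;
    a characteristic missing from the runtime report fails the check."""
    return all(runtime_values.get(name, 0) >= minimum
               for name, minimum in contract.items())
```

Because the runtime values are refreshed continuously, the same check re-evaluated on each report yields the dynamic (rather than design-time) safety assurance the abstract describes.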
The increasing complexity of high-performance computing environments and programming methodologies presents challenges for empirical performance evaluation. Evolving parallel and distributed systems require performance technology that can be flexibly configured to observe different events and associated performance data of interest. It must also be possible to integrate performance evaluation techniques with the programming paradigms and software engineering methods. This is particularly important for tracking performance on parallel software projects involving many code teams over many stages of development. This paper describes the integration of the TAU and XPARE tools in the Uintah Computational Framework (UCF). Discussed is the use of performance mapping techniques to associate low-level performance data with higher levels of abstraction in UCF, and the use of performance regression testing to provide a historical portfolio of the evolution of application performance. A scalability study shows the benefits of integrating performance technology in building large-scale parallel applications.
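The performance regression testing mentioned above boils down to comparing each new measurement against a historical baseline and flagging slowdowns beyond a tolerance. A minimal sketch of that idea (the function and its tolerance parameter are illustrative assumptions, not XPARE's actual interface):

```python
# Illustrative performance-regression check (not XPARE's interface):
# flag a run as a regression if it is more than `tolerance` slower
# than the historical baseline for the same test configuration.
def is_regression(baseline_secs, current_secs, tolerance=0.10):
    """True if the current run exceeds baseline by > tolerance."""
    return current_secs > baseline_secs * (1.0 + tolerance)
```

Kept across builds, such comparisons form exactly the "historical portfolio" of application performance the abstract describes.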
ISBN: (Print) 0769523129
As computational devices continue to advance, there are reasons to examine their foundations a little more deeply, and to ask whether there may not be something more to be found. The fundamental manner in which hardware and software interact is poorly understood, and yet there is little indication in the literature that this is being discussed or explored. In spite of our technological achievements, we are at a loss to precisely define the boundaries between hardware and software, and to describe the nature of their interface. This paper aims to raise some of the major issues and questions, to propose a hardware-information duality, and to suggest directions in which further research might be pursued.
One of the most promising approaches in developing component-based (possibly distributed) systems is that of coordination models and languages. Coordination programming enjoys a number of advantages such as the ability to express different software architectures and abstract interaction protocols, support for multi-linguality, reusability and programming-in-the-large, etc. Configuration programming is another promising approach in developing large scale, component-based systems, with the increasing need for supporting the dynamic evolution of components. In this paper we explore and exploit the relationship between the notions of coordination and (dynamic) configuration and we illustrate the potential of control- or event-driven coordination languages to be used as languages for expressing dynamically reconfigurable software architectures. We argue that control-driven coordination has similar goals and aims with the notion of dynamic configuration and we illustrate how the former can achieve the functionality required by the latter. (C) 2001 Elsevier Science B.V. All rights reserved.
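The claim that control-driven (event-driven) coordination can express dynamic reconfiguration can be illustrated with a tiny sketch: a coordinator holds the bindings between component ports and rewires them in response to events, without the components themselves changing. The event format and class are hypothetical, not any particular coordination language's syntax.

```python
# Illustrative control-driven coordination sketch: the coordinator
# owns the inter-component bindings and reconfigures them at runtime
# in reaction to events (here, replacing a failed component's port).
class Coordinator:
    def __init__(self):
        self.bindings = {}   # output port -> input port

    def bind(self, src, dst):
        self.bindings[src] = dst

    def on_event(self, event):
        kind = event[0]
        if kind == 'failed':           # ('failed', old_port, new_port)
            _, old, new = event
            for src, dst in self.bindings.items():
                if dst == old:
                    self.bindings[src] = new
```

The components never see the change: the architecture evolves purely through the coordinator's event handlers, which is the configuration-programming functionality the paper argues control-driven coordination can provide.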
ISBN: (Print) 9780769550602
Multi-domain unified modeling is an important development direction in the study of complex systems. Modelica is a popular multi-domain modeling language: it describes complex systems by mathematical equations and solves the high-index differential algebraic equations (DAEs) generated by modeling. In this process, however, index reduction based on the structural index, a key step in solving high-index DAEs, fails with small probability. Based on combinatorial optimization theory, this paper analyzes the incorrect results produced by the structural index reduction algorithm when solving DAEs, and gives an algorithm for detecting and correcting failures of structural index reduction for matrix pencils. We implement the detection and correction algorithm and apply it to solving first-order linear time-invariant DAE systems. The experimental results show that, for first-order linear time-invariant DAEs, the failures of structural index reduction can be resolved by combinatorial optimization theory.
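The failure mode underlying this work can be shown with a tiny numeric example: the structural rank of a matrix (computed from its nonzero pattern only, as structural index reduction does) can exceed its numerical rank when entries happen to cancel. The 2x2 helpers below are an illustrative sketch of that discrepancy, not the paper's detection algorithm.

```python
# Structural vs numerical rank of a 2x2 matrix (illustrative sketch).
# Structural index reduction looks only at the zero/nonzero pattern;
# accidental linear dependence between rows is invisible to it.
def structural_rank_2x2(A):
    """Size of a maximum transversal over the nonzero pattern."""
    if (A[0][0] != 0 and A[1][1] != 0) or (A[0][1] != 0 and A[1][0] != 0):
        return 2
    if any(x != 0 for row in A for x in row):
        return 1
    return 0

def numerical_rank_2x2(A):
    """Actual rank via the determinant."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if det != 0:
        return 2
    return 1 if any(x != 0 for row in A for x in row) else 0

# Every entry is nonzero (structural rank 2), yet the rows are
# linearly dependent (numerical rank 1) -- the "small probability"
# case where pattern-based index reduction goes wrong.
A = [[1.0, 1.0], [2.0, 2.0]]
```

Whenever the two ranks disagree on the relevant pencil, a purely structural index reduction has been misled, which is exactly the situation the paper's detection-and-correction algorithm targets.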
ISBN: (Print) 9781509036820
The domains of parallel and distributed computing have been converging continuously, up to the degree that state-of-the-art server computer systems incorporate characteristics from both domains: they comprise a hierarchy of enclosures, where each enclosure houses multiple processor sockets and each socket again contains multiple memory controllers. A global address space and cache coherency are facilitated by multiple layers of fast interconnection technologies, even across enclosures. The growing popularity of such systems creates an urgent need for efficient mappings of cardinal algorithms onto such hierarchical architectures. However, the growing complexity of such systems and the inconsistencies between implementation strategies of different hardware vendors make it increasingly hard to find mapping strategies that are universally valid. In this paper, we present scalable optimization and mapping strategies in a case study of the popular Scale-Invariant Feature Transform (SIFT) computer vision algorithm. Our approaches are evaluated using a state-of-the-art hierarchical Non-Uniform Memory Access (NUMA) system with 240 physical cores and 12 terabytes of memory, apportioned across 16 NUMA nodes (sockets). SIFT is particularly interesting since the algorithm utilizes a variety of common data access patterns, thus allowing us to discuss the scaling properties of optimization strategies from the distributed and parallel computing domains and their applicability to emerging server systems.
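A basic building block of NUMA-aware mapping is partitioning a data structure into contiguous blocks, one per NUMA node, so that each node's threads touch memory placed locally (e.g. via first-touch allocation). The helper below is a hypothetical sketch of such a partition for an image's rows, not the paper's SIFT implementation.

```python
# Illustrative NUMA-aware work partition (hypothetical helper): split
# the rows of an image into one contiguous block per NUMA node, so a
# node's threads can first-touch and then process local memory only.
def partition_rows(num_rows, num_nodes):
    """Return [(start, end), ...] half-open row ranges per node,
    with sizes differing by at most one row."""
    base, extra = divmod(num_rows, num_nodes)
    ranges, start = [], 0
    for node in range(num_nodes):
        end = start + base + (1 if node < extra else 0)
        ranges.append((start, end))
        start = end
    return ranges
```

In practice each range would be pinned to its node's cores (e.g. with an affinity API), so the cross-enclosure interconnect carries only the algorithm's unavoidable boundary traffic.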