ISBN: (Print) 9781450332057
Large-scale graph-structured computation usually exhibits an iterative, convergence-oriented computing nature, where input data is processed iteratively until a convergence condition is reached. Such features have led to the development of two different computation modes for graph-structured programs, namely synchronous (Sync) and asynchronous (Async) modes. Unfortunately, there is currently no in-depth study of their execution properties, and thus programmers have to choose a mode manually, either requiring a deep understanding of underlying graph engines or suffering from suboptimal performance. This paper makes the first comprehensive characterization of the performance of the two modes on a set of typical graph-parallel applications. Our study shows that the performance of the two modes varies significantly with different graph algorithms, partitioning methods, execution stages, input graphs and cluster scales, and no single mode consistently outperforms the other. To this end, this paper proposes Hsync, a hybrid graph computation mode that adaptively switches a graph-parallel program between the two modes for optimal performance. Hsync constantly collects execution statistics on-the-fly and leverages a set of heuristics to predict future performance and determine when a mode switch could be profitable. We have built online sampling and offline profiling approaches, combined with a set of heuristics, to accurately predict future performance in the two modes. A prototype called PowerSwitch has been built based on PowerGraph, a state-of-the-art distributed graph-parallel system, to support adaptive execution of graph algorithms. On a 48-node EC2-like cluster, PowerSwitch consistently outperforms the best of both modes, with a speedup ranging from 9% to 73% thanks to timely switches between the two modes. Copyright 2015 ACM.
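The switching decision described above can be sketched as a simple cost model: switch modes only when the predicted saving over the remaining iterations outweighs the one-time cost of switching. This is a hypothetical illustration, not Hsync's actual heuristics; the function and parameter names (`choose_mode`, `predicted_time`, `switch_cost`, `horizon`) are assumptions, and the paper's real sampling and prediction machinery is far more involved.

```python
def choose_mode(current_mode, predicted_time, switch_cost, horizon):
    """Decide whether to switch between Sync and Async execution.

    predicted_time: dict mapping mode name -> predicted per-iteration time
    switch_cost:    estimated one-time cost of a mode switch
    horizon:        expected number of remaining iterations
    """
    other = "async" if current_mode == "sync" else "sync"
    # Predicted total saving from running the remaining iterations
    # in the other mode instead of the current one.
    saving = (predicted_time[current_mode] - predicted_time[other]) * horizon
    # Switch only if the saving amortizes the switch overhead.
    return other if saving > switch_cost else current_mode
```

For example, if Async is predicted to save 0.5s per iteration over 10 remaining iterations and switching costs 3s, the 5s saving justifies the switch.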
This paper aims at developing a novel high-precision compression method. Based on the previously developed Fuzzy Transform Compression (FTC), we design and implement a modified version of the algorithm, referred to as Fuzzy Compression Adaptive Transform (FuzzyCAT). The underlying idea of FuzzyCAT is to adapt the transform parameters to the signal's curvature, inferred from its time derivatives. FuzzyCAT outperforms the original FTC while preserving its favorable qualities, such as periodicity and resilience to lost packets. It also shows a competitive edge over the Lightweight Temporal Compression (LTC) method. A series of experiments with a network of TelosB sensor nodes revealed that the transmission cost of the FuzzyCAT algorithm is much lower than that of LTC, at the expense of a slight increase in processing power. This makes it an outstanding candidate for data compression in wireless sensor networks.
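The core idea, adapting transform parameters to curvature inferred from time derivatives, can be illustrated with a minimal sketch: estimate local curvature via discrete second differences and shrink the transform window where the signal bends sharply. This is purely an assumed toy model; the names `adaptive_window`, `base_width` and `k` are hypothetical, and FuzzyCAT's actual parameter adaptation is not specified in the abstract.

```python
def adaptive_window(samples, base_width, k):
    """Return per-sample window widths adapted to local curvature.

    samples:    uniformly sampled signal values
    base_width: window width used where the signal is flat
    k:          sensitivity of shrinking to curvature
    """
    widths = []
    for i in range(1, len(samples) - 1):
        # Discrete second difference approximates the second derivative,
        # i.e. the local curvature of the sampled signal.
        curvature = abs(samples[i - 1] - 2 * samples[i] + samples[i + 1])
        # Flat regions keep the base width; curved regions shrink it.
        widths.append(max(1, int(base_width / (1 + k * curvature))))
    return widths
```

A flat signal keeps the full base width everywhere, while a sharp bend collapses the window toward a single sample, trading compression ratio for precision exactly where the signal changes fastest.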
Data aggregation is a key element in many applications that draw insights from data analytics, such as medical research, smart metering, recommendation systems and real-time marketing. In general, data is gathered fro...
High quality video streaming for mobile users is difficult to achieve in some areas of the world due to poor broadband capacity and sparse network coverage. We propose a bandwidth-sharing scheme to allow users with li...
The well-orchestrated use of distilled experience, domain-specific knowledge, and well-informed trade-off decisions is imperative if we are to design effective architectures for complex software-intensive systems. In particular, designing modern self-adaptive systems requires intricate decision-making over a remarkably complex problem space and a vast array of solution mechanisms. Nowadays, a large number of approaches tackle the issue of endowing software systems with self-adaptive behavior from different perspectives and under diverse assumptions, making it harder for architects to make judicious decisions about design alternatives and quality-attribute trade-offs. It has recently been claimed that search-based software design approaches may improve the quality of resulting artifacts and the productivity of design processes, as a consequence of promoting a more comprehensive and systematic representation of design knowledge and preventing design bias and false intuition. To the best of our knowledge, no empirical studies have been performed to provide sound evidence of such claims in the self-adaptive systems domain. This paper reports the results of a quasi-experiment performed with 24 students of a graduate program in Distributed and Ubiquitous Computing. The experiment evaluated the design of self-adaptive systems using a search-based approach proposed by us, in contrast to the use of a non-automated approach based on architectural styles catalogs. The goal was to investigate to what extent the adoption of search-based design approaches impacts the effectiveness and complexity of resulting architectures. In addition, we also analyzed the approach's potential for leveraging the acquisition of distilled design knowledge. Our findings show that search-based approaches can improve the effectiveness of resulting self-adaptive systems architectures and reduce their design complexity. We found no evidence regarding the approach's potential for leveraging the acquisition of distilled design knowledge.
ISBN: (Print) 9781467392907
Due to voltage and structure shrinking, the influence of radiation on a circuit's operation increases, resulting in future hardware designs exhibiting much higher rates of soft errors. Software developers have to cope with these effects to ensure functional safety. However, software-based hardware fault tolerance is a holistic property that is tricky to achieve in practice, potentially impaired by every single design decision. We present FAIL*, an open and versatile architecture-level fault-injection (FI) framework for the continuous assessment and quantification of fault tolerance in an iterative software development process. FAIL* supplies the developer with reusable and composable FI campaigns, advanced pre- and post-processing analyses to easily identify sensitive spots in the software, well-abstracted back-end implementations for several hardware and simulator platforms, and scalability of FI campaigns through massive parallelization. We describe FAIL*, its application to the development process of safety-critical software, and the lessons learned from a real-world example.
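The essence of an FI campaign as described above is to run the workload once per fault, inject a single disturbance, and classify each outcome against a fault-free "golden run". The sketch below is a deliberately toy model of that loop; it is not FAIL*'s API, and the names `campaign`, `classify` and the single-bit-flip-on-input fault model are assumptions for illustration only.

```python
def classify(golden, faulty):
    """Classify one injection outcome against the golden result."""
    if faulty is None:
        return "crash"          # workload raised an exception
    return "benign" if faulty == golden else "sdc"  # silent data corruption

def campaign(compute, fault_space):
    """Run `compute` once per fault, flipping one input bit each time,
    and tally outcome classes across the whole fault space."""
    golden = compute(0)          # fault-free reference run
    tally = {"benign": 0, "sdc": 0, "crash": 0}
    for bit in fault_space:
        try:
            faulty = compute(1 << bit)   # inject a single bit flip
        except Exception:
            faulty = None
        tally[classify(golden, faulty)] += 1
    return tally
```

Real frameworks additionally prune equivalent faults before injection and parallelize the trials, which is where the scalability mentioned in the abstract comes from.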
The ASCENS project works with systems of self-aware, self-adaptive and self-expressive ensembles. Performance awareness represents a concern that cuts across multiple aspects of such systems, from the techniques to acq...
This paper proposes an efficient non-blocking coordinated checkpointing algorithm for distributed message passing system which uses transitive dependency information. The processes synchronize their checkpointing acti...
This paper presents a novel approach intended to detect sinkholes in MANETs running AODV. The study focuses on the detection of the well-known sinkhole attack, devoted to attracting most of the surrounding network traffic ...