ISBN (print): 3540340793
High performance and reliability are the main goals of parallel and distributed computing systems. To increase the performance and reliability of such systems, various checkpoint schemes have been proposed in the literature for decades. However, the lack of general analytical models has been an obstacle to comparing the performance of systems employing different checkpoint schemes. This paper develops an analytical model to evaluate the relative response time of systems employing checkpoint schemes. The model has been applied to evaluate the relative response time of systems employing the RFC (Roll-Forward Checkpoint), DMR-F (Double Modular Redundancy for Forward recovery), and DST (Duplex with Self-Test) schemes. The results show the feasibility of the model developed in the paper.
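To make the quantities in this abstract concrete, the sketch below implements the classic first-order checkpoint/restart cost model (expected runtime under Poisson failures with rollback to the last checkpoint) and derives a relative response time from it. This is a standard textbook model, not the model developed in the paper, the interpretation of "relative response time" as a ratio to the failure-free runtime is an assumption, and all parameter values are illustrative.

```python
import math

def expected_runtime(work, n_checkpoints, ckpt_cost, failure_rate):
    """Expected completion time of a task of length `work` split into
    n_checkpoints + 1 intervals, assuming Poisson failures at rate
    `failure_rate` and restart from the last checkpoint (classic
    first-order model; not the model derived in the paper)."""
    segment = work / (n_checkpoints + 1)
    # Expected time to finish one segment plus its checkpoint,
    # retrying from the last checkpoint after every failure.
    per_segment = (math.exp(failure_rate * (segment + ckpt_cost)) - 1) / failure_rate
    return (n_checkpoints + 1) * per_segment

def relative_response_time(work, n_checkpoints, ckpt_cost, failure_rate):
    """Ratio of expected runtime with checkpointing to the
    failure-free runtime (assumed reading of the abstract's metric)."""
    return expected_runtime(work, n_checkpoints, ckpt_cost, failure_rate) / work

# Example: 10-hour job, 9 checkpoints, 0.05 h per checkpoint,
# one failure per 100 hours on average.
print(relative_response_time(10.0, 9, 0.05, 0.01))
```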
Diversity and evolution in database applications often result in a multidatabase environment in which corporate data are stored in multiple, distributed data sources, each managed by an independent database management system. One of the essential functions of a multidatabase system is to provide inter-database access: the capability of evaluating global queries that require access to multiple data sources. This paper compares three common relational multidatabase approaches, the federated approach, the gateway approach, and the middleware approach, from the perspective of global query performance. In particular, we examine their architectural impact on the applicability of pipelined query processing techniques and load balancing. We present a performance comparison based on a detailed simulation. The study suggests that the middleware approach, which is the most cost-effective solution among the three, provides performance better than or comparable to the other two approaches.
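The pipelined query processing this comparison hinges on can be illustrated with a pull-based (iterator-style) plan, sketched below with Python generators; the operator names and toy data are invented for illustration, not taken from the paper.

```python
# Each operator pulls tuples from its child and yields results one at
# a time, so tuples stream through the plan without materializing
# intermediate results -- the essence of pipelined evaluation.

def scan(rows):
    for row in rows:          # leaf operator: stream rows from a source
        yield row

def select(pred, child):
    for row in child:         # filter rows as they flow upward
        if pred(row):
            yield row

def project(cols, child):
    for row in child:         # keep only the requested columns
        yield {c: row[c] for c in cols}

emp = [{"name": "a", "dept": 1, "salary": 50},
       {"name": "b", "dept": 2, "salary": 90}]

plan = project(["name"], select(lambda r: r["salary"] > 60, scan(emp)))
print(list(plan))             # tuples produced on demand, no temp table
```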
Deep learning developed in the last decade and has been established as a recent, modern, and very promising technique with large potential to be successfully applied in various domains. Although deep learning outperforms ...
ISBN (print): 9780769550947
Wireless Sensor Networks (WSNs) have become a hot research topic in recent years. They have many potential applications for both civil and military tasks. However, the unattended nature of WSNs and the limited computational and energy resources of their nodes make them susceptible to many types of attacks. Intrusion detection is one of the major and most efficient defence methods against attacks on a network infrastructure. Intrusion Detection Systems can be seen as the second line of defence, complementing the security primitives adopted to prevent attacks against the computer network being protected. The peculiar features of a wireless sensor network pose stringent requirements on the design of intrusion detection systems. In this paper, we propose a hybrid, lightweight, distributed Intrusion Detection System (IDS) for wireless sensor networks. This IDS uses both misuse-based and anomaly-based detection techniques. It is composed of a Central Agent, which performs highly accurate intrusion detection by using data mining techniques, and a number of Local Agents running lighter anomaly-based detection techniques on the motes. Decision trees have been adopted as the classification algorithm in the detection process of the Central Agent, and their behaviour has been analysed in selected attack scenarios. The accuracy of the proposed IDS has been measured and validated through an extensive experimental campaign. This paper presents the results of these experimental tests.
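As a rough illustration of the Central Agent's decision-tree detection, the sketch below trains a classifier on invented traffic features; the paper's actual feature set, training data, and attack scenarios are not given in the abstract, so everything here is an assumption.

```python
# Hypothetical sketch of a misuse detector: a decision tree trained
# on labelled traffic features (feature names and values invented).
from sklearn.tree import DecisionTreeClassifier

# Each row: [packets/s, avg packet size, fraction dropped, # senders]
X_train = [[ 20, 64, 0.01,  3],   # normal
           [500, 64, 0.40, 40],   # flooding attack
           [ 15, 60, 0.02,  2],   # normal
           [450, 70, 0.35, 35]]   # flooding attack
y_train = ["normal", "attack", "normal", "attack"]

clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(clf.predict([[480, 66, 0.38, 38]]))  # -> ['attack']
```

A tree this shallow compiles to a handful of threshold comparisons, which is why decision trees suit resource-constrained deployments of this kind.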
This paper summarizes our experiences and findings in teaching the concepts of parallel computing in two undergraduate programming courses and an undergraduate hardware design course. The first is a junior-senior leve...
ISBN (print): 9781538651568
In edge computing systems, computation is offloaded to nearby resources rather than to the cloud, for latency reasons. However, the performance demand at the edge grows steadily, which makes nearby resources insufficient for many applications. Additionally, the number of parallel tasks at the edge increases, driven by trends such as machine learning, the Internet of Things, and artificial intelligence. This introduces a trade-off between the performance of the cloud and the communication latency of the edge. However, many edge devices have powerful co-processors in the form of their graphics processing units (GPUs), which are mostly unused. These processing units have specialized parallel architectures, which differ from standard CPUs and are complex to use. In this paper, we present GPU-accelerated task execution for edge computing environments. The paper has four contributions. First, we design and implement a GPU system extension for our Tasklet system, a distributed computing system that supports edge- and cloud-based task offloading. Second, we introduce a computational abstraction for GPUs in the form of a virtual machine, which exploits parallelism while considering device heterogeneity and maintaining unobtrusiveness. Third, we offer an easy-to-use programming interface for the rather complex architecture of GPUs. Fourth, we evaluate our prototype in a real-world testbed and compare the GPU performance to standard edge resources.
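The dispatch idea behind such an abstraction can be sketched as follows; this is not the Tasklet system's API (which the abstract does not expose) but a minimal stand-in that runs a data-parallel task on the GPU when a device is available and falls back to the CPU otherwise.

```python
# Minimal sketch (assumed design, not the Tasklet API): one uniform
# call hides whether the task runs on GPU or CPU.
import numpy as np
try:
    import cupy as cp          # GPU backend, if present
    _GPU = True
except ImportError:
    _GPU = False

def offload(task, data):
    """Execute `task` on the GPU if possible, hiding device details
    from the caller -- the kind of unobtrusiveness the abstract
    describes (the mechanics here are assumptions)."""
    if _GPU:
        result = task(cp, cp.asarray(data))
        return cp.asnumpy(result)          # copy result back to host
    return task(np, np.asarray(data))      # CPU fallback

# The task is written once against an array module `xp`.
task = lambda xp, x: 2.0 * xp.sin(x) + 1.0
print(offload(task, [1.0, 2.0, 3.0]))
```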
ISBN (print): 9783319969831; 9783319969824
In recent years, heterogeneous hardware has become widespread in almost all supercomputer nodes, requiring a profound shift in the way numerical applications are implemented. This paper illustrates the design and implementation of a seismic wave propagation simulator based on the finite-difference numerical scheme and specifically tailored for such massively parallel hardware infrastructures. The application data-flow is built on top of PaRSEC, a generic task-based runtime system. The numerical kernels, designed to maximize data reuse, can efficiently leverage the large SIMD units available in modern CPU cores. A strong scalability study on a cluster of Intel KNL processors illustrates the application's performance.
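A drastically simplified illustration of the finite-difference update at the core of such a simulator is sketched below; the paper's kernels are three-dimensional, blocked, and scheduled as PaRSEC tasks, whereas this is a 1-D toy whose whole-array expressions merely stand in for the SIMD-friendly kernels the abstract mentions. All constants are illustrative.

```python
# Second-order leapfrog update for the 1-D wave equation.
import numpy as np

nx, dt, dx, c = 1000, 1e-3, 1.0, 300.0
u_prev = np.zeros(nx)
u_curr = np.zeros(nx)
u_curr[nx // 2] = 1.0                      # point source

r2 = (c * dt / dx) ** 2                    # CFL number squared (stable: 0.09)
for _ in range(500):
    # Vectorized stencil: the whole-array expression maps naturally
    # onto wide SIMD units.
    u_next = np.empty_like(u_curr)
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + r2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_next[0] = u_next[-1] = 0.0           # rigid boundaries
    u_prev, u_curr = u_curr, u_next
print(u_curr.max())
```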
ISBN (print): 9783319589435; 9783319589428
We investigate the effect of hard faults on a massively parallel implementation of the Sparse Grid Combination Technique (SGCT), an efficient numerical approach for the solution of high-dimensional time-dependent PDEs. The SGCT allows us to increase the spatial resolution of a solver to a level that is out of scope for classical discretization schemes due to the curse of dimensionality. We exploit the inherent data redundancy of this algorithm to obtain a scalable and fault-tolerant implementation without the need for checkpointing or process replication. It is a lossy approach that can guarantee convergence for a large number of faults and a wide range of applications. We present first results using our fault simulation framework, and the first convergence and scalability results with simulated faults and algorithm-based fault tolerance for PDEs in more than three dimensions.
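For readers unfamiliar with the SGCT, the toy 2-D sketch below shows the combination step: anisotropic component grids are sampled onto a common grid and summed with the classic +1/-1 coefficients. The stand-in "solver" and the nearest-neighbour sampling are assumptions; the paper's fault-tolerant variant, which recomputes coefficients over the surviving grids after a fault, is only noted in a comment.

```python
import numpy as np

def solve_component(lx, ly, n=65):
    """Stand-in 'PDE solve' on a (2^lx+1) x (2^ly+1) grid, sampled
    back onto a common n x n grid (a real solver goes here)."""
    xs = np.linspace(0, 1, 2**lx + 1)
    ys = np.linspace(0, 1, 2**ly + 1)
    f = np.sin(np.pi * xs)[:, None] * np.sin(np.pi * ys)[None, :]
    ix = np.linspace(0, 2**lx, n).round().astype(int)
    iy = np.linspace(0, 2**ly, n).round().astype(int)
    return f[np.ix_(ix, iy)]

# Classic 2-D combination: +1 on the finest diagonal, -1 one below.
# On a hard fault, a fault-tolerant SGCT would instead re-solve for
# new coefficients using only the surviving component grids.
level = 5
combined = sum(solve_component(l, level - l) for l in range(1, level)) \
         - sum(solve_component(l, level - 1 - l) for l in range(1, level - 1))
print(combined.shape)
```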
ISBN (print): 9780819481245
The rapid development of infrared detector arrays has created a need for a robust signal processing chain able to perform operations on infrared images in real time. Every infrared detector array suffers from so-called nonuniformity, which has to be digitally compensated by the internal circuits of the camera. The digital circuit also has to detect damaged detectors and replace their signals. Finally, the image has to be prepared for display on an external display unit. For the best viewing comfort, the delay between registering the infrared image and displaying it should be as short as possible, so the image processing has to be done with minimum latency. This demand enforces the use of special processing techniques such as pipelining and parallel processing. The designed infrared processing module is able to perform standard operations on an infrared image with very low latency. Additionally, the modular design and a defined data bus allow easy expansion of the signal processing chain. The presented image processing module was used in two camera designs, one based on an uncooled microbolometric detector array from ULIS and one on a cooled photon detector from Sofradir. The image processing module was implemented in an FPGA structure and worked with an external ARM processor for control and co-processing. The paper describes the design of the processing unit, the results of image processing, and module parameters such as power consumption and hardware utilization.
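Algorithmically, two of the core steps the module performs can be sketched as below: per-pixel gain/offset nonuniformity correction followed by dead-pixel replacement. The camera's fixed-point FPGA pipeline is of course different; this shows only the algorithmic idea, and all coefficients and the substitution policy are placeholders.

```python
import numpy as np

def nuc(raw, gain, offset):
    """Per-pixel gain/offset correction compensating detector
    nonuniformity (coefficients come from calibration)."""
    return gain * raw + offset

def replace_dead(img, dead_mask):
    """Replace flagged dead pixels with the mean of their horizontal
    neighbours (one simple substitution policy)."""
    out = img.copy()
    left = np.roll(img, 1, axis=1)
    right = np.roll(img, -1, axis=1)
    out[dead_mask] = 0.5 * (left + right)[dead_mask]
    return out

raw = np.random.randint(0, 4096, (240, 320)).astype(float)
gain = np.ones_like(raw)           # calibration data would go here
offset = np.zeros_like(raw)
dead = np.zeros(raw.shape, bool)
dead[100, 100] = True              # one flagged dead pixel

frame = replace_dead(nuc(raw, gain, offset), dead)
print(frame.shape)
```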
The verification of web services has become a challenge in software verification. This paper presents a framework for the verification of web service interfaces at various abstraction levels. Its foundation is the interface ...