Detailed information
ISBN (Digital): 9781728114422
ISBN (Print): 9781728114422
Due to frequently changing requirements, the internal structure of cloud services is highly dynamic. To ensure flexibility, adaptability, and maintainability for dynamically evolving services, modular software development has become the dominant paradigm. By following this approach, services can be rapidly constructed by composing existing, newly developed, and publicly available third-party modules. However, newly added modules might be unstable, resource-intensive, or untrustworthy. Thus, satisfying non-functional requirements such as reliability, efficiency, and security while ensuring rapid release cycles is a challenging task. In this paper, we discuss how to tackle these issues by employing container virtualization to isolate modules from each other according to a specification of isolation constraints. We satisfy non-functional requirements for cloud services by automatically transforming the comprised modules into a container-based system. To deal with the increased overhead caused by isolating modules from each other, we calculate the minimum set of containers required to satisfy the specified isolation constraints. Moreover, we present and report on a prototypical transformation pipeline that automatically transforms cloud services developed with the Java Platform Module System into container-based systems.
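Determining the fewest containers that still respect pairwise isolation constraints can be viewed as a graph-coloring problem on a conflict graph of modules. The Python sketch below illustrates that idea only under this assumption: it uses a greedy heuristic (an upper bound, not the guaranteed minimum the paper computes), and all module names and constraints are hypothetical.

# Hypothetical sketch: group modules into few containers so that any two
# modules with an isolation constraint land in different containers.
def assign_containers(modules, isolation_pairs):
    """Greedy graph coloring over the isolation-conflict graph."""
    conflicts = {m: set() for m in modules}
    for a, b in isolation_pairs:
        conflicts[a].add(b)
        conflicts[b].add(a)

    container_of = {}
    # Process highly constrained modules first (a common greedy ordering).
    for m in sorted(modules, key=lambda m: -len(conflicts[m])):
        used = {container_of[n] for n in conflicts[m] if n in container_of}
        c = 0
        while c in used:
            c += 1
        container_of[m] = c
    return container_of

modules = ["billing", "auth", "thirdparty-analytics", "frontend"]
isolation_pairs = [("thirdparty-analytics", "billing"),
                   ("thirdparty-analytics", "auth")]
print(assign_containers(modules, isolation_pairs))
# e.g. {'thirdparty-analytics': 0, 'billing': 1, 'auth': 1, 'frontend': 0}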
Detailed information
ISBN (Print): 9781450359580
The Internet of Things (IoT) provides numerous opportunities for the connected healthcare industry, especially in the distributed environment known as fog, which resides in a middle architectural layer adjacent to the network edge and user devices. This paper provides an overview of recent research and industry advances and challenges in fog technologies, focusing on connected healthcare applications. For IoT and fog, we present and review: concepts and architectures; development and lifecycle frameworks; security vulnerabilities, threats, and best practices; and connected medical device regulations.
Detailed information
ISBN (Digital): 9781665423243
Deep Reinforcement Learning (DRL) is substantially resource-consuming, and it requires large-scale distributed computing nodes to learn complicated tasks such as video games and the game of Go. This work attempts to down-scale a distributed DRL system onto a specialized many-core chip and achieve energy-efficient on-chip DRL. With a customized Network-on-Chip that handles the communication of on-chip data and control signals, we propose a Synchronous Asynchronous RL Architecture (SARLA) and a corresponding many-core chip that completely avoids the unnecessary data duplication and synchronization activities found in multi-node RL systems. In our evaluation, the SARLA system achieves a considerable energy-efficiency boost over GPU-based implementations for typical DRL workloads built with OpenAI Gym.
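The abstract does not detail SARLA's scheduling scheme, but the general pattern of mixing asynchronous experience collection with synchronous batched learning can be sketched in software. The Python sketch below is a hypothetical, simplified analogue (threads and a queue instead of a Network-on-Chip); the toy policy, reward, and update rule are all assumptions, not the paper's design.

import queue, random, threading

experience = queue.Queue(maxsize=1024)

def actor(steps=500):
    """Asynchronously generate toy transitions and push them to the shared queue."""
    state = 0.0
    for _ in range(steps):
        action = random.choice([-1.0, 1.0])       # stand-in policy
        reward = -abs(state + action)             # toy reward signal
        experience.put((state, action, reward))   # asynchronous hand-off
        state += action

def learner(batch_size=32, batches=50):
    """Consume fixed-size batches synchronously and apply a toy update."""
    value = 0.0
    for _ in range(batches):
        batch = [experience.get() for _ in range(batch_size)]
        value += 0.01 * sum(r for _, _, r in batch) / batch_size
    print("learned value estimate:", round(value, 3))

workers = [threading.Thread(target=actor) for _ in range(4)]
for w in workers:
    w.start()
learner()
for w in workers:
    w.join()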
Detailed information
ISBN (Print): 9783030336073; 9783030336066
Population-based metaheuristics such as Evolutionary Algorithms (EAs) can require massive computational power for solving complex and large-scale optimization problems. Hence, the parallel execution of EAs has attracted the attention of researchers as a feasible way to reduce computation time. Several distributed frameworks and approaches utilizing different hardware and software technologies have been introduced in the literature. Among them, the parallelization of EAs in cluster and cloud environments exploiting modern parallel computing techniques seems to be a promising approach. In the present paper, the parallel performance, in terms of speedup, of using microservices, container virtualization, and the publish/subscribe messaging paradigm to parallelize EAs based on the coarse-grained model (the so-called island model) is introduced. Four different communication topologies with the number of islands ranging between 1 and 120 are analyzed in order to show that a partial linear/superlinear speedup is achievable with the proposed approach.
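As an illustration of the coarse-grained (island) model mentioned above, the following Python sketch evolves several subpopulations independently and periodically migrates the best individual along a ring topology. It is a sequential, in-process stand-in for the microservice- and publish/subscribe-based setup described in the paper; the objective function, operators, and parameters are illustrative assumptions.

import random

def fitness(x):                      # toy objective: maximize -(x - 3)^2
    return -(x - 3.0) ** 2

def evolve(pop, generations=20):
    """Truncation selection plus Gaussian mutation on one island."""
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
        pop = [p + random.gauss(0, 0.1) for p in parents for _ in (0, 1)]
    return pop

islands = [[random.uniform(-10, 10) for _ in range(20)] for _ in range(4)]
for epoch in range(5):
    islands = [evolve(pop) for pop in islands]            # independent evolution
    for i, pop in enumerate(islands):                     # ring-topology migration
        best = max(pop, key=fitness)
        neighbour = islands[(i + 1) % len(islands)]
        neighbour[neighbour.index(min(neighbour, key=fitness))] = best

print("best individual:",
      max((max(p, key=fitness) for p in islands), key=fitness))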
Detailed information
ISBN (Print): 9783030294007; 9783030293994
Sketches are probabilistic data structures that can provide approximate results within mathematically proven error bounds while using orders of magnitude less memory than traditional approaches. They are tailored for streaming data analysis, even on architectures with limited memory such as single-board computers, which are widely exploited for IoT and edge computing. Since these devices offer multiple cores, efficient parallel sketching schemes enable them to manage high volumes of data streams. However, since their caches are relatively small, careful parallelization is required. In this work, we focus on the frequency estimation problem and evaluate the performance of a high-end server, a 4-core Raspberry Pi, and an 8-core Odroid. As the sketch, we employed the widely used Count-Min Sketch. To hash the stream in parallel and in a cache-friendly way, we applied a novel tabulation approach and rearranged the auxiliary tables into a single one. To parallelize the process efficiently, we modified the workflow and applied a form of buffering between hash computations and sketch updates. Today, many single-board computers have heterogeneous processors in which slow and fast cores are equipped together. To utilize all these cores to their full potential, we proposed a dynamic load-balancing mechanism that significantly increased the performance of frequency estimation.
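For reference, a minimal Count-Min Sketch for frequency estimation looks as follows. This sketch uses Python's built-in hash with per-row seeds rather than the cache-friendly tabulation scheme and buffering described in the paper, so it only illustrates the data structure itself.

import math, random

class CountMinSketch:
    def __init__(self, epsilon=0.001, delta=0.01):
        self.width = math.ceil(math.e / epsilon)        # columns per row
        self.depth = math.ceil(math.log(1.0 / delta))   # number of rows
        self.seeds = [random.randrange(1 << 30) for _ in range(self.depth)]
        self.table = [[0] * self.width for _ in range(self.depth)]

    def _index(self, row, item):
        return hash((self.seeds[row], item)) % self.width

    def update(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._index(row, item)] += count

    def estimate(self, item):
        # The minimum over rows bounds the true count from above.
        return min(self.table[row][self._index(row, item)]
                   for row in range(self.depth))

cms = CountMinSketch()
for word in ["edge", "iot", "edge", "stream", "edge"]:
    cms.update(word)
print(cms.estimate("edge"))   # never below 3, typically exactly 3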
A major yet trivial problem in the banking industry right now is how tedious and costly the traditional Know-Your-Customer (KYC) process is. The process is also tiresome for customers as they need to undergo the same p...
Detailed information
Detailed information
ISBN (Print): 9783030105495; 9783030105488
The reconstruction of the haplotype pair for each chromosome is a hot topic in Bioinformatics and Genome Analysis. In Haplotype Assembly (HA), all heterozygous Single Nucleotide Polymorphisms (SNPs) have to be assigned to exactly one of the two chromosomes. In this work, we outline the state of the art of HA approaches and present an in-depth analysis of the computational performance of GenHap, a recent method based on Genetic Algorithms. GenHap was designed to tackle the computational complexity of the HA problem by means of a divide et impera strategy that effectively leverages multi-core architectures. In order to evaluate GenHap's performance, we generated different instances of synthetic (yet realistic) data exploiting empirical error models of four different sequencing platforms (namely, Illumina NovaSeq, Roche/454, PacBio RS II, and Oxford Nanopore Technologies MinION). Our results show that the processing time generally decreases as the read length increases, since longer reads involve a lower number of sub-problems to be distributed across multiple cores.
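The divide et impera idea can be illustrated, under assumptions, by cutting the SNP positions into column blocks that are solved on separate cores. The Python sketch below only shows the partitioning and the parallel dispatch; the per-block solver is a placeholder, not GenHap's Genetic Algorithm.

from multiprocessing import Pool

def split_positions(num_snps, block_size):
    """Return (start, end) column ranges covering all SNP positions."""
    return [(s, min(s + block_size, num_snps))
            for s in range(0, num_snps, block_size)]

def solve_block(block):
    start, end = block
    # Placeholder for the per-block haplotype assembly step.
    return f"haplotype segment for SNPs {start}..{end - 1}"

if __name__ == "__main__":
    blocks = split_positions(num_snps=1000, block_size=250)
    with Pool(processes=4) as pool:              # one block per core
        for result in pool.map(solve_block, blocks):
            print(result)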
Detailed information
ISBN (Print): 9789811319518; 9789811319501
Cloud computing allows users to store data in distributed cloud storage, so the data are stored in different geographical locations, which considerably increases their vulnerability. The natural way to safeguard the data from intruders is to encrypt them before storing them in the cloud. This paper proposes an Advanced Cramer-Shoup (ACS) Cryptosystem to resolve security threats in the cloud environment. The proposed encryption method withstands the adaptive chosen-ciphertext attack (CCA2). When high-velocity data are considered, the encryption method has to work as fast as possible to improve efficiency. The proposed ACS Cryptosystem supports batch encryption and decryption to increase efficiency and reduce the computational overhead. An experimental analysis of the proposed method was carried out, and it was observed that the proposed encryption technique improves the security of the data; the results are compared with existing security algorithms.
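For context, the structure of the original Cramer-Shoup scheme (two group elements u1 and u2, a masked message e, and a check value v that provides the CCA2 integrity test) can be sketched as follows. This toy Python example uses an insecurely small group purely for illustration and does not reproduce the batch-oriented ACS variant proposed in the paper.

import hashlib

p, q = 23, 11          # p = 2q + 1; messages live in the order-q subgroup
g1, g2 = 4, 9          # two generators of the quadratic-residue subgroup

# secret key
x1, x2, y1, y2, z = 3, 5, 7, 2, 6
# public key
c = (pow(g1, x1, p) * pow(g2, x2, p)) % p
d = (pow(g1, y1, p) * pow(g2, y2, p)) % p
h = pow(g1, z, p)

def H(u1, u2, e):
    """Hash the ciphertext components into an exponent mod q."""
    digest = hashlib.sha256(f"{u1},{u2},{e}".encode()).hexdigest()
    return int(digest, 16) % q

def encrypt(m, r):
    u1, u2 = pow(g1, r, p), pow(g2, r, p)
    e = (pow(h, r, p) * m) % p
    a = H(u1, u2, e)
    v = (pow(c, r, p) * pow(d, (r * a) % q, p)) % p
    return u1, u2, e, v

def decrypt(u1, u2, e, v):
    a = H(u1, u2, e)
    check = (pow(u1, (x1 + y1 * a) % q, p) * pow(u2, (x2 + y2 * a) % q, p)) % p
    if check != v:
        raise ValueError("ciphertext rejected (CCA2 check failed)")
    return (e * pow(pow(u1, z, p), -1, p)) % p

m = 13                              # a quadratic residue mod 23
print(decrypt(*encrypt(m, r=4)))    # -> 13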
Detailed information
ISBN (Digital): 9781728169514
ISBN (Print): 9781728169521
This article discusses the issue of interacting with custom task workflows on clusters during their execution. Since tasks on multiprocessor systems can run for long periods of time, they should be quasi-interactive, allowing the user to occasionally input corrective instructions. In this article, a proxy service is proposed that extends the event-driven job management subsystem on the cluster. When a computational experiment with event control is launched, the event control subsystem agent executed on the computational nodes prepares the environment for tracking the events of the experiment by operating system means. After the computations are done, the agent clears up the operating system settings used for tracking events. The role of the proxy service is to interact with the event control subsystem, listen to events and output streams that come from the parallel program, keep track of changes in the parallel program's filesystem in its own database, and, when asked, return information on the parallel program's state and files to a user interface, or feed input data back to the parallel program. The article also discusses an I/O agent that is launched on each node of the computational cluster immediately before the computational process starts. The agent normally intercepts the stdin and stdout streams of the controlled process, but it can also interact with the process via sockets, increasing the ability of the computing process to interact with the user.
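The interception idea behind such an I/O agent can be sketched, under assumptions, with a wrapper that launches the controlled process, captures its stdout, and feeds a corrective instruction back through stdin. The child command below is illustrative and unrelated to the article's cluster subsystem.

import subprocess, sys

# Launch a stand-in "computational process" whose stdin/stdout the agent controls.
child = subprocess.Popen(
    [sys.executable, "-u", "-c",
     "import sys; print('step 1 done'); "
     "cmd = sys.stdin.readline().strip(); print('applied:', cmd)"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

# Feed a corrective instruction and intercept everything the process printed.
out, _ = child.communicate("reduce-timestep\n")
for line in out.splitlines():
    print("[agent saw]", line)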
Detailed information
ISBN (Digital): 9781665403986
ISBN (Print): 9781665403993
Aiming at the high complexity of parameter optimization for portfolio models, this paper designs a distributed high-performance portfolio optimization platform (HPPO) based on a parallel computing framework and an event-driven architecture. The platform consists of the data layer, the model layer, and the excursion layer, and is built in a componentized, pluggable, and loosely coupled way. The platform adopts parallel acceleration for backtesting and optimizing the parameters of portfolio models over a given historical interval. The platform is also able to connect portfolio models to the real-time market. Based on the HPPO platform, a parallel program is designed to optimize the parameters of the value-at-risk (VaR) model. The performance of the platform is summarized by analyzing the experimental results and comparing them with the open-source frameworks Zipline and Rqalpha.
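Since backtests for different parameter settings are independent, parameter optimization of a VaR model parallelizes naturally across processes. The Python sketch below illustrates this with a synthetic returns series and a simple historical-simulation VaR; the data, scoring rule, and parameter grid are assumptions, not part of the HPPO platform.

import random
from multiprocessing import Pool

random.seed(7)
returns = [random.gauss(0.0005, 0.02) for _ in range(2000)]   # synthetic daily returns

def historical_var_breach_rate(window, confidence):
    """Backtest one (window, confidence) setting: fraction of days whose loss exceeds VaR."""
    breaches = 0
    for t in range(window, len(returns)):
        history = sorted(returns[t - window:t])
        var = -history[int((1 - confidence) * window)]         # loss threshold
        if -returns[t] > var:
            breaches += 1
    return breaches / (len(returns) - window)

def evaluate(params):
    window, confidence = params
    return params, historical_var_breach_rate(window, confidence)

if __name__ == "__main__":
    grid = [(w, c) for w in (125, 250, 500) for c in (0.95, 0.99)]
    with Pool(processes=4) as pool:                            # independent backtests in parallel
        for (w, c), rate in pool.map(evaluate, grid):
            print(f"window={w:>3} confidence={c:.2f} breach rate={rate:.3f}")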