The problem of constructing an optimal logical structure for a distributed database (DDB) is considered. Solving it makes it possible to increase the speed of request processing in a DDB in comparison with...
ISBN:
(print) 9781728190747
Machine learning applications are becoming prevalent on various computing platforms, including cloud servers, smartphones, and IoT devices. For these applications, security is one of the most pressing requirements. While trusted execution environments (TEEs) like ARM TrustZone have been widely used to protect critical procedures, including fingerprint authentication and mobile payment, state-of-the-art TEE OS implementations lack support for multi-threading and are not suitable for computing-intensive workloads. This is because current TEE OSes are usually designed to host security-critical tasks, which are typically small and not computing-intensive; thus, most TEE OSes forgo multi-threading to minimize the size of the trusted computing base (TCB). In this paper, we propose TEEp, a system that enables multi-threading in a TEE without weakening security and allows existing multi-threaded applications to run directly in the TEE. Our design includes a novel multi-threading mechanism based on cooperation between the TEE OS and the host OS, without trusting the host OS. We implement our system based on OP-TEE and port it to two platforms: a HiKey 970 development board as the mobile platform, and a Huawei Hi1610 ARM server as the server platform. We run TensorFlow Lite on the development board and TensorFlow on the server to evaluate performance in the TEE. The results show that our system improves the throughput of TensorFlow Lite on 5 models by up to 3.2x when 4 cores are available, with 13.5% overhead on average compared with Linux.
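The division of labor described above, where the untrusted host OS supplies and schedules the threads while the workload itself executes only inside the protected world, can be caricatured with ordinary Python threads. This is a toy simulation under invented names (SecureWorld, secure_inference), not TEEp's implementation:

```python
import threading

class SecureWorld:
    """Toy stand-in for the TEE: runs submitted closures behind a serialized entry gate."""
    def __init__(self):
        self._entry = threading.Lock()   # models the serialized world switch into the TEE
        self.results = {}

    def enter_and_run(self, tid, fn, arg):
        with self._entry:
            pass                          # entering the secure world is serialized...
        self.results[tid] = fn(arg)       # ...but the secure payloads run in parallel

def secure_inference(n):
    # Stand-in for a computing-intensive secure task such as model inference.
    return sum(i * i for i in range(n))

tee = SecureWorld()
# The host OS contributes 4 schedulable threads; each "enters" the TEE to compute.
threads = [threading.Thread(target=tee.enter_and_run, args=(i, secure_inference, 1000))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The point of the sketch is that the trusted side never needs its own scheduler: thread creation and preemption remain the host's job, and only the payload crosses into the protected world.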
Distributed Denial of Service (DDoS) attacks pose a considerable threat to cloud computing, the Internet of Things (IoT), and other services offered on the Internet. The victim server receives terabytes of data per second ...
ISBN:
(print) 9781665487351
Virtualization techniques are gaining popularity, and cloud storage, as part of larger complexes, is affected as well. This study concerns web-grid-connected capabilities and virtualized compute services in a real environment, with emphasis on the congestion control mechanism. The suggested approach uses system dynamics to automatically migrate computing infrastructure nodes and enhance their speed.
Big Data progressive sampling requires initial and final data bounding values to generate an optimal number of samples in order to train any learning algorithm. Any learning algorithm can be trained for minimal hypothesi...
ISBN:
(print) 9781728190747
To tackle the scarcity of computation resources on end devices, task offloading has been developed to reduce task completion time and improve Quality-of-Service (QoS). Edge computing facilitates such offloading by provisioning resources in the proximity of the end devices. Nowadays, many tasks on end devices have an urgent demand for a secure execution environment. To address this problem, we introduce the trusted execution environment (TEE) to empower edge computing with secure task offloading. To exploit the TEE, the offloading process must be redesigned to introduce data encryption and decryption, which makes traditional offloading optimization policies inapplicable. We are therefore motivated to incorporate data encryption and decryption into the offloading scheduling algorithm. In particular, we propose a Customized List Scheduling based Offloading (CLSO) algorithm that aims to minimize the total completion time subject to the energy budget limitations of the end devices. The experiment results show that our approximation algorithm effectively reduces the total completion time and significantly outperforms existing state-of-the-art offloading strategies.
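The core idea, a list scheduler in which offloading pays an extra encryption and decryption cost while local execution draws from a limited energy budget, can be sketched as follows. This is an illustrative greedy scheduler only, not the paper's CLSO algorithm; the cost model (fixed enc/dec overhead, a single edge server) and all task numbers are invented for the example.

```python
def schedule(tasks, energy_budget, enc_cost=1.0, dec_cost=1.0):
    """tasks: list of (local_time, local_energy, edge_time) tuples."""
    local_free = 0.0   # next free instant on the end device
    edge_free = 0.0    # next free instant on the edge server
    plan, makespan = [], 0.0
    # Prioritize tasks with the longest edge time, a common list-scheduling heuristic.
    for lt, le, et in sorted(tasks, key=lambda t: -t[2]):
        local_finish = local_free + lt
        edge_finish = edge_free + enc_cost + et + dec_cost  # offloading pays enc + dec
        if le <= energy_budget and local_finish <= edge_finish:
            energy_budget -= le            # local run consumes device energy
            local_free = local_finish
            plan.append(("local", local_finish))
        else:
            edge_free = edge_finish
            plan.append(("offload", edge_finish))
        makespan = max(makespan, plan[-1][1])
    return plan, makespan

plan, makespan = schedule([(4, 2, 1), (3, 1, 2), (5, 3, 1)], energy_budget=4)
```

Note how the encryption and decryption terms shift the local-vs-offload break-even point, which is precisely why a policy tuned without them misjudges short tasks.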
Distributed Denial of Service (DDoS) attacks in Named Data Networking (NDN), such as Cache Pollution Attacks (CPA) and Improved Collusive Interest Flooding Attacks (I-CIFA), can significantly threaten the NDN network. However, most previous research has focused on detecting a single attack using machine learning algorithms or threshold-based methods, so these methods are not efficient enough for detecting dual attacks. This paper first proposes a dual attack combining I-CIFA and CPA, and extracts network traffic features, including the number of CacheMisses and the number of PIT entries. After analyzing the severe impact of the dual attack, a detection scheme, BO-CatBoost, is proposed based on Bayesian optimization and CatBoost. Finally, the experimental results show that the proposed detection scheme is robust in detecting the dual attack.
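The shape of such a feature-based detection pipeline can be sketched in miniature. This toy substitutes a one-feature decision stump for CatBoost and an exhaustive threshold search for Bayesian optimization; only the two features mirror the abstract (CacheMiss count and PIT-entry count per time window), and all numbers are fabricated for illustration.

```python
def train_stump(samples):
    """samples: list of ((cache_misses, pit_entries), label), label 1 = dual attack."""
    best = (None, None, -1.0)            # (feature index, threshold, accuracy)
    for f in (0, 1):                     # stand-in for a hyperparameter search loop
        for (x, _label) in samples:
            thr = x[f]                   # candidate decision threshold
            acc = sum((s[0][f] >= thr) == bool(s[1]) for s in samples) / len(samples)
            if acc > best[2]:
                best = (f, thr, acc)
    return best

benign = [((100, 50), 0), ((120, 60), 0)]
attack = [((900, 400), 1), ((800, 350), 1)]  # inflated misses/PIT under I-CIFA + CPA
f, thr, acc = train_stump(benign + attack)
```

The dual attack inflates both features simultaneously, which is why a detector trained on their joint behavior separates the classes more reliably than a single-attack threshold.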
ISBN:
(print) 9781665422925
Python is emerging as a popular language in the data science community due to its ease of use, vibrant community, and rich set of libraries. Dask is a popular Python-based distributed computing framework that allows users to process large amounts of data on parallel hardware. The Dask distributed package is a non-blocking, asynchronous, and concurrent library that supports distributed execution of tasks in datacenter and HPC environments. A key requirement in designing high-performance communication backends for Dask distributed is scalable support for coroutines, which, unlike regular Python functions, can only be invoked from asynchronous applications. In this paper, we present Blink, a high-performance communication library for Dask on high-performance RDMA networks like InfiniBand. Blink offers a multi-layered architecture that matches the communication requirements of Dask and exploits high-performance interconnects through a Cython wrapper layer over its C backend. We evaluate the performance of Blink against other counterparts using various micro-benchmarks and application kernels on three cluster testbeds with varying interconnect speeds. Our micro-benchmark evaluation reveals that Blink outperforms other communication backends by more than 3x for message sizes ranging from 1 Byte to 64 KByte, and by a factor of 2x for message sizes ranging from 128 KByte to 8 MByte. Using various application-level evaluations, we demonstrate that Dask achieves up to 7% improvement in application throughput (e.g., total worker throughput).
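The coroutine requirement mentioned above can be made concrete with a minimal sketch (not Blink's API): every transfer in a Dask-distributed backend is awaitable, so the event loop is never blocked while bytes are in flight. Plain asyncio TCP streams stand in here for the RDMA transport.

```python
import asyncio

async def handle(reader, writer):
    data = await reader.read(1024)       # non-blocking receive on the server side
    writer.write(data.upper())           # echo a transformed payload
    await writer.drain()
    writer.close()

async def roundtrip():
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"dask")                # coroutine-friendly send
    await writer.drain()
    reply = await reader.read(1024)      # await the reply without blocking the loop
    writer.close()
    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.run(roundtrip())
```

A native backend has to preserve exactly this awaitable interface while moving the actual data path off the Python event loop, which is what the Cython-to-C layering described in the abstract is for.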
ISBN:
(print) 9781450377515
Over the last few years, interest in large-scale network processing has gained momentum due to the increased importance of efficiently processing such networks for analysis and problem solving in social networks, the Internet of Things (IoT), data mining, biological networks, etc. The steep increase in the volume of data being produced necessitates the development of parallel and distributed graph processing algorithms that demand extensive computation. Nowadays, because of the high computing power and cost-effectiveness of GP-GPUs (General-Purpose Graphics Processing Units), GP-GPUs are widely used in almost all areas of computing. In this paper, we address the problem of designing efficient parallel graph processing algorithms for the GP-GPU platform, the upcoming trend in high-end machines. Presently, the problem of community detection and its influence on misinformation containment in online social networks has been investigated thoroughly, and parallel CUDA algorithms have been proposed. Simulation studies show a significant speed-up compared to existing sequential/parallel techniques.
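To see why community detection maps well onto GPUs, consider label propagation, a generic community-detection baseline (not necessarily the paper's algorithm). Its per-vertex update depends only on neighbor labels, which is exactly what makes it a natural one-thread-per-vertex CUDA kernel; the sketch below is the sequential counterpart.

```python
from collections import Counter

def label_propagation(adj, rounds=10):
    labels = {v: v for v in adj}          # every vertex starts in its own community
    for _ in range(rounds):
        changed = False
        for v, nbrs in adj.items():       # on a GPU, each v would be its own thread
            if not nbrs:
                continue
            top = Counter(labels[u] for u in nbrs).most_common(1)[0][0]
            if top != labels[v]:
                labels[v] = top           # adopt the majority label of the neighborhood
                changed = True
        if not changed:                   # converged
            break
    return labels

# Two disjoint triangles collapse into two communities.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
communities = label_propagation(adj)
```

The GPU version replaces the inner loop with a kernel launch over all vertices per round, trading the sequential in-place updates for synchronous rounds separated by device-wide barriers.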