ISBN: (print) 9781728197227
Deploying V2X services has become a challenging task, mainly because such services have strict latency requirements. One potential solution for meeting these requirements is adopting mobile edge computing (MEC). However, this presents new challenges, including how to find a cost-efficient placement that also meets requirements such as latency. In this work, the problem of cost-optimal V2X service placement (CO-VSP) in a distributed cloud/edge environment is formulated. Additionally, a cost-focused, delay-aware V2X service placement (DA-VSP) heuristic algorithm is proposed. Simulation results show that both the CO-VSP model and the DA-VSP algorithm guarantee the QoS requirements of all such services, and illustrate the tradeoff between latency and deployment cost.
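The abstract does not detail the heuristic itself; as a hypothetical illustration of what a cost-focused, delay-aware placement step of this kind might look like, the following minimal sketch greedily assigns each service to the cheapest host that satisfies its latency bound. All service names, delays, and costs here are invented for the example.

```python
# Hypothetical sketch of a delay-aware, cost-focused placement heuristic.
# Not the paper's DA-VSP algorithm; all inputs are invented.

def place_services(services, hosts):
    """Greedily assign each service to the cheapest host meeting its delay bound.

    services: list of dicts {"name": str, "max_delay_ms": float}
    hosts:    list of dicts {"name": str, "delay_ms": float, "cost": float}
    Returns {service_name: host_name}; raises if no feasible host exists.
    """
    placement = {}
    for svc in services:
        # keep only hosts that satisfy this service's latency requirement
        feasible = [h for h in hosts if h["delay_ms"] <= svc["max_delay_ms"]]
        if not feasible:
            raise ValueError(f"no feasible host for {svc['name']}")
        # among feasible hosts, pick the cheapest (cost focus)
        best = min(feasible, key=lambda h: h["cost"])
        placement[svc["name"]] = best["name"]
    return placement

hosts = [
    {"name": "edge-1",  "delay_ms": 5,  "cost": 10.0},  # low delay, high cost
    {"name": "cloud-1", "delay_ms": 40, "cost": 2.0},   # high delay, low cost
]
services = [
    {"name": "collision-warning", "max_delay_ms": 10},  # must go to the edge
    {"name": "map-update",        "max_delay_ms": 100}, # cheap cloud suffices
]
print(place_services(services, hosts))
# {'collision-warning': 'edge-1', 'map-update': 'cloud-1'}
```

The example exhibits exactly the latency/cost tradeoff the abstract mentions: the tight-deadline service is forced onto the expensive edge host, while the relaxed one lands on the cheap cloud host.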
Due to the limited resources of wireless sensor network nodes, they generally possess weak defense capabilities and are often the target of malware attacks. Attackers can capture or infect specific sensor nodes and pr...
ISBN: (print) 9781728198668
Edge computing has become popular in many applications, such as transportation and healthcare. These applications often use deep learning (DL) prediction, which is highly dependent on time-series data collected by the sensors in edge devices. However, the presence of noise in the on-device sensors negatively affects the sensing output of the DL models. Recently proposed time-series-based DL approaches (e.g., SADeepSense) address this issue under the assumption that, in the presence of noise, the correlation of sensor inputs in an edge device changes. In this paper, through real experiments, we observe that this assumption may not hold in the presence of shot noise. To handle this problem and further improve prediction accuracy, we propose a DL model, NoiseSenseDNN, whose unique architecture more accurately extracts the correlation between different sensor inputs over time in the presence of both shot and white noise. We further propose a compressed version of NoiseSenseDNN that minimizes the inference time and energy consumption of the edge device while meeting the accuracy requirement. Our experiments on a workstation and a real edge device, using three real traces, show that NoiseSenseDNN outperforms SADeepSense in accuracy, and that the compressed NoiseSenseDNN significantly reduces inference time and energy consumption while meeting the required accuracy.
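The distinction the abstract draws between white noise and shot noise can be made concrete with a small, self-contained sketch (not the paper's experiment): two co-located sensors observe the same signal, one is perturbed by continuous white noise and one by rare large spikes (shot noise), and we measure the inter-sensor Pearson correlation in each case. All signal shapes and noise levels are invented for illustration.

```python
import math
import random

random.seed(0)

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Shared underlying signal observed by two co-located sensors.
t = [i / 50 for i in range(500)]
signal = [math.sin(2 * math.pi * x) for x in t]

# White noise: a small Gaussian perturbation on every sample.
white = [s + random.gauss(0, 0.2) for s in signal]

# Shot noise: rare, large spikes on roughly 1% of the samples.
shot = [s + (8.0 if random.random() < 0.01 else 0.0) for s in signal]

# A second, nearly clean sensor reading of the same signal.
other = [s + random.gauss(0, 0.05) for s in signal]

r_white = pearson(white, other)
r_shot = pearson(shot, other)
print(f"correlation under white noise: {r_white:.2f}")
print(f"correlation under shot noise:  {r_shot:.2f}")
```

Because shot noise touches so few samples, it can distort the correlation estimate very differently from continuous white noise, which is the kind of effect the paper's observation hinges on.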
Federated Learning coordinates multiple devices to train a shared model while preserving data privacy. Despite its potential benefits, the increasing number of participating devices poses new challenges to deployment in real-world cases. The highly limited amount of data located on each device, coupled with significantly unbalanced data across different devices, severely impedes both the performance of the shared model and the overall training progress. In this paper, we propose FedHybrid, a hierarchical hybrid training framework for high-performance Federated Learning at wide scale. Unlike existing work that mainly focuses on the statistical challenge, FedHybrid establishes a hierarchical hybrid training framework that effectively utilizes the fragmented and unbalanced data located on participating devices at wide scale. Specifically, FedHybrid consists of two core components: a global coordinator deployed on the central server and a local coordinator deployed on each participating device. The global coordinator organizes the participating devices into different groups by jointly considering system heterogeneity and unbalanced training data, in order to accelerate the overall training progress while guaranteeing model performance. Within each group, a novel device-to-device (D2D) sequential training procedure is coordinated by the local coordinator to effectively utilize the fragmented and unbalanced training data and intelligently update the local models. We also provide a theoretical analysis of FedHybrid and conduct extensive experiments to evaluate its effectiveness. The results show that FedHybrid improves model accuracy by up to 27% and accelerates the whole training process by 20% on average.
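The hierarchical structure described above (server-level grouping plus sequential D2D training within each group) can be sketched with a toy one-parameter model. This is not FedHybrid's implementation; the grouping, data sizes, and learning rate are all invented, and the point is only to show the two-level control flow: groups train sequentially inside, the server averages across groups.

```python
import random

random.seed(1)

def local_update(weight, data, lr=0.1):
    """One SGD pass on the toy model y = w * x with squared loss."""
    for x, y in data:
        grad = 2 * (weight * x - y) * x
        weight -= lr * grad
    return weight

def group_sequential_round(weight, group):
    # D2D sequential training: each device in the group refines the model
    # in turn, passing it to the next device.
    for device_data in group:
        weight = local_update(weight, device_data)
    return weight

def federated_round(weight, groups):
    # Server side: dispatch the model to each group, then average the
    # models the groups return.
    group_models = [group_sequential_round(weight, g) for g in groups]
    return sum(group_models) / len(group_models)

# Unbalanced data: every device observes y = 3x, but with very different
# amounts of data (one device is data-poor).
def make_device(n):
    return [(x, 3 * x) for x in (random.uniform(-1, 1) for _ in range(n))]

groups = [
    [make_device(2), make_device(30)],   # group with a data-poor device
    [make_device(5), make_device(20)],
]

w = 0.0
for _ in range(20):
    w = federated_round(w, groups)
print(f"learned weight ~ {w:.2f}")  # should approach 3.0
```

Sequential training inside a group lets the data-poor device's two samples still contribute without forming its own (noisy) model, which is the intuition behind pooling fragmented data this way.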
The outbreak of coronavirus disease 2019 (COVID-19) has become the worst public health event in the world, threatening the physical and mental health of hundreds of millions of people. However, because of the high survivability of the virus, it is impossible for humans to eliminate it completely. For this reason, it is particularly important to strengthen prevention of virus transmission and to monitor the physical status of the population. Wireless sensors are a key player in the fight against the current global COVID-19 pandemic, where they play an important role in monitoring human health. The Wireless Body Area Network (WBAN) composed of these wireless sensor devices can monitor human health data without interference for a long time, and update the data in almost real time through the Internet of Things (IoT). However, because the volume of data monitored by the devices is relatively large and the transmission distance is long, transmitting the data to medical centers only through personal devices (PB) cannot provide timely feedback. We propose a non-cooperative game-based server placement method, named ESP-19, to improve the efficiency of wireless sensor data transmission. In this paper, experimental tests are conducted based on the distribution of Shanghai Telecom's base stations, and the performance of ESP-19 is evaluated. The results show that the proposed method outperforms the comparison method in terms of service delay.
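The abstract names a non-cooperative game formulation but not its mechanics. A common pattern for such formulations, sketched hypothetically below, is best-response dynamics: each base station (player) repeatedly switches to the edge server that minimizes its own delay (propagation distance plus a load penalty) until no player wants to deviate, i.e., a Nash equilibrium. The delay model and all distances are invented; this is not the ESP-19 algorithm.

```python
# Hypothetical best-response sketch of a non-cooperative server-selection game.
# Not ESP-19; delay model and distances are invented.

def delay(dist, load):
    return dist + 2.0 * load  # delay grows with distance and server load

def best_response_equilibrium(distances, n_servers):
    """distances[i][j] = distance from base station i to server j."""
    choice = [0] * len(distances)              # start: everyone on server 0
    while True:
        loads = [choice.count(j) for j in range(n_servers)]
        changed = False
        for i, d in enumerate(distances):
            cur = choice[i]
            # cost of a unilateral move of player i to server j
            def cost(j):
                load = loads[j] + (0 if j == cur else 1)
                return delay(d[j], load)
            best = min(range(n_servers), key=cost)
            if cost(best) < cost(cur):
                loads[cur] -= 1                # player i switches servers
                loads[best] += 1
                choice[i] = best
                changed = True
        if not changed:                        # no one wants to deviate
            return choice

distances = [
    [1.0, 9.0],   # station 0 is near server 0
    [2.0, 8.0],   # station 1 is near server 0
    [9.0, 1.0],   # station 2 is near server 1
]
print(best_response_equilibrium(distances, 2))
# [0, 0, 1]
```

For singleton congestion games of this shape, best-response dynamics are known to converge, which is why the simple "keep switching until stable" loop terminates.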
In marine meteorological sensor networks (MMSN), massive data flows are transmitted among numerous nodes, with serious potential consequences if hidden anomalous traffic launches an attack. Therefore, accurate identification of and fast response to abnormal traffic are vital for constructing intrusion detection systems (IDS). Dataset imbalances cause classification models to be erroneously biased toward normal traffic, significantly restricting IDS development and application. This paper proposes an approach to deal with dataset imbalances in intrusion detection. The approach mitigates the impact of dataset imbalance on IDSs from the data perspective, processing the input data before it reaches the classification models. In this approach, CVAE-GAN is adopted as the data generation module to synthesize samples of specified minority classes, thus reducing the dataset imbalance rate. Ordering Points To Identify the Clustering Structure (OPTICS) is used as the denoising algorithm to remove outliers and decrease the extent of overlap between majority classes. An experiment on the NSL-KDD dataset demonstrates that the proposed method obtains a high-quality dataset with a reasonable distribution. This approach improves the classifier's ability to identify potential anomalous traffic.
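In practice the OPTICS step would typically use a library implementation such as `sklearn.cluster.OPTICS`. To keep this sketch dependency-free, the snippet below illustrates the same density-based denoising idea with a deliberately simpler stand-in (not OPTICS itself): a point is dropped as an outlier when its distance to its k-th nearest neighbor is far above the typical value, so sparse points outside the dense clusters are removed. Thresholds and data are invented.

```python
import math

# Density-based outlier filtering in the spirit of the OPTICS denoising step.
# This is a simplified stand-in, NOT the OPTICS algorithm; parameters invented.

def kth_nn_distance(points, k):
    """Distance from each point to its k-th nearest neighbor."""
    dists = []
    for i, p in enumerate(points):
        d = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        dists.append(d[k - 1])
    return dists

def remove_outliers(points, k=3, factor=2.5):
    d = kth_nn_distance(points, k)
    typical = sorted(d)[len(d) // 2]          # median k-NN distance
    # keep only points whose local density is comparable to the typical one
    return [p for p, di in zip(points, d) if di <= factor * typical]

# Two tight clusters plus one far-away noise point.
cluster_a = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
cluster_b = [(5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.1, 5.1)]
noise = [(20.0, 20.0)]
kept = remove_outliers(cluster_a + cluster_b + noise)
print(len(kept))  # prints 8: the distant noise point is removed
```

Removing such isolated points before training is what reduces the class-overlap problem the abstract describes, since the stragglers between dense regions are exactly the samples that confuse the classifier.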
Modern realities and the rapid development of digital technologies in the era of Industry 4.0 bring to the fore the task of developing intelligent systems for precision agriculture. Grazing livestock has proven its im...
ISBN: (print) 9780738123943
The evolution of distributed Database Management Systems (DBMSs) has led to heterogeneity in DBMS technologies. In particular, DBMSs applying a shared-nothing approach enable distributed operation and support fine-grained configuration of distribution characteristics such as replication degree and consistency. Overall, operating such DBMSs on IaaS clouds leads to a large configuration space involving different cloud providers, cloud resources, and pricing models. The selection of a specific configuration impacts non-functional features such as performance, availability, and consistency, but also the cost of the deployment. Consequently, these need to be traded off against each other, and a suitable configuration needs to be found that satisfies technical and operational requirements. Yet, due to the strong interdependencies between different non-functional features, as well as the large number of DBMSs, configuration options, and cloud providers, a manual analysis and comparison is not possible. In this paper, we present Hathi, an evaluation-driven Multi-Criteria Decision Making (MCDM) framework for planning cloud-hosted distributed DBMSs. By specifying DBMS configurations, workloads, and cloud offers, Hathi automatically performs experiments and evaluates their results. These are then matched against a list of user-defined preferences using an MCDM algorithm. Our evaluation shows that Hathi is able to perform large-scale evaluation scenarios involving multiple DBMSs in various cluster sizes, cloud providers, and cloud offers. Hathi can weight the resulting data and derive deployment recommendations with respect to throughput, latency, cost, consistency, availability, and stability.
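The abstract does not specify which MCDM algorithm Hathi uses; the simplest member of that family, sketched hypothetically below, is weighted-sum scoring: each measured metric is min-max normalized (inverting metrics where lower is better), multiplied by a user preference weight, and summed to rank configurations. The configuration names, metric values, and weights are invented.

```python
# Minimal weighted-sum MCDM ranking sketch (not Hathi's algorithm).
# All configurations, measurements, and weights are invented.

def normalize(values, higher_is_better):
    """Min-max normalize to [0, 1], flipping metrics where lower is better."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(v - lo) / span if higher_is_better else (hi - v) / span
            for v in values]

def rank(configs, weights, higher_is_better):
    metrics = list(weights)
    cols = {m: normalize([c[m] for c in configs], higher_is_better[m])
            for m in metrics}
    scores = []
    for i, c in enumerate(configs):
        scores.append((sum(weights[m] * cols[m][i] for m in metrics),
                       c["name"]))
    return sorted(scores, reverse=True)   # best configuration first

configs = [
    {"name": "cassandra-3node-aws", "throughput": 9000,  "latency_ms": 12, "cost": 4.0},
    {"name": "cassandra-5node-gcp", "throughput": 14000, "latency_ms": 9,  "cost": 7.5},
    {"name": "mongodb-3node-aws",   "throughput": 7000,  "latency_ms": 15, "cost": 3.0},
]
weights = {"throughput": 0.5, "latency_ms": 0.2, "cost": 0.3}
higher = {"throughput": True, "latency_ms": False, "cost": False}

for score, name in rank(configs, weights, higher):
    print(f"{score:.2f}  {name}")
```

With these (invented) weights the throughput-heavy preference makes the five-node deployment win despite its higher cost; shifting weight onto cost would reorder the ranking, which is exactly the preference-driven tradeoff an MCDM framework exposes.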
Scalability and management cost in cloud computing are among the top challenges for cloud providers and large enterprises. In this paper, we present Arktos, a cloud infrastructure platform for managing large-scale compute clusters and running millions of application instances as containers and/or virtual machines (VMs). Arktos is envisioned as a stepping-stone from current "single-region"-focused cloud infrastructure toward next-generation distributed infrastructure in public and/or private cloud environments. We present details of the Arktos system architecture and features, important design decisions, and the results and analysis of performance benchmark testing. Arktos achieves high scalability by partitioning its architecture into two independent components, the resource partition (RP) and the tenant workload partition (TP), with each component scaling independently. Our performance testing using a benchmark tool demonstrates that Arktos, with just two RPs and two TPs, can already manage a cluster of 50K compute nodes and run 1.5 million workload containers with 5 times the system throughput (QPS) of an existing container management system. Three key characteristics differentiate Arktos from other open-source cloud platforms such as OpenStack and Kubernetes. Firstly, Arktos has a truly scalable architecture that supports very large clusters by scaling to more RPs and TPs. Secondly, it unifies the runtime infrastructure to run and manage both VM and container applications natively, eliminating the cost of maintaining separate technology stacks for VMs and containers. Lastly, Arktos has a unique "virtual cluster" style of multi-tenancy that provides strong tenant isolation, including network isolation, as well as a transparent resource view.
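The RP/TP split described above implies some routing layer that maps tenants to tenant workload partitions and nodes to resource partitions independently, so each axis can scale on its own. The sketch below is purely illustrative (not Arktos code): it hash-partitions the two keys separately, and adding more RPs never requires reshuffling tenants, or vice versa. Partition counts and names are invented.

```python
import hashlib

# Illustrative two-axis partition routing (not Arktos code).
# Tenants hash to TPs; nodes hash to RPs; the two axes are independent.

N_TENANT_PARTITIONS = 2
N_RESOURCE_PARTITIONS = 2

def partition_of(key, n):
    """Stable hash-based partition index for a string key."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % n

def route_workload(tenant, node):
    """Return the (TP, RP) pair responsible for this tenant's workload
    on this node. Scaling one axis leaves the other's mapping untouched."""
    tp = partition_of(tenant, N_TENANT_PARTITIONS)
    rp = partition_of(node, N_RESOURCE_PARTITIONS)
    return f"TP-{tp}", f"RP-{rp}"

print(route_workload("tenant-alice", "node-0042"))
```

Because tenant state and node state live behind separate partition schemes, a benchmark like the one in the paper can grow the node count (more RPs) without touching tenant routing, which is the essence of the claimed independent scaling.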