ISBN (print): 9798350304817
The complexity inherent in managing cloud computing systems calls for novel solutions that can effectively enforce high-level Service Level Objectives (SLOs) promptly. Unfortunately, most current SLO management solutions rely on reactive approaches, i.e., correcting SLO violations only after they have occurred. Further, the few methods that explore predictive techniques to prevent SLO violations focus solely on forecasting low-level system metrics, such as CPU and memory utilization. Although valid in some cases, these metrics do not necessarily provide clear and actionable insights into application behavior. This paper presents a novel approach that directly predicts high-level SLOs from low-level system metrics. We target this goal by training and optimizing two state-of-the-art neural network models, a Long Short-Term Memory (LSTM)-based and a Transformer-based model. Our models provide actionable insights into application behavior by establishing proper connections between the evolution of low-level workload-related metrics and the high-level SLOs. We demonstrate our approach to selecting and preparing the data. We show in practice how to optimize the LSTM and Transformer models by targeting efficiency as a high-level SLO metric and performing a comparative analysis. We show how these models behave when the input workloads come from different distributions and, consequently, demonstrate their ability to generalize in heterogeneous systems. Finally, we operationalize the two models by integrating them into the Polaris framework we have been developing to enable a performance-driven, SLO-native approach to Cloud computing.
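To make the modeling idea concrete, here is a minimal sketch of one plausible setup: an LSTM that maps a window of low-level workload metrics to a single high-level SLO value such as efficiency. The metric count, window length, and layer sizes are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch: predict one high-level SLO value from a window of low-level
# workload metrics with an LSTM. Shapes and sizes are assumptions.
import torch
import torch.nn as nn

class SloLstm(nn.Module):
    def __init__(self, n_metrics=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_metrics, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # one high-level SLO value

    def forward(self, x):                  # x: (batch, window, n_metrics)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # predict SLO from last step

model = SloLstm()
window = torch.randn(8, 30, 4)             # 8 samples, 30 steps, 4 metrics
predicted_slo = model(window)              # e.g., forecast efficiency
```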
ISBN (print): 9798350333398
Incentive mechanisms are increasingly needed in opportunistic networks that contain nodes with selfish behavior. To this end, credit-based mechanisms need a virtual bank (a central entity) to manage the incentive. However, maintaining this central entity in an opportunistic network may be challenging. Therefore, we propose a credit incentive mechanism in this paper. The main contribution is that the mechanism does not use a virtual bank to distribute credits (the reward for forwarding messages) but rather a decentralized approach. In addition, we propose a mathematical model of the collection and distribution of credits to avoid Edge Insertion attacks in some instances. Finally, the proposed mechanism was evaluated through simulation using real mobility traces and different routing protocols, and its performance was compared with that of the RELICS incentive mechanism. The obtained results show that the proposed mechanism is promising in diminishing the occurrence of selfish nodes.
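A minimal sketch of the bank-less idea, assuming a simple pay-per-hop rule: the sender settles the forwarding reward directly with the relay, so no central entity is involved. The node state and reward size are hypothetical; the paper's actual credit accounting and Edge Insertion defenses are more involved.

```python
# Sketch: decentralized credit settlement on message forwarding.
# Node fields and the reward rule are illustrative assumptions.
class Node:
    def __init__(self, node_id, credits=10):
        self.node_id = node_id
        self.credits = credits

def forward(sender, relay, reward=1):
    """Sender pays the relay directly when the relay accepts a message,
    so no virtual bank is needed to settle the reward."""
    if sender.credits < reward:
        return False               # a depleted sender cannot buy forwarding
    sender.credits -= reward
    relay.credits += reward
    return True

a, b = Node("A"), Node("B")
assert forward(a, b)               # A pays B one credit for relaying
```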
ISBN (print): 9783031786976
The proceedings contain 15 papers. The special focus in this conference is on Cloud Computing and Artificial Intelligence: Technologies and Applications. The topics include: CAReNet: A Promising AI Architecture for Low Data Regime Mixing Convolutions and Attention; Designing Converged Middleware for HPC, AI, and Big Data: Challenges and Opportunities; Key Mechanisms and Emerging Issues in Cloud Identity Systems; Power Consumption in HPC-AI Systems; Integrated Architecture for Cloud and IoT with Logical Sensors and Actuators - Logical IoT Cloud; The Need for HPC in AI Solutions; Facts and Issues of Neural Networks for Numerical Simulation; Scalable Deep Learning for Industry 4.0: Speedup with Distributed Deep Learning and Environmental Sustainability Considerations; Multi-domain Dataset for Moroccan Arabic Dialect Sentiment Analysis in Social Networks; On the Challenges of Migrating to Microservices Architectures for Better Cloud Solutions; Missing Data Imputation Approach for IoT Using Machine Learning; Holistic Approach for Enhanced Object Recognition in Complex Environments; Analyzing Sentiment in Arabic Tweets: A Study Using Machine Learning and Deep Learning Techniques.
ISBN (digital): 9781665471770
ISBN (print): 9781665471770
In recent years, data-driven intelligent transportation systems (ITS) have developed rapidly and brought various AI-assisted applications to improve traffic efficiency. However, these applications are constrained by their inherently high computing demand and the limited computing power of vehicles. Vehicular edge computing (VEC) has shown great potential to support these applications by providing computing and storage capacity in close proximity. Given the heterogeneous nature of in-vehicle applications and the highly dynamic network topology in the Internet-of-Vehicles (IoV) environment, achieving efficient scheduling of computational tasks is a critical problem. Accordingly, we design a two-layer distributed online task scheduling framework to maximize the task acceptance ratio (TAR) under various QoS requirements when facing unbalanced task distributions. Briefly, we implement computation offloading and transmission scheduling policies for the vehicles to optimize onboard computational task scheduling. Meanwhile, in the edge computing layer, a new distributed task dispatching policy is developed to maximize the utilization of system computing power and minimize the data transmission delay caused by vehicle motion. Through single-vehicle and multi-vehicle simulations, we evaluate the performance of our framework, and the experimental results show that our method outperforms state-of-the-art algorithms. Moreover, we conduct ablation experiments to validate the effectiveness of our core algorithms.
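As an illustration of the acceptance-ratio objective, the sketch below makes a deadline-driven local-versus-edge offloading decision and counts accepted tasks toward the TAR. The latency model and all rates are simplified assumptions, not the framework's actual policies.

```python
# Sketch: accept a task locally or at the edge if either meets its
# deadline; rejected tasks lower the task acceptance ratio (TAR).
def schedule(task, local_rate, edge_rate, uplink_rate):
    """task = (cpu_cycles, input_bits, deadline_s); returns a decision."""
    cycles, bits, deadline = task
    t_local = cycles / local_rate                     # compute on board
    t_edge = bits / uplink_rate + cycles / edge_rate  # upload + compute
    best, where = min((t_local, "local"), (t_edge, "edge"))
    return where if best <= deadline else "reject"

tasks = [(2e9, 4e6, 0.5), (8e9, 1e6, 0.2)]            # illustrative tasks
decisions = [schedule(t, 1e10, 5e10, 2e7) for t in tasks]
tar = sum(d != "reject" for d in decisions) / len(tasks)
```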
The rapid rise in spatial data volumes from diverse sources necessitates efficient spatial data processing capability. Although most relational databases support spatial extensions of SQL query features, they offer lim...
By combining the idea of pipelining with model parallelism and data parallelism, pipeline parallelism significantly improves the efficiency of distributed deep learning systems. However, suffering...
ISBN (print): 9798350372977; 9798350372984
Machine Learning (ML) is one of the effective security approaches for building cyber-attack detection systems in Wireless Sensor Networks (WSNs). ML leverages the power of data analysis and pattern recognition to detect and classify various types of cyber-attacks and thereby enhance the security of WSNs. A well-constructed dataset is one of the key factors that significantly impact the performance and generalization capability of any ML classifier trained on it. In this paper, we evaluate the effectiveness of two datasets, WSN-DS and WSN-BFSF, which are specialized for Denial-of-Service (DoS) attacks targeting WSNs. We compare the two datasets in terms of their key characteristics, dataset quality, and ML classification performance. Mutual Information (MI) and Recursive Feature Elimination (RFE) are used for feature selection. The dataset quality is measured using statistical information calculations. The ML classification performance is investigated for three supervised ensemble techniques, LightGBM, bagging, and stacking, using evaluation metrics including probability of detection, probability of false alarm, probability of misdetection, classification accuracy, model size, and processing time.
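A minimal sketch of such an evaluation pipeline, with RFE feature selection feeding a stacking ensemble. Synthetic data stands in for WSN-DS/WSN-BFSF, which must be loaded separately, and the estimator choices are illustrative (the paper also uses MI and LightGBM).

```python
# Sketch: RFE feature selection + a stacking ensemble, evaluated on
# synthetic stand-in data for the WSN DoS datasets.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=2000, n_features=18, n_informative=8)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3)

clf = make_pipeline(
    # keep the 10 most useful features, ranked by forest importances
    RFE(RandomForestClassifier(n_estimators=50), n_features_to_select=10),
    StackingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=100))],
        final_estimator=LogisticRegression(),
    ),
)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```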
In-sensor computing has emerged as a promising approach to mitigating huge data transmission costs between sensors and processing units. Recently, the emerging application scenarios have raised more demands of sensory...
ISBN (digital): 9781665471770
ISBN (print): 9781665471770
In the pooled data problem we are given a set of n agents, each of which holds a hidden state bit, either 0 or 1. A querying procedure returns, for a query set, the sum of the states of the queried agents. The goal is to reconstruct the states using as few queries as possible. In this paper we consider two noise models for the pooled data problem. In the noisy channel model, the state of each agent flips with a certain probability. In the noisy query model, each query result is subject to random Gaussian noise. Our results are twofold. First, we present and analyze, for both noise models, a simple and efficient distributed algorithm that reconstructs the initial states in a greedy fashion. Our novel analysis pins down the range of error probabilities and distributions for which our algorithm reconstructs the exact initial states with high probability. Second, we present simulation results of our algorithm and compare its performance with approximate message passing (AMP) algorithms that are conjectured to be optimal in a number of related problems.
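To make the noisy query model concrete, the sketch below simulates random query sets with Gaussian noise on the answers and recovers the bits with a toy include-versus-exclude decoder. It is a stand-in for intuition only, not the paper's greedy algorithm or AMP.

```python
# Sketch of the pooled data problem under the noisy query model:
# each answer is the sum of the queried agents' bits plus Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 100, 2000, 0.5
states = rng.integers(0, 2, n)             # hidden bits of the n agents
queries = rng.integers(0, 2, (m, n))       # random query sets (one per row)
answers = queries @ states + rng.normal(0, sigma, m)

# Toy decoder: compare the mean answer of queries that include agent i
# against those that exclude it; a set bit raises the mean by about 1.
est = np.zeros(n, dtype=int)
for i in range(n):
    inside = answers[queries[:, i] == 1].mean()
    outside = answers[queries[:, i] == 0].mean()
    est[i] = int(inside - outside > 0.5)

print("fraction of bits recovered:", (est == states).mean())
```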
ISBN (digital): 9781665481403
ISBN (print): 9781665481403
Sensor data is of crucial importance in many IoT scenarios. It is used for online monitoring as well as long-term data analytics, enabling countless use cases from damage prevention to predictive maintenance. Multivariate sensor time series data is acquired and initially stored close to the sensor, at the edge. It is also beneficial to summarize this data in windowed aggregations at different resolutions. A subset of the resulting aggregation hierarchy is typically sent to a cloud infrastructure, often via intermittent or low-bandwidth connections. Consequently, different views on the data exist on different nodes in the edge-to-cloud continuum. However, when querying this data, users are interested in a fast response and a complete, unified view of the data, regardless of which part of the infrastructure continuum they send the query to and where the data is physically stored. In this paper, we present a loosely coupled approach that enables fast range queries on a distributed and hierarchical sensor database. Our system only assumes the possibility of fast local range queries on a hierarchical sensor database. It does not require any shared state between nodes and thus degrades gracefully in case certain parts of the hierarchy are unreachable. We show that our system is suitable for driving interactive data exploration sessions on terabytes of data while unifying the different views on the data. Thus, our system can improve the data analysis experience in many geo-distributed scenarios.
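One way to picture the unified view is a fan-out range query that merges windowed aggregates from whichever nodes answer, preferring the finest available resolution and skipping unreachable nodes. The node layout and merge rule below are illustrative assumptions, not the paper's protocol.

```python
# Sketch: merge range-query results from nodes holding windowed
# aggregates at different resolutions, degrading gracefully when
# a node is unreachable.
def query_node(node, start, end):
    """Return (timestamp, value, resolution_s) tuples in [start, end),
    or None if the node is unreachable."""
    if not node["reachable"]:
        return None
    return [(t, v, node["res"]) for t, v in node["agg"]
            if start <= t < end]

def range_query(nodes, start, end):
    best = {}                              # timestamp -> (value, res)
    for node in nodes:
        for t, v, res in query_node(node, start, end) or []:
            if t not in best or res < best[t][1]:
                best[t] = (v, res)         # prefer the finer resolution
    return sorted((t, v) for t, (v, _) in best.items())

edge = {"reachable": True, "res": 60,      # minutely aggregates at the edge
        "agg": [(0, 1.0), (60, 1.2), (120, 0.9)]}
cloud = {"reachable": True, "res": 3600,   # hourly rollup in the cloud
         "agg": [(0, 1.05)]}
print(range_query([edge, cloud], 0, 7200))
```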