Combining cell-free massive multiple-input multiple-output (CF-mMIMO) and mobile edge computing (MEC) facilitates the processing of compute-intensive and latency-sensitive tasks in the distributed IoT. For MEC-enabled...
ISBN: (print) 9781713871088
Biological and artificial agents need to deal with constant changes in the real world. We study this problem in four classical continuous control environments, augmented with morphological perturbations. Learning to locomote when the length and the thickness of different body parts vary is challenging, as the control policy is required to adapt to the morphology to successfully balance and advance the agent. We show that a control policy based on the proprioceptive state performs poorly with highly variable body configurations, while an (oracle) agent with access to a learned encoding of the perturbation performs significantly better. We introduce DMAP, a biologically-inspired, attention-based policy network architecture. DMAP combines independent proprioceptive processing, a distributed policy with individual controllers for each joint, and an attention mechanism, to dynamically gate sensory information from different body parts to different controllers. Despite not having access to the (hidden) morphology information, DMAP can be trained end-to-end in all the considered environments, overall matching or surpassing the performance of an oracle agent. Thus DMAP, implementing principles from biological motor control, provides a strong inductive bias for learning challenging sensorimotor tasks. Overall, our work corroborates the power of these principles in challenging locomotion tasks. The code is available at the following link: DMAP
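The core idea of DMAP, as the abstract describes it, is an attention mechanism that dynamically gates sensory information from different body parts to independent per-joint controllers. The following is a minimal sketch of such gating, not the authors' implementation; the shapes, the random features, and the per-controller query vectors are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical shapes: B body-part feature vectors of size D, J joint controllers.
rng = np.random.default_rng(0)
B, D, J = 4, 8, 4
sensory = rng.normal(size=(B, D))   # independently processed proprioceptive features
queries = rng.normal(size=(J, D))   # stand-in for each controller's learned query

scores = queries @ sensory.T        # (J, B): each controller scores every body part
gates = softmax(scores, axis=1)     # per-controller gating weights over body parts
inputs = gates @ sensory            # (J, D): controller-specific gated inputs

assert inputs.shape == (J, D)
assert np.allclose(gates.sum(axis=1), 1.0)
```

Each controller thus receives its own convex mixture of body-part features, which is what lets the gating pattern adapt when the morphology changes.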
ISBN: (print) 9781538674628
In this work, we introduce an edge intelligence layer into the traditional Safety-as-a-Service (Safe-aaS) platform for Social Internet of Vehicles (SIoV) networks, to minimize the network latency incurred in the delivery of decisions. The prior announcement of safety-related information in a social IoV environment reduces the rate of accidents to a significant extent. Safe-aaS provides customized safety-related decisions dynamically to end-users. On the other hand, the timely delivery of accurate decisions to end-users in a social IoV network is a challenging task. We introduce the concept of edge servers in the edge layer of Safe-aaS, such that the bandwidth required for uploading data is minimized, and the problems associated with processing, storage, and complex analysis of data are eliminated. We apply an artificial neural network (ANN) at the edge nodes to select the appropriate edge server, and fuzzy logic at the edge server side for the generation of a decision. Here, the social entities are not humans; rather, vehicles, distributed edge servers, and cloud servers all act as intelligent objects. Extensive simulation of our proposed architecture demonstrates that the computing density of the edge servers is normally distributed. Additionally, we analyze the classification of the edge servers using training data obtained from the edge nodes, and the network is tested with the test dataset. A fuzzified decision-generation method is applied at the edge server. Extensive simulation results demonstrate that the delay incurred in the delivery of a decision is reduced by 90.58% after the introduction of the edge intelligence layer.
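The abstract's fuzzy-logic stage at the edge server can be illustrated with a toy Mamdani-style inference step. The membership functions, the speed input, and the rule outputs below are invented for illustration only; the paper's actual rule base is not given in the abstract.

```python
def tri(x, a, b, c):
    """Triangular membership: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical rule base: vehicle speed (km/h) -> warning level in [0, 1]
speed = 55.0
mu_low = tri(speed, -1, 0, 60)      # membership in "low speed"
mu_high = tri(speed, 50, 120, 121)  # membership in "high speed"

# Weighted-average defuzzification: "low speed -> mild warning (0.2)",
# "high speed -> strong warning (0.8)"
warning = (mu_low * 0.2 + mu_high * 0.8) / (mu_low + mu_high)
assert 0.2 < warning < 0.8   # both rules fire, so the output blends them
```

The defuzzified `warning` is the kind of crisp, customized decision the edge server would push to end-users.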
We propose a throughput-optimal biased backpressure (BP) algorithm for routing, where the bias is learned through a graph neural network that seeks to minimize end-to-end delay. Classical BP routing provides a simple ...
The complexity and dynamicity of microservice architectures in cloud environments present substantial challenges to the reliability and availability of the services built on these architectures. Therefore, effective anomaly detection is crucial to prevent impending failures and resolve them promptly. Distributed data analysis techniques based on machine learning (ML) have recently gained attention in detecting anomalies in microservice systems. ML-based anomaly detection techniques mostly require centralized data collection and processing, which may raise scalability and computational issues in practice. In this paper, we propose an Asynchronous Real-Time Federated Learning (ART-FL) approach for anomaly detection in cloud-based microservice systems. In our approach, edge clients perform real-time learning with continuously streaming local data. At the edge clients, we model intra-service behaviors and inter-service dependencies in multi-source distributed data based on a Span Causal Graph (SCG) representation and train a model through a combination of a graph neural network (GNN) and Positive and Unlabeled (PU) learning. Our FL approach updates the global model in an asynchronous manner to achieve accurate and efficient anomaly detection, addressing computational overhead across diverse edge clients, including those that experience delays. Our trace-driven evaluations indicate that the proposed method outperforms state-of-the-art anomaly detection methods by 4% in terms of F1-score while meeting the given time efficiency and scalability requirements.
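A common way to realize the asynchronous, delay-tolerant aggregation the abstract describes is staleness-discounted mixing of client updates into the global model. The sketch below is a generic illustration of that idea under assumed names (`async_aggregate`, `base_mix`), not the paper's ART-FL update rule.

```python
import numpy as np

def async_aggregate(global_w, client_w, staleness, base_mix=0.5):
    # Staleness-discounted mixing: a client update that arrives `staleness`
    # rounds late is blended into the global model with a smaller weight.
    alpha = base_mix / (1.0 + staleness)
    return (1.0 - alpha) * global_w + alpha * client_w

global_w = np.zeros(3)
fresh = async_aggregate(global_w, np.ones(3), staleness=0)   # alpha = 0.5
stale = async_aggregate(global_w, np.ones(3), staleness=4)   # alpha = 0.1
assert fresh[0] > stale[0]   # fresher updates move the global model more
```

Because each client update is applied as soon as it arrives, no client blocks the round, which is what makes the scheme robust to slow or delayed edge clients.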
Condition monitoring of mining conveyors is highly essential, not only to reduce economic loss due to sudden component breakdown but also to prevent safety risks to maintenance personnel. Vibration measurement by means of Distributed Optical Fibre Sensors (DOFS) is a potential solution for cost-effective real-time monitoring of mining conveyors. However, effective detection of early damage and damage progression depends mainly on the collection quality of the optical signal and on robust signal processing methods. Therefore, this study focuses on the identification of an effective signal processing method for real-time monitoring of mining conveyor systems. A prototype conveyor system with various levels of idler damage was built to run experimental tests and to collect optical signals for further analysis. A wavelet-based damage detection method is proposed, and its efficiency is evaluated. Additionally, an artificial neural network (ANN) is integrated to develop a smart fault detection technology for fast detection and damage classification. The proposed method achieves 99% accuracy in classifying damage conditions under varying conveyor operating speeds.
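The principle behind wavelet-based damage detection is that impulsive fault events concentrate energy in the high-frequency detail coefficients of the transform. The toy below uses a one-level Haar DWT on synthetic signals to show this; the actual wavelet, decomposition depth, and signals used in the study are not stated in the abstract.

```python
import numpy as np

def haar_detail_energy(signal):
    # One-level Haar DWT: detail coefficients capture sharp, high-frequency
    # events such as impacts from a damaged idler.
    s = np.asarray(signal, dtype=float)
    s = s[: len(s) - len(s) % 2]          # even length required for pairing
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return float(np.sum(detail ** 2))

t = np.linspace(0.0, 1.0, 256)
healthy = np.sin(2 * np.pi * 5 * t)       # smooth baseline vibration
damaged = healthy.copy()
damaged[::32] += 1.0                      # periodic impulses from a fault
assert haar_detail_energy(damaged) > haar_detail_energy(healthy)
```

Thresholding (or feeding) such detail energies into an ANN classifier is one plausible way to combine the two stages the abstract mentions.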
Graph neural networks (GNNs) have evolved as powerful models for graph representation learning. Many works have been proposed to support GNN training efficiently on GPUs. However, these works only focus on a single GNN training task, such as operator optimization, task scheduling, and programming models. Concurrent GNN training, which is needed in applications such as neural network structure search, has not been explored yet. This work aims to improve the training efficiency of concurrent GNN training tasks on a GPU by developing fine-grained methods to fuse the kernels from different tasks. Specifically, we propose a fine-grained Sparse Matrix Multiplication (SpMM) based kernel fusion method to eliminate redundant accesses to graph data. In order to increase the fusion opportunity and reduce the synchronization cost, we further propose a novel technique to enable the fusion of kernels in forward and backward propagation. Finally, in order to reduce the resource contention caused by the increased number of concurrent, heterogeneous GNN training tasks, we propose an adaptive strategy to group the tasks and match their operators according to resource contention. We have conducted extensive experiments, including kernel- and model-level benchmarks. The results show that the proposed methods can achieve up to a 2.6x performance speedup.
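The redundancy the SpMM fusion targets is that concurrent tasks traverse the same sparse adjacency. A scalar sketch of the idea, with a single pass over the nonzeros feeding two tasks' outputs, is shown below; the COO representation, function name, and toy graph are assumptions, and the real work operates on fused GPU kernels rather than Python loops.

```python
import numpy as np

def fused_spmm(rows, cols, vals, n_rows, H1, H2):
    # Fused SpMM: a single pass over the sparse adjacency feeds both
    # concurrent tasks, so each nonzero is loaded once instead of twice.
    out1 = np.zeros((n_rows, H1.shape[1]))
    out2 = np.zeros((n_rows, H2.shape[1]))
    for r, c, v in zip(rows, cols, vals):
        out1[r] += v * H1[c]
        out2[r] += v * H2[c]
    return out1, out2

# Toy 3-node graph in COO form, with feature matrices for two training tasks
rows, cols, vals = [0, 1, 2, 2], [1, 2, 0, 1], [1.0, 2.0, 0.5, 1.5]
H1 = np.arange(6.0).reshape(3, 2)
H2 = np.ones((3, 2))
out1, out2 = fused_spmm(rows, cols, vals, 3, H1, H2)

# Reference: the dense products A @ H1 and A @ H2 give the same results
A = np.zeros((3, 3))
A[rows, cols] = vals
assert np.allclose(out1, A @ H1) and np.allclose(out2, A @ H2)
```

On a GPU the saving comes from reading the adjacency indices and values from memory once per fused kernel rather than once per task.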
Social media, with its immediacy and convenience, has become an important channel for people to exchange information. However, this freedom of information dissemination also provides a breeding ground for the spread o...
In recent years, the frequent occurrence of phishing scams on Ethereum has posed serious threats to transaction security and the financial safety of users. This paper proposes an Ethereum phishing scam detection metho...
In point cloud video recognition (PVR) tasks, deep neural networks (DNNs) have been widely adopted to enhance accuracy. However, real-time processing is hindered by the increasing volume of points and frames that require processing. Point clouds represent 3D-shaped discrete objects using a multitude of points. Consequently, these points often exhibit an uneven distribution in the view space, resulting in strong spatial similarity within each point cloud frame. Taking advantage of this observation, this article introduces PRADA, a Point Cloud Recognition Acceleration algorithm via Dynamic Approximation. PRADA approximates and eliminates the computations of similar local pairs and recovers their results by copying dissimilar local pairs' features, for speedup with negligible accuracy loss. Furthermore, considering that the slow changes in point cloud frames lead to high temporal similarity among points across multiple frames, we design PointV, a Point Cloud Video Recognition Acceleration algorithm, to minimize unnecessary computations of similar points in the temporal domain. Moreover, we propose the PRADA and PointV architectures to accelerate the PRADA and PointV algorithms; these two architectures can be integrated to gain higher performance improvement. Our experiments on a wide variety of datasets show that PRADA achieves about 7x speedup on average over a 1080Ti GPU. In addition, the experimental results show that the PointV architecture and the integrated architecture can respectively achieve 11.7x and 13.9x performance improvements with acceptable accuracy compared to the 1080Ti GPU.
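The spatial-similarity exploitation the abstract describes can be illustrated with a generic approximation scheme: group nearby points, compute an expensive feature once per group, and copy it to the rest. The grouping criterion (uniform spatial cells), the cell size, and all names below are illustrative assumptions, not PRADA's actual similarity test.

```python
import numpy as np

def approx_features(points, feature_fn, cell=0.5):
    # Similarity-based approximation: points falling in the same spatial
    # cell are treated as similar, so the feature is computed once per
    # occupied cell and copied to the remaining points.
    keys = np.floor(points / cell).astype(int)
    feats = np.empty(len(points))
    cache, computed = {}, 0
    for i, key in enumerate(map(tuple, keys)):
        if key not in cache:
            cache[key] = feature_fn(points[i])
            computed += 1
        feats[i] = cache[key]
    return feats, computed

# Clustered toy point cloud: many points, few occupied cells
rng = np.random.default_rng(1)
points = np.repeat(rng.uniform(0, 5, size=(8, 3)), 16, axis=0)
points += rng.normal(scale=0.01, size=points.shape)   # tight clusters
feats, computed = approx_features(points, lambda p: float(np.linalg.norm(p)))
assert computed < len(points)   # far fewer feature computations than points
```

PointV applies the same idea along the temporal axis: points that barely move between frames reuse the features computed in an earlier frame.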