Authors:
Du, Ruizhong; Gao, Yan (Hebei Univ, Sch Cyber Secur & Comp, Hebei Key Lab Highly Trusted Informat Syst, Baoding, Peoples R China)
Details
ISBN:
(digital) 9781665402620
ISBN:
(print) 9781665402620
Under mobile edge computing (MEC), collaborative computing is a new paradigm for optimizing edge network load and server resource allocation. However, issues such as resource imbalance and unreliable identity remain when collaborative nodes provide computing and caching services for computing devices. Therefore, an efficient and reliable MEC network should be established to manage these challenges. In this paper, a dynamic trusted collaboration (DTC) scheme is proposed. Specifically, transaction supervision of nodes in the MEC scenario is first realized through blockchain-based mobile edge computing (BMEC) technology. In BMEC, a management scheme is established, laying the foundation for building a trusted MEC environment. Second, a collaborative model is designed and formulated as an optimization problem over computing, communication, caching, and energy consumption. The formulated problem is shown to be NP-complete, so an improved heuristic algorithm is adopted in this study to solve it. Ultimately, the experimental results demonstrate that DTC not only performs well but also effectively enhances the reliability of the MEC network compared with other cooperative schemes.
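The kind of joint computing/communication/caching/energy optimization this abstract describes is often tackled with a greedy heuristic. The following is a minimal illustrative sketch only; the paper's actual DTC algorithm, cost weights, and node attributes are not given here, so every field name and weight below is a hypothetical stand-in.

```python
# Hypothetical greedy heuristic for collaborative node assignment in MEC.
# Weights and node/task fields are illustrative assumptions, not the
# paper's DTC formulation.

def assign_tasks(tasks, nodes, w_compute=0.4, w_comm=0.3, w_cache=0.2, w_energy=0.1):
    """Greedily map each task to the node with the lowest weighted cost."""
    assignment = {}
    for task in tasks:
        def cost(node):
            return (w_compute * task["cycles"] / node["cpu"]          # computing delay
                    + w_comm * task["data"] / node["bandwidth"]       # transfer delay
                    + w_cache * (0.0 if task["content"] in node["cache"] else 1.0)  # cache miss
                    + w_energy * node["energy_per_cycle"] * task["cycles"])         # energy
        best = min(nodes, key=cost)
        assignment[task["id"]] = best["id"]
        best["cpu"] = max(best["cpu"] * 0.8, 1e-9)  # crude load penalty on the chosen node
    return assignment
```

The load penalty keeps the greedy pass from piling every task onto the single cheapest node, which is the resource-imbalance symptom the abstract mentions.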
Details
ISBN:
(print) 9798350363074; 9798350363081
In the evolving Artificial Intelligence (AI) era, the need for real-time algorithm processing in marine edge environments has become a crucial challenge. Data acquisition, analysis, and processing in complex marine situations require sophisticated and highly efficient platforms. This study optimizes real-time operations on a containerized distributed processing platform designed for Autonomous Surface Vehicles (ASVs) to help safeguard the marine environment. The primary objective is to improve the efficiency and speed of data processing by adopting a microservice management system called DataX. DataX leverages containerization to break down operations into modular units, with resource coordination based on Kubernetes. This combination of technologies enables more efficient resource management and real-time operations optimization, contributing significantly to the success of marine missions. The platform was developed to address the unique challenges of managing data and running advanced algorithms in a marine context, which often involves limited connectivity, high latencies, and energy restrictions. Finally, as a proof of concept to justify this platform's evolution, experiments were carried out using a cluster of GPU-equipped single-board computers running an AI-based marine litter detection application, demonstrating the tangible benefits of this solution and its suitability for the needs of maritime missions.
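The "break operations into modular units" idea can be pictured as independent workers exchanging data over queues, analogous to containers exchanging streams. This is a generic sketch of that decomposition pattern, not DataX's real API; the stage functions and names are assumptions.

```python
# Minimal pipeline of modular processing units connected by queues,
# illustrating the microservice-style decomposition described above.
# Stage behavior and names are hypothetical.
from queue import Queue
from threading import Thread

def stage(fn, inbox, outbox):
    """Run one modular unit: consume items, apply fn, forward results."""
    while True:
        item = inbox.get()
        if item is None:          # poison pill: shut down and propagate
            outbox.put(None)
            break
        outbox.put(fn(item))

def run_pipeline(frames, fns):
    """Push frames through a chain of stages and collect the outputs."""
    queues = [Queue() for _ in range(len(fns) + 1)]
    threads = [Thread(target=stage, args=(fn, queues[i], queues[i + 1]))
               for i, fn in enumerate(fns)]
    for t in threads:
        t.start()
    for f in frames:
        queues[0].put(f)
    queues[0].put(None)
    results = []
    while (item := queues[-1].get()) is not None:
        results.append(item)
    for t in threads:
        t.join()
    return results
```

In the real platform each stage would be a container scheduled by Kubernetes rather than a thread, but the dataflow contract is the same.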
Details
ISBN:
(print) 9798331541378
In a distributed quantum computation, a large quantum circuit gets sliced into sub-circuits that must be executed at the same time on a quantum computing cluster. The interactions between the sub-circuits are usually defined in terms of non-local gates that require shared entangled pairs and classical communication between different nodes. Assuming that multiple end users submit distributed quantum computing (DQC) jobs to the cluster, an execution management problem arises. This is actually a parallel job scheduling problem, in which a set of jobs of varying processing times must be scheduled on multiple machines while trying to minimize the length of the schedule. In a previous work, we started investigating the problem considering random circuits and approximating the length of each DQC job with the number of layers of the circuit. In this work, we advance the study by considering a more realistic model for estimating DQC job lengths and by performing evaluations with circuits of practical interest.
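The parallel job scheduling problem the abstract names is classically approximated by the Longest-Processing-Time-first (LPT) heuristic: sort jobs by decreasing length and always assign to the least loaded machine. This sketch illustrates that baseline with abstract job lengths (e.g., estimated circuit layers); it is not the paper's scheduler.

```python
# LPT heuristic for makespan minimization: an illustrative baseline for
# scheduling DQC jobs of estimated lengths onto cluster machines.
import heapq

def lpt_schedule(job_lengths, num_machines):
    """Return (makespan, machine index assigned to each job)."""
    heap = [(0.0, m) for m in range(num_machines)]  # (current load, machine)
    heapq.heapify(heap)
    order = sorted(range(len(job_lengths)), key=lambda j: -job_lengths[j])
    assignment = [None] * len(job_lengths)
    for j in order:                      # longest jobs first
        load, m = heapq.heappop(heap)    # least loaded machine
        assignment[j] = m
        heapq.heappush(heap, (load + job_lengths[j], m))
    makespan = max(load for load, _ in heap)
    return makespan, assignment
```

LPT is a 4/3-approximation for this problem, which is why it is a common reference point before bringing in job-length models specific to the workload.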
To improve the task scheduling efficiency in computing power networks, this paper proposes a global task scheduling method based on network measurement and prediction in computing power networks (GTS-MP), which select...
Details
Authors:
Hu, Yao (Keio University, Research Institute for Digital Media and Content, Hiyoshi Campus, Yokohama 223-8523, Japan)
A random walk is a process in which a random walker takes consecutive steps in space at equal intervals of time, with the length and direction of each step determined independently. Models related to random walks have...
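The definition above (equal time steps, each step's length and direction drawn independently) can be made concrete with the simplest case, a symmetric one-dimensional walk; this minimal sketch is illustrative and not from the paper.

```python
# Symmetric 1D random walk: at each step the walker moves +1 or -1
# with equal probability, independently of previous steps.
import random

def random_walk(n_steps, seed=0):
    """Return the walker's positions, starting at the origin."""
    rng = random.Random(seed)
    position = 0
    path = [position]
    for _ in range(n_steps):
        position += rng.choice((-1, 1))
        path.append(position)
    return path
```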
Details
Large-scale deployment of Internet of Things (IoT) devices provides efficient data collection and control capabilities in the smart grid, while edge computing plays a key role in increasing the speed of data processin...
Details
Details
ISBN:
(print) 9781665457194
The advent of the Internet of Things (IoT) and machine-to-machine (M2M) communication provides a system for collecting and manipulating big data and a platform for sensing, actuating, and automating the environment. IoT, M2M communication, social networking, and mass multimedia severely strain the communication infrastructure, so archaic communication frameworks require improvement. One such improvement is the simultaneous use of parallel communication links across differing radio access networks. This paper presents a Machine Learning (ML) optimization for link selection and use in CoopNet, a horizontal programmable communication architecture. Programmable networking paves the way for advancing communication to improve performance, reliability, security, and policy-based applications, including network decoupling. The ML implementation in CoopNet improves throughput by over 17% and delay by 10%, and reduces individual link utilization.
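Learning-based link selection across parallel radio links can be pictured as a multi-armed bandit: explore links occasionally, otherwise exploit the one with the best observed throughput. This epsilon-greedy sketch is a generic stand-in under that assumption, not CoopNet's actual ML model.

```python
# Epsilon-greedy bandit over parallel radio links, selecting by observed
# mean throughput. A generic illustration, not CoopNet's implementation.
import random

class LinkSelector:
    def __init__(self, num_links, epsilon=0.1, seed=0):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.counts = [0] * num_links
        self.means = [0.0] * num_links   # running mean throughput per link

    def select(self):
        """Explore a random link with probability epsilon, else exploit."""
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.means))
        return max(range(len(self.means)), key=lambda i: self.means[i])

    def update(self, link, throughput):
        """Fold a new throughput observation into the link's running mean."""
        self.counts[link] += 1
        self.means[link] += (throughput - self.means[link]) / self.counts[link]
```

A real deployment would feed richer features (delay, loss, utilization) into the policy, matching the delay and utilization gains the abstract reports.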
One of the fastest-growing research areas is the recognition of sign language. In this field, many novel techniques have lately been created. People who are deaf-dumb primarily communicate using sign language. Real-ti...
Details
ISBN:
(print) 9798350363999; 9798350364002
Deep learning (DL) training or retraining on an edge computing network is promising due to its local computation advantage over the cloud. Data- and model-parallel DL training is a solution to the challenges posed by the large and increasing scale of deep neural network (DNN) models and the resource constraints of edge devices. However, the training accuracy achieved by a given time is greatly affected, or the training process slows down, if one model partition experiences faults due to causes such as edge node mobility, resource unavailability, and hardware errors. Though task replication strategies have been used in wireless networks and in the TensorFlow machine learning framework, they cannot handle the unique features of DL jobs (i.e., large amounts of intermediate results, high accuracy requirements, and the DL job dataflow) in such DL training. To address this problem, we propose a proactive Fault-Tolerant data- and model-parallel DL system at the Edge (FTLE) to improve DL training time and the accuracy achieved by a given time. FTLE proactively calculates and predicts the fault probability of each model partition using a DL method. It then decides the number of replicas of each partition based on its fault probability, its significance for accuracy, and its importance in the DL job dataflow. FTLE then assigns the replicas to edge devices and schedules a replica to run before each predicted fault occurs, following the DL job dataflow, in order to reduce training time and replication overhead. Our trace-driven experiment on real devices shows that for one-epoch training, FTLE achieves around a 59% reduction in training time, a 45% increase in accuracy, and a 65% reduction in replication overhead compared with other methods.
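One design point above, sizing the replica count of a partition from its predicted fault probability, can be illustrated with a back-of-the-envelope rule: pick the smallest replica count whose joint failure probability falls below a target. The independence assumption and threshold here are illustrative; FTLE additionally weighs each partition's accuracy significance and dataflow importance.

```python
# Illustrative replica sizing: smallest r with fault_prob**r below a
# joint-failure target, assuming independent partition faults.
import math

def replicas_needed(fault_prob, max_joint_failure=0.01):
    """Smallest replica count r such that fault_prob**r <= max_joint_failure."""
    if fault_prob <= 0.0:
        return 1                      # a reliable partition needs no extra copies
    if fault_prob >= 1.0:
        raise ValueError("a partition that always fails cannot be saved by replication")
    return max(1, math.ceil(math.log(max_joint_failure) / math.log(fault_prob)))
```

Under this rule a partition with a 50% predicted fault probability needs seven copies to push joint failure below 1%, which is why weighting replicas by importance (as the abstract describes) matters for keeping replication overhead down.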
With the rapid information and communications technology growth and continuous invention, the concept of parallel computing has become the core of computer science, and its capabilities are continually documented in p...
Details