ISBN: (Print) 9781728171227
Deep neural networks (DNNs) are widely used to analyze the abundance of data collected by massive numbers of Internet-of-Things (IoT) devices. Traditional approaches send the data to the cloud and run DNN inference on powerful cloud servers, but they suffer from long network latency. Edge computing has therefore emerged to reduce network latency by offloading computation from the cloud to the edge. However, a single resource-constrained edge device cannot process real-time DNN inference. We therefore devise CoopAI, a collaborative edge computing system that distributes DNN inference over several edge devices, using a novel model-partition technique that lets the edge devices prefetch the required data in advance and compute the inference cooperatively in parallel without exchanging data. We then formulate a new optimization problem to minimize the completion time of distributed DNN inference, and propose an algorithm that intelligently partitions the model into the proper number and sizes of blocks, deploys them on a suitable number of edge devices, and runs them in different rounds. Numerical results show that our algorithm outperforms the traditional approach by 20%-30% in completion time.
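To make the blocks/devices/rounds trade-off concrete, here is a toy sketch. It is not CoopAI's algorithm: the uniform per-block cost and the fixed prefetch overhead are invented for illustration only.

```python
def completion_time(total_work, n_blocks, n_devices, prefetch_cost=0.1):
    # Equal-size blocks run in rounds of n_devices parallel executions;
    # in this toy model prefetching overlaps with compute except before
    # the first round, contributing a single fixed overhead term.
    per_block = total_work / n_blocks
    rounds = -(-n_blocks // n_devices)  # ceiling division
    return prefetch_cost + rounds * per_block

def best_partition(total_work, max_blocks, max_devices):
    # Exhaustive search over block count and device count.
    best = None
    for b in range(1, max_blocks + 1):
        for d in range(1, max_devices + 1):
            t = completion_time(total_work, b, d)
            if best is None or t < best[0]:
                best = (t, b, d)
    return best
```

Under this invented cost model, `best_partition(12.0, 8, 4)` selects 4 blocks on 4 devices, finishing one parallel round after the initial prefetch.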
ISBN: (Print) 9780738125176
Recent advances in robotics enable the manufacturing industry to adapt to its new environments. Distributed deep neural networks (DDNNs) have been deployed in cloud services as distributed robot control systems that can be accessed over protocols such as HTTP, TCP, and LwM2M. The conventional deep neural network (DNN) approach involves complex computational matrices that affect training accuracy, training runtime, and so on. A microservices-based DNN is deployed to reduce the training workload of deep neural networks on a single node, which is further improved by a containerized DNN architecture. This paper describes a microservices-based Deep Neural Network as a Service (DNNaaS) implemented to provide an inverse-kinematics solution for a delta robot control system. Each container is trained with non-identical inverse-kinematic motion data of the delta robot, with 200 motion samples per node. The proposed method was trained in containers C1 through C4; the training process of the containerized DNN achieved 95.76% accuracy and 1.06% loss in containers C1 through C4 in 13.7279 seconds. The proposed method was also tested by transmitting 100 IK motion samples (1.7 kB) over *** in 336 milliseconds.
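The containerized DNNaaS idea, reduced to a minimal sketch: each container exposes its trained model over HTTP. The endpoint shape, the port, and the linear stub standing in for the trained network are all hypothetical; the paper does not specify its service interface.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict_ik(x, y, z):
    # Hypothetical stand-in for the trained DNN's inverse-kinematic mapping;
    # a real container would load its own trained model here.
    return {"theta1": 0.1 * x, "theta2": 0.1 * y, "theta3": 0.1 * z}

class IKHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON Cartesian target and answer with joint angles.
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        payload = json.dumps(predict_ik(body["x"], body["y"], body["z"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# Each container (C1..C4) would run one such server, e.g.:
# HTTPServer(("0.0.0.0", 8000), IKHandler).serve_forever()
```

A client then POSTs `{"x": ..., "y": ..., "z": ...}` to whichever container serves the relevant motion range.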
The primary obstacle for the wireless industry is meeting the growing demand for cellular services, which necessitates deploying numerous femto base stations (FBSs) in ultra-dense networks. Effective resource distribution among densely and randomly distributed FBSs in ultra-dense networks is difficult, mainly because of intensified interference. K-means clustering is improved by employing the Davies-Bouldin index, which separates the clusters to prevent overlapping and mitigate interference; the elbow method is used to determine the optimal number of clusters. Attention is then directed toward efficient resource allocation through a distributive methodology. The proposed approach uses a replay-buffer-based multi-agent framework and a generative adversarial network deep distributional Q-network (GAN-DDQN) to efficiently model and learn state-action value distributions for intelligent resource allocation. To further improve control over the training error, the distributions are estimated by approximating a whole quantile function. Numerical results validate the effectiveness of both the proposed clustering method and the GAN-DDQN-based resource allocation scheme in optimizing throughput, fairness, energy efficiency, and spectrum efficiency, all while maintaining QoS for all users.
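The clustering step can be sketched in a few lines of pure Python. This is an illustrative toy, not the paper's implementation: the FBS coordinates, initialization, and iteration budget are hypothetical.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    # Plain Lloyd's algorithm with randomly sampled initial centers.
    rng = random.Random(seed)
    centers = list(rng.sample(points, k))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centers[j]))
            groups[nearest].append(p)
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

def davies_bouldin(centers, groups):
    # Lower is better: small within-cluster scatter, well-separated centers.
    scatter = [sum(math.dist(p, c) for p in g) / len(g) if g else 0.0
               for c, g in zip(centers, groups)]
    k = len(centers)
    return sum(
        max((scatter[i] + scatter[j]) / math.dist(centers[i], centers[j])
            for j in range(k) if j != i)
        for i in range(k)
    ) / k
```

To choose K, one would run `kmeans` for each candidate K, keep the clustering with the lowest Davies-Bouldin index, and cross-check against the elbow curve of within-cluster distances, as the abstract describes.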
ISBN: (Print) 9798350371000; 9798350370997
End-to-end network slicing is a new concept for 5G+ networks, dividing the network into slices dedicated to different types of services and customized for their tasks. A key task, in this context, is satisfying service level agreements (SLAs) by forecasting how many resources to allocate to each slice. The increasing complexity of the problem setup, due to service, traffic, SLA, and network algorithm diversity, makes resource allocation a daunting task for traditional (model-based) methods. Hence, data-driven methods have recently been explored. Although such methods excel at the application level (e.g., for image classification), applying them to wireless resource allocation is challenging. Not only are the required latencies significantly lower (e.g., for resource block allocation per OFDM frame), but the cost of transferring raw data across the network to centrally process it with a heavy-duty deep neural network (DNN) can also be prohibitive. For this reason, distributed DNN (DDNN) architectures have been considered, where a subset of DNN layers is executed at the edge (in the 5G network) to improve speed and reduce communication overhead. If it is deemed that a "good enough" allocation has been produced locally, the additional latency and communication are avoided; if not, intermediate features produced at the edge are sent through additional DNN layers (in a central cloud). In this paper, we propose a distributed DNN architecture for this task based on LSTM, which excels at forecasting demands with long-term dependencies, aiming to avoid under-provisioning and minimize over-provisioning. We investigate (i) joint (offline) training of the local and remote layers, and (ii) optimizing the (online) mechanism that decides whether a sample is resolved locally or offloaded remotely. Using a real dataset, we demonstrate that our architecture resolves nearly 50% of decisions at the edge with no additional SLA penalty compared to centralized models.
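A minimal sketch of the edge/cloud decision mechanism described above. The threshold, the naive forecasters, and the volatility-based confidence are illustrative stand-ins for the paper's local and remote LSTM layers.

```python
def edge_forecast(history):
    # Stand-in for the edge-side layers: persistence forecast plus a crude
    # confidence score that drops when recent demand is volatile.
    pred = history[-1]
    volatility = max(history[-3:]) - min(history[-3:])
    confidence = 1.0 / (1.0 + volatility)
    return pred, confidence

def cloud_forecast(history):
    # Stand-in for the remote layers: a smoother moving-average forecast.
    return sum(history[-3:]) / 3

def allocate(history, threshold=0.5):
    # Resolve at the edge when confidence clears the threshold; otherwise
    # offload (in the real system, by sending intermediate features).
    pred, conf = edge_forecast(history)
    if conf >= threshold:
        return pred, "edge"
    return cloud_forecast(history), "cloud"

print(allocate([5, 5, 5]))  # stable demand: resolved at the edge
print(allocate([2, 9, 4]))  # volatile demand: offloaded to the cloud
```

The fraction of samples whose confidence clears the threshold corresponds to the share of decisions the paper reports resolving locally.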
ISBN: (Print) 9781728186160
This paper proposes a novel strategy for hierarchical data distribution and deep neural network distribution over edge and end devices. In the Industrial Internet of Things environment, deep learning tasks such as smoke and fire classification based on convolutional neural networks usually need to be performed on end devices, which have limited computing resources, while edge servers have abundant computing resources. Besides accommodating inference of a deep neural network (DNN) at the edge server, a distributed deep neural network (DDNN) also allows localized inference using a portion of the neural network at the end sensing devices. This article therefore proposes a distributed strategy that can dynamically adjust the network layers and the data-allocation proportion of end devices and edge servers according to different tasks, shortening data processing time. A joint optimization problem is formulated to minimize the total delay, which is affected by the complexity of the DL model, the inference error rate, and the computing power of the end devices and edge servers. A closed-form analytical solution is derived, and an optimal distributed data-allocation and neural-network-allocation algorithm is proposed.
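A toy version of the joint layer/data allocation problem. The layer costs, device speeds, and grid search are invented for illustration; the paper derives a closed-form solution and also accounts for the inference error rate, which this sketch omits.

```python
LAYER_COST = [1.0, 2.0, 4.0, 8.0]   # relative compute cost per DNN layer
END_SPEED, EDGE_SPEED = 1.0, 10.0   # end-device vs. edge-server throughput

def total_delay(split, frac_end):
    # A fraction frac_end of the data is handled by the end device running
    # layers[:split] locally; the rest runs the full model on the edge
    # server. Both sides work in parallel, so delay is the slower of the two.
    end = frac_end * sum(LAYER_COST[:split]) / END_SPEED
    edge = (1 - frac_end) * sum(LAYER_COST) / EDGE_SPEED
    return max(end, edge)

def best_allocation(steps=100):
    # Grid search over the split layer and the data-allocation proportion.
    best = None
    for split in range(1, len(LAYER_COST) + 1):
        for i in range(steps + 1):
            t = total_delay(split, i / steps)
            if best is None or t < best[0]:
                best = (t, split, i / steps)
    return best
```

With these invented constants the search puts one shallow layer on the end device and routes 60% of the data to it, balancing the two sides' delays; the paper's closed-form solution plays the role of this grid search analytically.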