The pervasive penetration of IoT devices in domains such as autonomous vehicles, supply chain management, video surveillance, healthcare, and industrial automation necessitates advanced computing paradigms that deliver real-time responses. Edge computing offers prompt service responses through a competent decentralized platform for handling distributed workloads, making it a front-runner for serving a wide spectrum of IoT applications. However, optimally distributing the workload of incoming tasks to appropriate destinations remains challenging in a collaborative Cloud-Edge layered architecture due to factors such as dynamic offloading decisions, optimal resource allocation, device heterogeneity, and unbalanced workloads. Advanced Artificial Intelligence (AI)-based techniques provide promising solutions to this complex task assignment problem, but existing solutions face significant challenges, including prolonged convergence times, extended agent learning periods, and an inability to adapt to a stochastic environment. Hence, this work designs a unified framework for computational offloading and resource allocation in diverse IoT applications using a Decision Tree empowered Reinforcement Learning (DTRL) technique. The proposed work formulates the optimization problem for offloading decisions at runtime and allocates optimal resources to incoming tasks to improve Quality-of-Service (QoS) parameters. Computational results obtained in a simulation environment show that the proposed approach has high convergence ability and strong exploration and exploitation capability, and that it outperforms existing state-of-the-art approaches in terms of delay, energy consumption, waiting time, task acceptance ratio, and service cost.
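The abstract does not spell out how the decision tree and the reinforcement-learning agent interact. As a rough illustration only, the sketch below shows one plausible reading: a tabular Q-learning offloader (local / edge / cloud) whose exploratory moves are biased by a decision-tree classifier fitted on previously good decisions. Every name, state feature, delay model, and constant here is an assumption made for illustration, not the paper's DTRL design.

```python
# Illustrative sketch only: Q-learning for offloading decisions (0=local, 1=edge, 2=cloud)
# with a decision tree, refit on past high-value choices, guiding exploration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
ACTIONS, BINS = 3, 4              # actions; coarse discretisation per state feature

def random_task():
    # assumed state: (task size MB, CPU Gcycles, edge load 0..1, cloud RTT ms)
    return rng.uniform([1, 1, 0, 20], [50, 20, 1, 200])

def reward(state, action):
    size, cycles, edge_load, rtt = state
    delay = [cycles / 2.0,                                       # local execution
             size / 10 + cycles / (8 * (1 - 0.7 * edge_load)),   # edge execution
             size / 5 + rtt / 100 + cycles / 20][action]         # cloud execution
    return -delay                                                # minimise delay

def discretise(state):
    lo, hi = np.array([1, 1, 0, 20]), np.array([50, 20, 1, 200])
    return tuple(np.minimum(((state - lo) / (hi - lo) * BINS).astype(int), BINS - 1))

Q, memory = {}, []                # tabular Q-values; (state, best action) pairs for the tree
tree, eps, alpha = None, 0.3, 0.1

for episode in range(3000):
    s = random_task()
    q = Q.setdefault(discretise(s), np.zeros(ACTIONS))
    explore = rng.random() < eps
    if explore and tree is not None:
        a = int(tree.predict(s.reshape(1, -1))[0])   # tree-guided exploration
    elif explore:
        a = int(rng.integers(ACTIONS))               # plain random exploration
    else:
        a = int(np.argmax(q))
    q[a] += alpha * (reward(s, a) - q[a])            # one-step (bandit-style) update
    memory.append((s, int(np.argmax(q))))
    if episode % 500 == 499:                         # periodically refit the guiding tree
        X, y = map(np.array, zip(*memory[-2000:]))
        tree = DecisionTreeClassifier(max_depth=5).fit(X, y)

sample = random_task()
print("greedy action for a sample task:",
      int(np.argmax(Q.get(discretise(sample), np.zeros(ACTIONS)))))
```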
ISBN (print): 9781728190549
Internet of Things (IoT) applications are generating large volumes of data, and processing these data securely, reliably, and in a timely manner is required for effective decision-making. However, the limited processing capability of IoT devices is a significant bottleneck for processing these datasets. A potential solution to this challenge is federated learning using unmanned aerial vehicles (UAVs) as mobile edge computing (MEC) servers. In this paper, we propose a UAV-aided edge federated learning (UAFL) framework in which the UAV-MEC's computation capacity is used to process a portion of the datasets from straggling devices (devices that cannot process their dataset in reasonable time and lag behind, increasing the delay of the whole system). We formulate an optimization problem to minimize system delay considering the UAV-MEC's computation power, the computation and communication power of the IoT devices, and quality-of-service constraints. We transform the problem by introducing auxiliary variables and an epigraph form, and then solve it using a concurrent deterministic simplex with root relaxation algorithm. Simulation results show that UAFL outperforms traditional federated learning and edge-based learning systems by approximately 5%.
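For context on the epigraph reformulation mentioned above, the generic textbook form is shown below; it is not the exact UAFL problem, and d_i(x) is a placeholder for each device's per-round delay under decision variables x.

```latex
% Generic epigraph reformulation (illustrative; not the exact UAFL formulation).
% The system delay is set by the slowest participant, so the min-max objective
% over per-device delays d_i(x) is rewritten with an auxiliary variable t:
\[
\min_{x}\; \max_{1 \le i \le N} d_i(x)
\quad\Longleftrightarrow\quad
\min_{x,\,t}\; t
\quad \text{s.t.}\quad d_i(x) \le t,\;\; i = 1,\dots,N,\quad x \in \mathcal{X}.
\]
```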
Mobile-edge computing (MEC) can accelerate computation-intensive applications and has emerged as a promising technology for enabling the Internet of Things (IoT). MEC improves task processing performance by assigning tasks to edge nodes. However, with massive numbers of terminals contending for computation and communication resources simultaneously, developing a flexible computational offloading mechanism becomes the fundamental issue for MEC-enabled IoT systems. This article aims to develop an effective computational offloading decision scheme by jointly considering computational resources and diverse user demands, with two goals: minimizing both latency and energy consumption. Specifically, we develop a two-stage computational offloading mechanism in which computational resources and offloading decisions are allocated and coordinated as computation requirements vary. To achieve the two goals, this work introduces an edge node recommendation model within the cloud-edge-end architecture to reduce the offloading optimization search space. Furthermore, we propose a new computational offloading algorithm (CROCA) based on chemical reaction optimization (CRO) for optimizing offloading utility, which thoroughly considers the competition between mobile device requests and computational resources. Extensive evaluation results demonstrate that the proposed CROCA scheme can effectively improve computational offloading performance.
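The two goals above (latency and energy) are typically folded into a single offloading utility that a population-based metaheuristic such as CRO then minimizes. A minimal, assumed form of such a utility for a candidate assignment vector is sketched below; the weights, delay model, and energy model are placeholders, not the CROCA definitions.

```python
# Hypothetical weighted latency/energy utility for a candidate offloading vector.
# a[i] is the edge node chosen for task i, or -1 for local execution; the delay and
# energy models are simple placeholders, not the CROCA formulation.
import numpy as np

def offloading_utility(a, tasks, edge_cap, w_delay=0.6, w_energy=0.4):
    load = np.zeros(len(edge_cap))
    delay, energy = np.zeros(len(tasks)), np.zeros(len(tasks))
    for i, (size_mb, cycles) in enumerate(tasks):
        if a[i] < 0:                                # local execution
            delay[i] = cycles / 1.0                 # 1 Gcycle/s local CPU (assumed)
            energy[i] = 0.9 * cycles                # J per Gcycle locally (assumed)
        else:
            load[a[i]] += cycles                    # accumulate load per edge node
    for i, (size_mb, cycles) in enumerate(tasks):
        if a[i] >= 0:                               # offloaded: upload + shared compute
            share = edge_cap[a[i]] * cycles / load[a[i]]
            delay[i] = size_mb / 10.0 + cycles / share
            energy[i] = 0.1 * size_mb               # device spends transmit energy only
    # lower is better: a metaheuristic such as CRO would search over a to minimise this
    return w_delay * delay.mean() + w_energy * energy.mean()

tasks = [(5, 2.0), (12, 4.5), (3, 1.0)]             # (size MB, Gcycles)
print(offloading_utility(np.array([0, 1, -1]), tasks, edge_cap=np.array([8.0, 8.0])))
```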
The utilization of mobile edge computing (MEC) for unmanned aerial vehicle (UAV) communication presents a viable solution for achieving high reliability and low latency. This study explores the potential of employing intelligent reflective surfaces (IRS) and UAVs as relay nodes to efficiently offload user computing tasks to the MEC server system. Specifically, the user node accesses the primary user's spectrum while adhering to the constraint of satisfying the primary user's peak interference power. The UAV acquires energy without interrupting the primary user's regular communication by employing two energy harvesting schemes, namely time switching (TS) and power splitting (PS). The optimal UAV is selected by maximizing the instantaneous signal-to-noise ratio. The analytical expression for the outage probability of the system over Rayleigh channels is then derived. The study investigates the impact of various system parameters, including the number of UAVs, the peak interference power, and the TS and PS factors, on the system's outage performance through simulations. The proposed system is also compared with two conventional benchmark schemes: optimal UAV link transmission and IRS link transmission. The simulation results validate the theoretical derivation and demonstrate the superiority of the proposed scheme over the benchmark schemes.
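As background for the outage analysis described above, the textbook best-link selection result in i.i.d. Rayleigh fading is given below; the paper's actual expression additionally accounts for the IRS link, the energy-harvesting factors, and the primary-user interference limit.

```latex
% Outage probability under best-link selection over N i.i.d. Rayleigh-faded UAV links
% (illustrative special case only). \gamma_n is the instantaneous SNR of link n,
% exponential with mean \bar{\gamma}; \gamma_{\mathrm{th}} is the SNR threshold:
\[
P_{\mathrm{out}}
  = \Pr\!\Big[\max_{1 \le n \le N}\gamma_n < \gamma_{\mathrm{th}}\Big]
  = \prod_{n=1}^{N}\Pr\big[\gamma_n < \gamma_{\mathrm{th}}\big]
  = \Big(1 - e^{-\gamma_{\mathrm{th}}/\bar{\gamma}}\Big)^{N}.
\]
```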
To overcome the limitations of mobile devices, the mobile cloud is an emerging technology: offloading resource-intensive applications to distant, cloud-enabled data centres helps achieve this. However, real-time mobile user applications suffer under a remote computing solution, since mobile devices (MDs) encounter increased network response times and delays. In this paper, we propose an intelligent computational offloading model for MEC. The model applies a deep learning (DL) technique that automatically chooses the computing source based on performance, energy consumption, and workload; these factors are used to select the best edge server. For this, a Modified LSTM is proposed in this work. Additionally, TS on the edge cloud infrastructure is directly impacted by VM availability; as a consequence, VM availability is calculated while managing TS. When allocating resources, VM metrics such as makespan, task completion times, resource consumption, and migration costs are considered. The capacity to deliver functioning services within the required time after a task is offloaded to a VM is also considered while allocating VM resources. Since this is an optimization problem, the NBUJ hybrid optimization algorithm is developed to solve it. Finally, the performance of the developed model is validated against existing models in terms of fitness, migration cost, makespan, resource utilization, and so on. The developed model attains 97.4% accuracy and 0.0274% FNR, which validates the performance of the proposed model.
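As a small illustration of the VM metrics named above, the sketch below computes makespan and average utilisation for one task-to-VM assignment; the formulas and units are generic assumptions, not the paper's NBUJ fitness function.

```python
# Hypothetical makespan / utilisation terms for a task-to-VM assignment, of the kind
# that hybrid optimisers combine into a fitness score; not the paper's NBUJ objective.
import numpy as np

def vm_metrics(assign, task_len, vm_speed):
    """assign[i] = VM index for task i; task_len in million instructions; vm_speed in MIPS."""
    busy = np.zeros(len(vm_speed))
    for i, vm in enumerate(assign):
        busy[vm] += task_len[i] / vm_speed[vm]     # execution time added to that VM
    makespan = busy.max()                          # finish time of the slowest VM
    utilisation = busy.mean() / makespan           # average busy fraction of the schedule
    return makespan, utilisation

task_len = np.array([4000, 9000, 2500, 7000])      # million instructions
vm_speed = np.array([1000, 2000])                  # MIPS
print(vm_metrics([0, 1, 0, 1], task_len, vm_speed))  # -> (8.0, ~0.91)
```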
With the rapid growth of data demand and the rise of Industry 5.0, cloud-edge collaborative computing achieves intelligent connectivity and data sharing between devices for intelligent management and optimization of the production process. Computational offloading, as a key technology in the cloud-edge collaborative computing environment, provides higher-quality user services and meets the higher data requirements of intelligent manufacturing and production. In this paper, we propose an Improved Efficient Alternating Direction Method of Multipliers (IEADMM) for computational offloading in the cloud-edge collaborative computing environment, aiming to reduce the total system cost with the optimization goal of minimizing total time delay. The proposed IEADMM fully utilizes the Alternating Direction Method of Multipliers (ADMM) to minimize the total delay while increasing the convergence speed. Experimental results show that the proposed IEADMM performs well and is practical for computational offloading in cloud-edge collaborative computing environments, and that it can be widely applied in practical scenarios such as intelligent monitoring, the Internet of Things (IoT), and mobile games.
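For readers who want the baseline that IEADMM builds on, the standard scaled-form ADMM iterations are reproduced below; these are the textbook updates, not the paper's improved variant.

```latex
% Standard scaled-form ADMM, given as background for IEADMM. For
% \min_{x,z} f(x) + g(z) subject to Ax + Bz = c, with penalty \rho > 0:
\begin{align*}
x^{k+1} &= \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\big\lVert Ax + Bz^{k} - c + u^{k}\big\rVert_2^2,\\
z^{k+1} &= \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\big\lVert Ax^{k+1} + Bz - c + u^{k}\big\rVert_2^2,\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{align*}
```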
Vehicular mobile edge computing (vMEC) and nonorthogonal multiple access (NOMA) have emerged as promising technologies for enabling low-latency and high-throughput applications in vehicular networks. In this article, we propose a novel multiagent deep deterministic policy gradient (MADDPG) approach for resource allocation in NOMA-based vMEC systems. Our approach leverages deep reinforcement learning (DRL) to enable vehicles to offload computation-intensive tasks to nearby edge servers, optimizing resource allocation decisions while ensuring low-latency communication. We introduce an attention mechanism within the MADDPG model to dynamically focus on relevant information from the input state and joint actions, enhancing the model's predictive accuracy. Additionally, we propose an attention-based experience replay method to expedite network convergence. The simulation results highlight the effectiveness of multiagent reinforcement learning (MARL) algorithms, such as MADDPG with attention, in achieving better convergence and performance across various scenarios. The influence of different model parameters, such as input data volumes, task load levels, and resource configurations, on the optimization results is also evident. The decision-making processes of the agents are dynamic and depend on factors specific to the task and environment.
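The abstract does not give the network architecture; the sketch below is one generic way to add attention over agents' (observation, action) encodings inside a centralised MADDPG critic. The layer sizes, encoder, and pooling are assumptions for illustration, not the paper's model.

```python
# Hypothetical attention-augmented centralised critic for MADDPG: each agent's
# (observation, action) pair is encoded as a token and self-attention mixes them
# before a Q-value head. Illustrative only; not the paper's network.
import torch
import torch.nn as nn

class AttentionCritic(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.encode = nn.Linear(obs_dim + act_dim, hidden)     # per-agent (o, a) encoder
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.q_head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))

    def forward(self, obs, act):
        # obs: (batch, n_agents, obs_dim); act: (batch, n_agents, act_dim)
        tokens = torch.relu(self.encode(torch.cat([obs, act], dim=-1)))
        mixed, _ = self.attn(tokens, tokens, tokens)           # agents attend to each other
        return self.q_head(mixed.mean(dim=1))                  # one Q-value per joint action

critic = AttentionCritic(obs_dim=10, act_dim=4)
q = critic(torch.randn(32, 3, 10), torch.randn(32, 3, 4))      # 3 agents, batch of 32
print(q.shape)                                                 # torch.Size([32, 1])
```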
Optimizing computational offloading in Mobile Edge Computing (MEC) environments presents a multifaceted challenge requiring innovative solutions. Soft computing, recognized for its ability to manage uncertainty and complexity, emerges as a promising approach for addressing the dynamic multi-objective evaluation inherent in computational offloading scenarios. This paper conducts a comprehensive review and analysis of soft computing approaches for Dynamic Multi-Objective Evaluation of Computational Offloading (DMOECO), aiming to identify trends, analyze the existing literature, and offer insights for future research directions. Employing a systematic literature review (SLR) methodology, we meticulously scrutinize 50 research articles and scholarly publications spanning 2016 to November 2023. Our review synthesizes advancements in soft computing techniques, including fuzzy logic, neural networks, evolutionary algorithms, and probabilistic reasoning, as applied to computational offloading optimization within MEC environments. Existing approaches are categorized into distinct research lines based on methodologies, objectives, evaluation metrics, and application domains. The evolution of soft computing-based DMOECO strategies is emphasized, showcasing their effectiveness in dynamically balancing computational objectives such as energy consumption, latency, throughput, user experience, and other pertinent factors in computational offloading scenarios. Key challenges, including scalability issues, lack of real-world deployment validation, and the need for standardized evaluation benchmarks, are identified. Insights and recommendations are provided to enhance computational offloading optimization. Furthermore, collaborative efforts between academia and industry are advocated to bridge theoretical developments with practical implementations. This study pioneers the use of the SLR methodology in this area, offering valuable perspectives.
Many advancements are being made in vehicular networks, such as self-driving, dynamic route scheduling, real-time traffic condition monitoring, and on-board infotainment services. However, these services require high computation power and precision, requirements that can be met using mobile edge computing (MEC) mechanisms for vehicular networks. MEC operates through edge servers deployed at the roadside, also known as roadside units (RSUs). MEC is very useful for vehicular networks because it has extremely low latency and supports operations that require near-real-time access to rapidly changing data. This paper proposes efficient computational offloading, smart division of tasks, and cooperation among RSUs to increase service performance and decrease delay in a vehicular network via MEC. The computational delay is further reduced by parallel processing. In the division of tasks, each task is divided into two sub-components, which are fed to a deep neural network (DNN) for training; consequently, the overall time delay and overhead are reduced. We also adopt an efficient routing policy that delivers results over the shortest path to improve service reliability. The offloading, computing, division, and routing policies are formulated, and a model-based DNN approach is used to obtain an optimal solution. Simulation results prove that our proposed approach is suitable for a dynamic environment. We also compare our results with the existing state of the art, showing that our proposed approach outperforms the existing schemes.
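The routing policy above is described only as shortest-path result delivery; a generic Dijkstra-based stand-in over an RSU graph is sketched below. The graph, link delays, and node names are invented for illustration and are not the paper's scheme.

```python
# Generic shortest-path result delivery between RSUs using Dijkstra's algorithm;
# an illustrative stand-in for the routing policy, not the paper's exact scheme.
import heapq

def dijkstra(graph, src, dst):
    """graph: {node: [(neighbour, link_delay_ms), ...]}; returns (total_delay, path)."""
    best = {src: 0.0}
    heap = [(0.0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return d, path
        if d > best.get(node, float("inf")):
            continue                                  # stale queue entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < best.get(nxt, float("inf")):
                best[nxt] = nd
                heapq.heappush(heap, (nd, nxt, path + [nxt]))
    return float("inf"), []

rsu_graph = {"RSU1": [("RSU2", 2.0), ("RSU3", 5.0)],
             "RSU2": [("RSU3", 1.5), ("RSU4", 4.0)],
             "RSU3": [("RSU4", 1.0)], "RSU4": []}
print(dijkstra(rsu_graph, "RSU1", "RSU4"))   # (4.5, ['RSU1', 'RSU2', 'RSU3', 'RSU4'])
```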
Progress in computational offloading is strongly driving the development of the modern Information and Communications Technology domain. The growth of resource-constrained Internet of Things devices demands new computational offloading strategies that can be sustainably integrated into beyond-5G networks. One solution to this demand is Mobile Edge Computing (MEC) powered by advanced Machine Learning (ML) methods. This paper proposes an ML-powered computational offloading strategy for a wireless cellular network by applying the classical Travelling Salesman Problem (TSP) to computational offloading location selection. The main distinguishing feature of the proposed approach is the use of imagery data. The paper first conducts a literature review to identify existing strategies. It then proposes a novel method that uses location-like imagery data to identify the most suitable computation location by searching for a route between locations with the proposed Deep Learning (DL) model. The model was evaluated and achieved MAE = 1,575, MSE = 10,119,205, and R² = 0.98 on the testing dataset, which outperforms or is comparable with other well-known architectures. Moreover, training is shown to be 2-10 times faster. Interestingly, the MAE values are relatively low compared with the target values being predicted (despite the rather high MSE), which is confirmed by the almost-perfect R² value. It is concluded that the proposed neural network can predict the target values and that this solution can be applied to real-world tasks.
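For reference, the standard definitions of the three regression metrics quoted above are sketched below on dummy data; the paper's values of course come from its own test set and model.

```python
# Standard definitions of MAE, MSE, and R^2, shown on dummy data for reference only.
import numpy as np

def regression_metrics(y_true, y_pred):
    err = y_true - y_pred
    mae = np.abs(err).mean()                                   # mean absolute error
    mse = (err ** 2).mean()                                    # mean squared error
    r2 = 1.0 - (err ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()  # coefficient of determination
    return mae, mse, r2

y_true = np.array([12000., 15500., 9800., 20100.])
y_pred = np.array([11800., 15900., 10050., 19700.])
print(regression_metrics(y_true, y_pred))
```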