Prompt, uninterrupted, accurate, and reliable agricultural information plays an important role in agricultural decision making, which requires a stable and efficient data transmission framework. Wireless powered multi-access edge computing (MEC) has recently emerged as a promising paradigm for improving data transmission capability in low-power networks, and applying it to agricultural information monitoring will benefit the development of smart agriculture. The management and scheduling of different applications (i.e., computation offloading) is one of the most important factors influencing the performance of a wireless powered MEC network; among the relevant considerations, the mutual interference among different wireless devices (WDs) is an important factor. To address this issue, an online computation offloading method based on convolutional operations is presented in this paper, and a fundamental wireless powered MEC network comprising one access point (AP) and multiple WDs is constructed to validate the efficacy of the approach. The impacts of three factors (network size, i.e., the number of WDs; training interval; and memory size) and of different application scenarios on the performance of the convolutional operation-based approach are studied and analyzed. Additionally, the convolutional operation-based method is compared with two other offloading approaches: DROO (Deep Reinforcement learning-based Online Offloading) and the Coordinate Descent (CD) algorithm. The results indicate that the convolutional operation-based approach is more suitable for large-scale wireless powered MEC networks (e.g., more than 30 WDs) with a moderate memory size (512) and training interval (50).
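To make the idea concrete, the following is a minimal Python sketch (not the authors' implementation) of how a convolutional policy might map per-WD channel gains to binary offloading decisions; the network shape, kernel sizes, and decision threshold are illustrative assumptions.

```python
# Hedged sketch: a 1-D convolutional policy over per-device channel gains.
# The convolution lets each WD's decision depend on neighboring WDs' channels,
# one plausible way to account for mutual interference between devices.
import torch
import torch.nn as nn

class ConvOffloadPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # per-WD probability of offloading to the AP
        )

    def forward(self, channel_gains: torch.Tensor) -> torch.Tensor:
        # channel_gains: (batch, num_wds) -> (batch, 1, num_wds) for Conv1d
        return self.net(channel_gains.unsqueeze(1)).squeeze(1)

policy = ConvOffloadPolicy()
h = torch.rand(1, 30)                    # random channel gains for 30 WDs
decisions = (policy(h) > 0.5).int()      # 1 = offload, 0 = compute locally
print(decisions)
```

Because the convolution is size-agnostic, the same policy can be applied to networks with different numbers of WDs, which matches the large-scale setting the abstract targets.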
One of the missions of fifth-generation (5G) wireless networks is to provide massive connectivity for the fast-growing number of Internet of Things (IoT) devices. To fulfill this mission, non-orthogonal multiple access (NOMA) has been recognized as a promising solution for 5G networks to significantly improve network capacity. Considered a booster of IoT devices, and in parallel with the development of NOMA techniques, multi-access edge computing (MEC) is also becoming one of the key emerging technologies for 5G networks. In this paper, with the objective of maximizing the computation rate of an MEC system, we investigate the computation offloading and subcarrier allocation problem in multi-carrier (MC) NOMA-based MEC systems and address it using a Deep Reinforcement Learning-based Online Computation Offloading (DRLOCO-MNM) algorithm. In particular, DRLOCO-MNM helps each user equipment (UE) decide between local and remote computation modes, and also assigns an appropriate subcarrier to each UE in the remote computation mode. The DRLOCO-MNM algorithm is especially advantageous over other machine learning techniques applied to NOMA because it requires neither labeled training data nor a complete definition of the channel environment. It also avoids the complexity found in many optimization algorithms used to solve channel allocation in existing NOMA-related studies. Numerical simulations and comparisons with other algorithms show that our proposed module and its algorithm considerably improve the computation rates of MEC systems.
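A hedged sketch of the per-UE decision structure described above: each UE chooses between local computation and offloading on one of K subcarriers via an epsilon-greedy deep Q-network. The state dimension, layer sizes, and exploration rate are assumptions for illustration, not the paper's exact design.

```python
# Illustrative per-UE action space: action 0 = compute locally,
# action k (1..K) = offload on subcarrier k.
import random
import torch
import torch.nn as nn

NUM_SUBCARRIERS = 4
STATE_DIM = 8                        # assumed per-UE state (channel gains, queue, ...)
NUM_ACTIONS = 1 + NUM_SUBCARRIERS

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, NUM_ACTIONS),
)

def select_action(state: torch.Tensor, epsilon: float = 0.1) -> int:
    if random.random() < epsilon:    # explore without needing labeled data
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():            # exploit learned Q-values
        return int(q_net(state).argmax())

a = select_action(torch.rand(STATE_DIM))
print("local" if a == 0 else f"offload on subcarrier {a}")
```

Encoding mode selection and subcarrier choice in one discrete action is what lets a single Q-network handle both decisions jointly, consistent with the abstract's description.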
Nowadays, the paradigm of mobile computing is evolving from a centralized cloud model towards Mobile Edge Computing (MEC). In regions without ground communication infrastructure, incorporating aerial edge computing nodes into the network emerges as an efficient approach to deliver Artificial Intelligence (AI) services to Ground Devices (GDs). This paper investigates the computation offloading and resource allocation problem within a High Altitude Platform (HAP)-assisted MEC system, with the goal of minimizing energy consumption. Considering the randomness and dynamism of the GDs' task arrivals and of the wireless channel quality, stochastic optimization techniques are utilized to transform the long-term dynamic optimization problem into a deterministic one, which is then decomposed into three sub-problems that can be solved in parallel. An online Energy-Efficient Dynamic Offloading (EEDO) algorithm is proposed to address these problems, and its theoretical performance is analyzed. Finally, parameter analyses and comparative experiments demonstrate that the EEDO algorithm effectively reduces system energy consumption while maintaining system stability.
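As an illustration of how stochastic optimization can turn a long-term objective into per-slot deterministic decisions, the sketch below applies the standard Lyapunov drift-plus-penalty rule; the trade-off weight V, the candidate actions, and the arrival model are invented for demonstration and are not taken from the paper.

```python
# Drift-plus-penalty sketch: each slot, pick the action minimizing
# V * energy - queue_backlog * bits_served, trading energy against stability.

V = 10.0            # Lyapunov trade-off weight: larger V favors energy saving
queue = 0.0         # task backlog of one GD (bits)

def per_slot_decision(candidates):
    """candidates: list of (energy_J, bits_served) pairs for local/HAP options."""
    return min(candidates, key=lambda c: V * c[0] - queue * c[1])

for t in range(3):
    arrival = 5e5                                  # assumed task arrival this slot
    candidates = [(0.8, 3e5), (0.5, 6e5), (0.2, 1e5)]
    energy, served = per_slot_decision(candidates)
    queue = max(queue + arrival - served, 0.0)     # queue update drives stability
    print(f"slot {t}: energy={energy} J, backlog={queue:.0f} bits")
```

When the backlog grows, the served-bits term dominates and the rule spends more energy to drain the queue, which is the mechanism behind "reduce energy while maintaining stability."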
Offloading computation-intensive tasks (e.g., blockchain consensus processes and data processing tasks) to the edge/cloud is a promising solution for blockchain-empowered mobile edge computing. However, traditional offloading approaches (e.g., auction-based and game-theoretic approaches) fail to adjust their policies to a changing environment and cannot achieve long-term performance. Moreover, existing deep reinforcement learning-based offloading approaches suffer from slow convergence caused by high-dimensional action spaces. In this paper, we propose a new model-free deep reinforcement learning-based online computation offloading approach for blockchain-empowered mobile edge computing that considers both mining tasks and data processing tasks. First, we formulate the online offloading problem as a Markov decision process covering both task types. Then, to maximize long-term offloading performance, we leverage deep reinforcement learning to accommodate highly dynamic environments and address the computational complexity. Furthermore, we introduce an adaptive genetic algorithm into the exploration phase of deep reinforcement learning to effectively avoid useless exploration and speed up convergence without reducing performance. Finally, our experimental results demonstrate that our algorithm converges quickly and outperforms three benchmark policies.
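The following sketch illustrates one plausible form of genetic-algorithm-guided exploration over binary offloading vectors, the general mechanism the paper builds on; the fitness function and GA parameters here are stand-ins, since in the actual approach fitness would come from the DRL agent's evaluated reward.

```python
# GA-guided exploration: instead of sampling offloading actions uniformly at
# random, evolve a small population of candidate binary vectors and keep the
# fittest, cutting down useless exploration in a high-dimensional action space.
import random

NUM_TASKS = 8   # mining + data-processing tasks: 1 = offload, 0 = local

def fitness(action):
    # Stand-in reward; the real approach would query the DRL critic here.
    return sum(bit * (i + 1) for i, bit in enumerate(action))

def ga_explore(generations=20, pop_size=10, mut_rate=0.1):
    pop = [[random.randint(0, 1) for _ in range(NUM_TASKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, NUM_TASKS)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mut_rate else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(ga_explore())
```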
Mobile Edge Computing (MEC) has become an attractive solution for enhancing the computing and storage capacity of mobile devices by leveraging available resources on edge nodes. In MEC, task arrivals are highly dynamic and hard to predict precisely, so assigning tasks to edge nodes with guaranteed system performance is of great importance yet very challenging. In this article, we aim to optimize the revenue earned by each edge node by optimally offloading tasks to the edge nodes. We formulate the revenue-driven online task offloading (ROTO) problem, which is proved to be NP-hard. We first relax ROTO to a linear fractional programming problem, for which we propose the Level Balanced Allocation (LBA) algorithm. We then establish the performance guarantee of LBA through rigorous theoretical analysis, and present the LB-Rounding algorithm for ROTO using the primal-dual technique. The algorithm achieves an approximation ratio of 2(1+ξ)ln(d+1) with considerable probability, where d is the maximum number of process slots of an edge node and ξ is a small constant. The performance of the proposed algorithm is validated through both trace-driven simulations and testbed experiments. Results show that our proposed scheme is more efficient than baseline algorithms.
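As a rough illustration of the rounding step in an LP-relaxation-plus-rounding scheme such as LB-Rounding, the sketch below rounds a hard-coded, assumed fractional task-to-node assignment so that each task picks node j with probability x[i][j]; the paper's actual primal-dual machinery and revenue objective are not reproduced here.

```python
# Randomized rounding of a fractional LP solution: rows = tasks,
# columns = edge nodes; each row sums to 1.
import random

x = [
    [0.7, 0.2, 0.1],   # assumed fractional assignment of task 0
    [0.1, 0.6, 0.3],
    [0.3, 0.3, 0.4],
]

def round_assignment(frac):
    """Assign each task to node j with probability frac[i][j]."""
    assignment = []
    for row in frac:
        r, acc = random.random(), 0.0
        for node, p in enumerate(row):
            acc += p
            if r < acc:
                assignment.append(node)
                break
        else:
            assignment.append(len(row) - 1)   # guard against floating-point drift
    return assignment

print(round_assignment(x))  # e.g. [0, 1, 2]: task i -> edge node assignment[i]
```

Independent per-task rounding of this kind is the standard source of logarithmic approximation factors like the 2(1+ξ)ln(d+1) ratio quoted above, which hold with high probability rather than deterministically.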