Author affiliation: Univ Technol Troyes, Comp Sci & Digital Soc Lab, F-10300 Troyes, France
Publication: IEEE INTERNET OF THINGS JOURNAL (IEEE Internet Things J.)
Year/Volume/Issue: 2024, Vol. 11, No. 7
Pages: 12228-12239
Core indexing:
Subject classification: 0810 [Engineering - Information and Communication Engineering]; 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]
Funding: Grand-Est Region, France; the European Regional Development Fund (ERDF)
Keywords: Task analysis; Industrial Internet of Things; Energy consumption; Quality of service; Servers; Deep learning; Reinforcement learning; Computation offloading; deep reinforcement learning (DRL); edge computing; Industrial Internet of Things (IIoT)
Abstract: The term Industrial Internet of Things (IIoT) was created to describe a specific area of the Internet of Things (IoT) that integrates information and communication technologies (ICTs) like cloud/edge computing, wireless sensor/actuator networks, and connected objects to enable and accelerate the development of Industry 4.0. IIoT applications (e.g., smart manufacturing, remote control of industrial machinery, and critical system monitoring) have various levels of criticality and Quality-of-Service (QoS) requirements. However, the characteristics of the data collected by interconnected devices complicate the task of guaranteeing the QoS requirements in terms of latency and reliability, in addition to causing huge energy consumption. As a potential solution, edge computing offers additional powerful resources in the proximity of the IIoT devices. Hence, the required QoS can be achieved by offloading computation-intensive tasks to edge servers. Moreover, the offloading process needs to be optimized to take full advantage of these resources. Unfortunately, conventional optimization methods are too complex to be applied in the IIoT context. To overcome this issue, we propose a computation offloading approach based on deep reinforcement learning (DRL) to minimize long-term energy consumption and maximize the number of tasks completed before their tolerated deadlines. We introduce a system with multiple agents to deal with the increasing dimension of the action space, where each IIoT device is represented by its own DRL model. The goal of the model is to maximize a flexible and long-term reward. In addition, the DRL models are trained in the cloud and make decisions online in the edge servers, allowing quick decision making by avoiding iterative online optimization procedures. The performance of the proposed approach is evaluated through simulation. The proposal shows promising results compared to other approaches.
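To make the offloading decision described in the abstract concrete, the following is a minimal sketch of a per-device agent that learns whether to execute a task locally or offload it to an edge server, with a reward that penalizes energy use and rewards meeting the deadline. This is an illustration only: it uses simple tabular Q-learning as a stand-in for the paper's DRL models, and all state definitions, cost figures, and hyperparameters are assumptions, not taken from the paper.

```python
import random

ACTIONS = (0, 1)  # 0 = local execution, 1 = offload to edge server

def reward(action, task_size, deadline):
    # Assumed cost model: local execution is slower and drains device energy;
    # offloading spends less energy and finishes faster but adds a fixed
    # transmission delay. Figures are illustrative only.
    if action == 0:
        energy, latency = 0.9 * task_size, 1.0 * task_size
    else:
        energy, latency = 0.3 * task_size, 0.4 * task_size + 0.2
    met_deadline = 1.0 if latency <= deadline else 0.0
    # Flexible reward: trade energy consumption against deadline hits.
    return -energy + 2.0 * met_deadline

def train(episodes=2000, eps=0.1, alpha=0.5, seed=0):
    rng = random.Random(seed)
    q = {}  # discretized task size -> [Q(local), Q(offload)]
    for _ in range(episodes):
        task_size = rng.uniform(0.5, 2.0)
        deadline = rng.uniform(0.5, 1.5)
        state = round(task_size, 1)
        qs = q.setdefault(state, [0.0, 0.0])
        # Epsilon-greedy exploration, then a one-step value update.
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: qs[x])
        qs[a] += alpha * (reward(a, task_size, deadline) - qs[a])
    return q

# After (cloud-side) training, the greedy policy can be queried online
# with no iterative optimization, mirroring the train-offline/decide-online split.
policy = {s: max(ACTIONS, key=lambda a: v[a]) for s, v in train().items()}
```

In this toy cost model, offloading dominates for large tasks (lower energy and lower latency), so the learned policy tends to offload; the paper's actual agents learn such trade-offs from a much richer state and a deep network rather than a table.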