Hierarchical Intelligence Enabled Joint RAN Slicing and MAC Scheduling for SLA Guarantee

Authors: Jia, Yi; Zhang, Cheng; Li, Nan; Huang, Yongming; Quek, Tony Q. S.

Affiliations: National Mobile Communication Research Laboratory, Frontiers Science Center for Mobile Information Communication and Security, School of Information Science and Engineering, Southeast University, Nanjing 210096, China; Purple Mountain Laboratories, Nanjing 211111, China; Department of Wireless and Device Technology Research, China Mobile Research Institute, Beijing, China; Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore 487372, Singapore

Published in: IEEE Transactions on Communications (IEEE Trans Commun)

Year/Volume/Issue: 2024

Subject classification: 0810 [Engineering - Information and Communication Engineering]; 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0804 [Engineering - Instrument Science and Technology]; 0714 [Science - Statistics (degrees awarded in Science or Economics)]; 0802 [Engineering - Mechanical Engineering]; 0835 [Engineering - Software Engineering]; 0701 [Science - Mathematics]; 0823 [Engineering - Transportation Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awarded in Engineering or Science)]

Keywords: Bandwidth

Abstract: As a key technology in beyond-5G and future 6G communications, RAN slicing can realize differentiated service level agreement (SLA) guarantees. In this paper, we investigate multi-slice, multi-user RAN slicing. A dynamic bandwidth allocation scheme for RAN slicing is proposed based on hierarchical intelligence, in which bandwidth pre-allocation and hyper-parameter tuning for MAC-layer schedulers are jointly optimized to maximize the system utility, i.e., the weighted sum of the spectrum efficiency (SE) and SLA satisfaction ratio (SSR) of the different slices. The problem is formulated as a twin-time-scale Markov decision process (MDP), in which bandwidth pre-allocation and scheduler parameter tuning are performed on a long time scale (e.g., seconds) and a short time scale (e.g., 100 ms), respectively, and for which we propose a hierarchical twin-time-scale Dueling deep Q-network (TTS-DDQN) algorithm. A new reward-clipping mechanism is proposed to achieve a better trade-off between training stability and system utility. To improve robustness to time-varying traffic patterns and non-stationary dynamic environments, we further propose a traffic-aware module for more efficient sampling of the experience pool, and a variational adversarial inverse reinforcement learning (VAIRL) module for automated reward design. Extensive simulations show that the traffic-aware TTS-DDQN in stationary scenarios and the VAIRL-embedded TTS-DDQN in non-stationary scenarios outperform typical existing DQN-based algorithms as well as hard-slicing and non-slicing baselines. © 1972-2012 IEEE.
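For orientation, the sketch below illustrates in Python (PyTorch) two generic ingredients the abstract names: a Dueling Q-network (a shared trunk with separate value and advantage heads) and a clipped weighted-sum utility reward combining SE and SSR. It is a minimal sketch under assumed notation: all layer sizes, utility weights, and clipping bounds are hypothetical placeholders, and the paper's actual TTS-DDQN operates on two time scales with a more elaborate reward-clipping mechanism that is not reproduced here.

```python
# Hypothetical illustration only; names, dimensions, weights, and bounds are
# assumptions, not values from the paper.
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    """Dueling architecture: shared trunk, separate value/advantage heads."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)             # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v = self.value(h)
        a = self.advantage(h)
        # Standard dueling combination: Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a')
        return v + a - a.mean(dim=-1, keepdim=True)

def clipped_utility_reward(se: float, ssr: float, w_se: float = 0.5,
                           w_ssr: float = 0.5, lo: float = -1.0,
                           hi: float = 1.0) -> float:
    """Weighted sum of spectrum efficiency and SLA satisfaction ratio,
    clipped to a fixed range for training stability (placeholder rule)."""
    return max(lo, min(hi, w_se * se + w_ssr * ssr))
```

The dueling split is what lets the agent learn the value of a slicing state separately from the relative merit of each bandwidth/scheduler action, which is generally helpful when many actions yield similar utility.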
