Author Affiliations: Nanjing Univ Posts & Telecommun, Key Lab Broadband Wireless Commun & Sensor Network, Minist Educ, Nanjing 210003, Jiangsu, Peoples R China; Nanjing Univ Posts & Telecommun, Natl Engn Res Ctr Commun & Network Technol, Nanjing 210003, Jiangsu, Peoples R China; Nanjing Univ Posts & Telecommun, Jiangsu Key Lab Wireless Commun, Nanjing 210003, Jiangsu, Peoples R China; Loughborough Univ, Wolfson Sch Mech Elect & Mfg Engn, Loughborough LE11 3TU, Leics, England; Nanjing Tech Univ, Coll Comp Sci & Technol, Nanjing 210003, Jiangsu, Peoples R China
Publication: IEEE Access
Year/Volume: 2019, Vol. 7
Pages: 7599-7605
Core Indexing:
Funding: Priority Academic Program Development of Jiangsu Higher Education Institutions; National Natural Science Foundation of China [61701258, 61501223, 61501248]; Jiangsu Specially Appointed Professor Program [RK002STP16001]; Program for Jiangsu Six Top Talent [XYDXX-010]; Program for High-Level Entrepreneurial and Innovative Talents Introduction [CZ0010617002]; Natural Science Foundation of Jiangsu Province [BK20170906]; Natural Science Foundation of Jiangsu Higher Education Institutions [17KJB510044]; NUPTSF [XK0010915026]; 1311 Talent Plan of Nanjing University of Posts and Telecommunications; UK EPSRC [EP/N007840/1] (Funding Source: UKRI); Leverhulme Trust [RPG-2017-129]
Keywords: MIMO; beamforming; deep learning; unsupervised learning; network pruning
Abstract: In the downlink transmission scenario, power allocation and beamforming design at the transmitter are essential when multiple antenna arrays are used. This paper considers a multiple-input multiple-output (MIMO) broadcast channel and aims to maximize the weighted sum-rate under a total power constraint. The classical weighted minimum mean-square error (WMMSE) algorithm can obtain suboptimal solutions but involves high computational complexity. To reduce this complexity, we propose a fast beamforming design method based on unsupervised learning, which trains a deep neural network (DNN) offline and provides real-time service online using only simple neural network operations. The training process is end-to-end and requires no labeled samples, avoiding the complicated process of obtaining labels. Moreover, we use the APoZ (Average Percentage of Zeros)-based pruning algorithm to compress the network, which further reduces the computational complexity and size of the DNN, making it more suitable for devices with low computational capacity. Finally, the experimental results demonstrate that the proposed method improves computational speed significantly while achieving performance close to that of the WMMSE algorithm.
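The sketch below is a minimal illustration (not the authors' released code) of the two ideas summarized in the abstract: a DNN is trained end-to-end without labels by using the negative weighted sum-rate as the loss, and an APoZ score is computed per hidden neuron as a pruning criterion. The user/antenna sizes, the real-valued channel model, the network architecture, and the pruning threshold are all assumptions made for brevity.

```python
# Minimal sketch of unsupervised beamforming training: a DNN maps the
# channel matrix to beamforming vectors, and the negative weighted
# sum-rate serves as the loss, so no labeled (e.g., WMMSE) solutions
# are needed. Real-valued channels and the layer sizes are assumptions.
import torch
import torch.nn as nn

K, N = 4, 8          # users, transmit antennas (assumed example sizes)
P_max = 10.0         # total power budget
sigma2 = 1.0         # noise power
weights = torch.ones(K)

class BeamformingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(K * N, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, K * N),
        )
    def forward(self, H):                      # H: (batch, K, N)
        W = self.net(H.flatten(1)).view(-1, K, N)
        # Rescale beamformers so the total power constraint is satisfied.
        norm = W.flatten(1).norm(dim=1, keepdim=True).clamp_min(1e-9)
        return W * (P_max ** 0.5) / norm.unsqueeze(-1)

def weighted_sum_rate(H, W):
    # Cross gains |h_k^T w_j|^2 for all user/beam pairs: (batch, K, K)
    gains = torch.einsum('bkn,bjn->bkj', H, W) ** 2
    signal = gains.diagonal(dim1=1, dim2=2)
    interference = gains.sum(dim=2) - signal
    rates = torch.log2(1.0 + signal / (interference + sigma2))
    return (weights * rates).sum(dim=1).mean()

model = BeamformingNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    H = torch.randn(128, K, N)                 # random training channels
    loss = -weighted_sum_rate(H, model(H))     # unsupervised objective
    opt.zero_grad(); loss.backward(); opt.step()

# APoZ-based pruning criterion (sketch): the Average Percentage of Zeros
# of each hidden neuron's ReLU output over a batch; neurons whose APoZ
# exceeds a threshold contribute little and are candidates for pruning.
with torch.no_grad():
    h1 = torch.relu(model.net[0](torch.randn(1024, K * N)))
    apoz = (h1 == 0).float().mean(dim=0)       # per-neuron zero fraction
    prune_mask = apoz > 0.9                    # hypothetical threshold
```

In this sketch the loss is differentiable in the network weights, so the offline training phase only needs sampled channel realizations; the online phase is a single forward pass, which is where the speed-up over iterative WMMSE comes from.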