Author Affiliations: Southwest Jiaotong Univ, Sch Comp & Artificial Intelligence, Sichuan Prov Key Lab Signal & Informat Proc, Chengdu 611756, Peoples R China; Southwest Jiaotong Univ, Sch Informat Sci & Technol, Chengdu 611756, Peoples R China
Publication: 《WIRELESS NETWORKS》
Year/Volume/Issue: 2024, Vol. 30, No. 4
Pages: 2143-2157
Subject Classification: 0810 [Engineering - Information and Communication Engineering]; 0808 [Engineering - Electrical Engineering]; 0809 [Engineering - Electronic Science and Technology (Engineering or Science degree)]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (Engineering or Science degree)]
Funding: Sichuan Science and Technology Program (CN); The 2023 Sichuan Provincial Science and Technology Innovation Seedling Project (MZGC20230079)
Keywords: Modulation recognition; Interpretability; Network pruning
Abstract: Deep-learning-based automatic modulation recognition has been extensively explored for wireless communication systems because of the strong feature-extraction and classification abilities of deep neural networks. Despite their high recognition accuracy and low false-alarm rates, however, such models raise concerns about complexity and interpretability that can hinder practical deployment. In this paper, we propose a random-perturbation convolutional kernel activation mapping (RPCKAM) strategy to explain modulation recognition networks, and an RPCKAM-based filter pruning method to compress them, combining the reasoning behind and interpretation of the networks' decisions with the compression of the network models. RPCKAM evaluates the importance of convolutional kernels, and the resulting importance ranking provides a basis for network pruning. The effect of pruning is verified with VGG16 and ResNet34 as baseline networks. The experimental results show that RPCKAM-based convolutional kernel pruning can compress the model on a large scale while maintaining high accuracy, and can even improve modulation recognition accuracy within a certain compression range: the maximum accuracy improvement is close to 1% for VGG16 and close to 0.5% for ResNet34. Furthermore, analysis of the performance changes before and after model compression shows that ResNet34 compresses more effectively than VGG16, and that the RPCKAM-based pruning method outperforms the Grad-CAM-based pruning method.
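The pipeline the abstract describes (score each convolutional filter by how strongly random perturbations of its weights change the network output, rank filters by that score, then prune the lowest-ranked ones) can be sketched as follows. This is a minimal NumPy illustration on a toy single-layer model, not the paper's actual RPCKAM implementation: the `layer_output`, `perturbation_importance`, and `prune` functions, the Gaussian perturbation scheme, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_output(x, filters):
    """Toy 'conv' layer: each row of `filters` acts as one kernel,
    and the layer output is ReLU(filters @ x)."""
    return np.maximum(filters @ x, 0.0)

def perturbation_importance(x, filters, n_trials=50, sigma=0.1):
    """Score each filter by the mean absolute change in the layer
    output when only that filter's weights are randomly perturbed
    (larger change -> filter is deemed more important)."""
    base = layer_output(x, filters)
    scores = np.zeros(len(filters))
    for i in range(len(filters)):
        for _ in range(n_trials):
            noisy = filters.copy()
            noisy[i] += rng.normal(0.0, sigma, size=filters.shape[1])
            scores[i] += np.abs(layer_output(x, noisy) - base).sum()
    return scores / n_trials

def prune(filters, scores, keep_ratio=0.5):
    """Keep only the top fraction of filters by importance score."""
    k = max(1, int(len(filters) * keep_ratio))
    keep = np.argsort(scores)[::-1][:k]
    return filters[np.sort(keep)]

x = rng.normal(size=16)              # one toy input feature vector
filters = rng.normal(size=(8, 16))   # 8 filters in the layer
scores = perturbation_importance(x, filters)
pruned = prune(filters, scores, keep_ratio=0.5)
print(pruned.shape)                  # (4, 16): half the filters removed
```

In a real network the scoring would run over many input samples and the pruned model would typically be fine-tuned afterwards to recover accuracy, as is standard in filter-pruning work.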