TC-MSNet: A Multi-feature Extraction Neural Framework for Automatic Modulation Recognition

Authors: Wang, Yang; Feng, Yongxin; Song, Bixue; Tian, Binghe; Qian, Bo

Affiliation: Shenyang Ligong Univ, Sch Informat Sci & Engn, 6 Nanping Middle Rd, Shenyang 110159, Liaoning, Peoples R China

Published in: WIRELESS PERSONAL COMMUNICATIONS (Wireless Pers Commun)

Year/Volume/Issue: 2025, Vol. 140, No. 1-2

Pages: 377-398

Subject Classification: 0810 [Engineering - Information and Communication Engineering]; 0809 [Engineering - Electronic Science and Technology (engineering or science degrees may be conferred)]; 08 [Engineering]

Funding: National Natural Science Foundation of China; Liaoning Education Department [LJKZ0242]

Keywords: Automatic modulation recognition; Deep learning; Feature extraction; Feature fusion; Attention mechanism

Abstract: In Automatic Modulation Recognition (AMR), modulation modes with similar characteristics are easily confused under adverse factors such as noise, which inevitably degrades recognition accuracy. To address this problem, this paper proposes TC-MSNet, a novel multi-scale spatial-temporal feature collaboration neural network based on deep learning. TC-MSNet extracts Temporal Correlation (TC) features and Multi-Scale Spatial (MSS) features separately to enhance the diversity of feature extraction; meanwhile, a fusion strategy built on a selective kernel attention mechanism enables sufficient collaboration among the multi-scale features. Additionally, a Bilinear-Pooling Mechanism (BPM) is adopted to capture fine-grained feature differences, eliminating confusion among distinct modulation modes and thereby improving AMR accuracy. Simulation results on the RML2016.10a and RML2016.10b datasets show that the proposed TC-MSNet not only outperforms existing deep learning-based AMR methods but also has a stronger ability to resolve confusion among similar features.
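The bilinear pooling the abstract refers to is commonly realized as the flattened outer product of two feature vectors, followed by signed square-root and L2 normalization. The sketch below is an illustrative reconstruction of that general recipe, not the authors' implementation; the function name `bilinear_pool` and the toy "temporal"/"spatial" vectors are assumptions for demonstration only.

```python
import math

def bilinear_pool(x, y):
    """Fuse two feature vectors via bilinear pooling.

    Flattens the outer product x y^T to capture pairwise feature
    interactions, then applies signed square-root and L2
    normalization, a common recipe for fine-grained features.
    """
    # Outer product of the two vectors, flattened to one vector.
    z = [xi * yj for xi in x for yj in y]
    # Signed square-root dampens large activations while keeping sign.
    z = [math.copysign(math.sqrt(abs(v)), v) for v in z]
    # L2 normalization (guard against the all-zero vector).
    norm = math.sqrt(sum(v * v for v in z)) or 1.0
    return [v / norm for v in z]

# Hypothetical example: fuse a 3-dim "temporal" feature with a
# 2-dim "spatial" feature, yielding a 6-dim fused descriptor.
fused = bilinear_pool([1.0, -2.0, 0.5], [0.5, 1.5])
print(len(fused))  # 6 = 3 * 2
```

Because the fused vector encodes every pairwise product of the two inputs, small differences between otherwise similar modulation signatures are amplified, which is why this kind of pooling is often used to separate fine-grained classes.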
