Recently, Deep Neural Networks (DNNs) have been used extensively for Automatic Modulation Classification (AMC). Due to their high complexity, DNNs are typically unsuitable for deployment at resource-constrained edge networks. They are also vulnerable to adversarial attacks, a significant security concern. This work proposes a Rotated Binary Large ResNet (RBLResNet) for AMC that can be deployed at the network edge because of its low complexity. The performance gap between RBLResNet and existing architectures with floating-point weights and activations can be closed by two proposed ensemble methods: (i) Multilevel Classification (MC) and (ii) bagging multiple RBLResNets. The MC method achieves an accuracy of 93.39% at 10 dB over all 24 modulation classes of the DeepSig dataset. This performance is comparable to the state of the art, with 4.75 times lower memory and 1214 times lower computation. Furthermore, RBLResNet exhibits high adversarial robustness compared to existing DNN models. The proposed MC method employing RBLResNets achieves a notable adversarial accuracy of 87.25% across a wide range of Signal-to-Noise Ratios (SNRs), outperforming, to the best of our knowledge, existing methods and well-established defense mechanisms. Low memory, low computation, and high adversarial robustness make it a strong choice for robust AMC on low-power edge devices.
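For intuition, the sketch below combines the two low-complexity ingredients the abstract describes: sign-binarized convolution weights trained with a straight-through estimator, and a bagged ensemble whose members vote by averaging their softmax outputs. The network layout (TinyBinaryAMC), the I/Q input shape, and all layer sizes are illustrative assumptions, not the paper's actual RBLResNet or MC method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignSTE(torch.autograd.Function):
    """Sign binarization with a (clipped) straight-through gradient,
    the standard trick for training binary-weight networks."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, g):
        (x,) = ctx.saved_tensors
        return g * (x.abs() <= 1).float()  # pass gradients only where |x| <= 1

class BinaryConv1d(nn.Conv1d):
    """Conv layer whose weights are binarized to {-1, +1} on the fly."""
    def forward(self, x):
        return F.conv1d(x, SignSTE.apply(self.weight), self.bias,
                        self.stride, self.padding)

class TinyBinaryAMC(nn.Module):
    """Hypothetical low-complexity classifier over I/Q frames
    (batch x 2 x 1024), standing in for the paper's RBLResNet."""
    def __init__(self, n_classes=24):
        super().__init__()
        self.features = nn.Sequential(
            BinaryConv1d(2, 32, 7, padding=3), nn.BatchNorm1d(32), nn.ReLU(),
            nn.MaxPool1d(4),
            BinaryConv1d(32, 64, 5, padding=2), nn.BatchNorm1d(64), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def bagged_predict(models, x):
    """Bagging at inference: average the members' softmax outputs
    (each member assumed trained on a bootstrap resample)."""
    probs = torch.stack([F.softmax(m(x), dim=-1) for m in models])
    return probs.mean(0).argmax(-1)
```

At deployment only the binarized weights need to be stored, which is where the memory saving over floating-point architectures comes from.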
ISBN:
(Print) 9789819985395; 9789819985401
This paper shows that integrating intermediate features brings superior efficiency to transfer learning without hindering performance. The implementation of our proposed Fuse Tune is flexible and simple: we preserve and concatenate features in the pre-trained backbone during forward propagation. A hierarchical feature decoder with shallow attentions is then designed to further mine the rich relations across features. Since the deeply buried features are entirely exposed, backpropagation through the cumbersome backbone is no longer required, which greatly improves training efficiency. We observe that Fuse Tune performs especially well in self-supervised and label-scarce scenarios, proving its adaptive and robust representation transfer ability. Experiments demonstrate that Fuse Tune achieves a 184% temporal-efficiency and 169% memory-efficiency lead over Prompt Tune on ImageNet. Our code is open-sourced at https://***/JeavanCode/FuseTune.
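As a rough illustration of the mechanism described above (assuming a standard PyTorch/torchvision setup), one can freeze a pre-trained backbone, capture its stage outputs with forward hooks, and train only a lightweight head over the concatenated features, so the backbone never needs a backward pass. The pooled-feature fusion and plain linear decoder below are simplifying assumptions; the paper's actual decoder is hierarchical with shallow attentions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Freeze a pre-trained backbone; it will never receive gradients.
backbone = resnet50(weights="IMAGENET1K_V2").eval()
for p in backbone.parameters():
    p.requires_grad_(False)

# Capture intermediate features at each residual stage via forward hooks.
feats = {}
def save(name):
    def hook(module, inputs, output):
        feats[name] = output.mean(dim=(2, 3))  # global-average-pool each stage
    return hook

stages = ("layer1", "layer2", "layer3", "layer4")
for name in stages:
    getattr(backbone, name).register_forward_hook(save(name))

# Only this small decoder is trained; 3840 = 256 + 512 + 1024 + 2048,
# the channel widths of ResNet-50's four stages.
decoder = nn.Sequential(nn.LayerNorm(3840), nn.Linear(3840, 1000))

x = torch.randn(8, 3, 224, 224)
with torch.no_grad():            # forward only; no backprop through the backbone
    backbone(x)
logits = decoder(torch.cat([feats[n] for n in stages], dim=1))
```

Because gradients flow only through the small decoder, per-step time and memory scale with the decoder rather than the backbone, which is the source of the efficiency gains the abstract reports.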
Much progress has been made on learning to hash for uni-modal and cross-modal retrieval. However, how to directly and efficiently learn discriminative discrete hash codes for multimedia retrieval, where both query and database samples are represented by heterogeneous multi-modal features, remains an open problem. With this motivation, we propose a Fast Discrete Collaborative Multi-modal Hashing (FDCMH) method in this paper. We first propose an efficient collaborative multi-modal mapping that transforms heterogeneous multi-modal features into unified factors, exploiting the complementarity of multi-modal features and preserving the semantic correlations across modalities with linear computation and space complexity. These shared factors also bridge the heterogeneous modality gap and remove inter-modality redundancy. Further, we develop an asymmetric hashing learning module to simultaneously correlate the learned hash codes with the low-level data distribution and high-level semantics. In particular, this design avoids the challenging symmetric semantic matrix factorization and its O(n²) memory cost (n is the number of training samples), supporting both computation- and memory-efficient discrete hash optimization. Experiments on several public multimedia retrieval datasets demonstrate the superiority of the proposed approach over state-of-the-art hashing techniques, in terms of both model learning efficiency and retrieval accuracy.
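To see why an asymmetric design can sidestep the O(n²) cost, note that for a label-derived similarity S = 2*Y*Y^T - 1*1^T, any product S @ V can be evaluated as 2*Y @ (Y^T @ V) minus a rank-one correction, without ever materializing the n-by-n matrix. The sketch below illustrates this trick with generic notation; the variable names (Y, V, r) and the final sign update are assumptions in the spirit of asymmetric hashing, not FDCMH's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, r = 10_000, 20, 32                      # samples, classes, code length
Y = rng.integers(0, 2, (n, c)).astype(float)  # label matrix
V = rng.standard_normal((n, r))               # real-valued auxiliary variable

# Naive route (do NOT do this): materializing S costs O(n^2) memory.
#   S = 2 * Y @ Y.T - 1;  SV = S @ V

# Asymmetric/linear route: the same product in O(n * (c + r)) memory.
ones = np.ones((n, 1))
SV = 2.0 * Y @ (Y.T @ V) - ones @ (ones.T @ V)

# Discrete step: pick binary codes aligned with the semantic product,
# a closed-form sign update common in asymmetric hashing schemes.
B = np.sign(SV)
B[B == 0] = 1
```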