In the current mobile communication field, 5G technology represents a revolutionary advance, providing users with higher data transmission speed, lower latency, and better network connection quality. However, with the c...
In cloud computing, resources and memory are dynamically allocated to users according to their needs. As cloud adoption grows, security is considered a major issue, and intrusion detection is regarded as a significant tool for building a reliable and secure cloud environment. Performing intrusion detection in the cloud is difficult because of its distributed nature and extensive usage. An intrusion detection system (IDS) is widely used to find malicious actions in a network, but in the cloud most conventional IDSs are vulnerable to attacks and cannot maintain the balance between sensitivity and accuracy. Thus, we propose an effective dragonfly improved invasive weed optimization-based Shepard convolutional neural network (DIIWO-based ShCNN) to detect intruders and mitigate attacks in the cloud paradigm. The proposed method outperforms existing methods with a maximum accuracy of 0.960, sensitivity of 0.967, and specificity of 0.961.
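The abstract does not spell out the ShCNN internals, but the network is named after Shepard (inverse-distance-weighted) interpolation. Below is a minimal NumPy sketch of that interpolation step only; the function and parameter names are chosen here for illustration and are not taken from the paper.

```python
import numpy as np

def shepard_interpolate(known_xy, known_vals, query_xy, p=2.0, eps=1e-8):
    """Shepard (inverse-distance-weighted) interpolation.

    known_xy   : (N, D) coordinates with observed values
    known_vals : (N,)   observed values
    query_xy   : (M, D) coordinates to estimate
    p          : power controlling how quickly influence decays with distance
    """
    # Pairwise distances between query and known points: shape (M, N)
    d = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=-1)
    w = 1.0 / (d ** p + eps)              # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)     # normalize weights per query point
    return w @ known_vals                 # weighted average of observed values
```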
ISBN:
(Print) 9798350354249; 9798350354232
This paper introduces a structured application of the One-Class approach and the One-Class-One-Network model for supervised classification tasks, specifically addressing a vowel phoneme classification case study within the Automatic Speech Recognition research field. Through pseudo-neural Architecture Search and Hyper-Parameter Tuning experiments conducted with an informed grid-search methodology, we achieve classification accuracy comparable to today's complex architectures (90.0-93.7%). Despite its simplicity, our model prioritizes generalization of language context and distributed applicability, supported by relevant statistical and performance metrics. The experiment code is openly available on our GitHub.
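The One-Class-One-Network idea of training one small binary network per class and classifying by the highest per-class score can be illustrated with a minimal scikit-learn sketch; the class structure, hidden-layer size, and all names below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

class OneClassOneNetwork:
    """One small binary network per class; predict by the highest per-class score."""

    def __init__(self, n_classes, hidden=(32,)):
        self.models = [MLPClassifier(hidden_layer_sizes=hidden, max_iter=500)
                       for _ in range(n_classes)]

    def fit(self, X, y):
        # Each network solves a one-vs-rest problem for its own class.
        for c, model in enumerate(self.models):
            model.fit(X, (y == c).astype(int))
        return self

    def predict(self, X):
        # Score of the positive (target) class from every per-class network.
        scores = np.stack([m.predict_proba(X)[:, 1] for m in self.models], axis=1)
        return scores.argmax(axis=1)
```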
Hybrid microgrids are an emerging technology that integrates the key characteristics of AC and DC microgrids without requiring significant changes to the distribution network. However, their protection faces a lot of chall...
ISBN:
(Print) 9783031697654; 9783031697661
Graph neural networks (GNNs) have experienced rapid advancements in recent years due to their ability to learn meaningful representations from graph data structures. Federated Learning (FL) has emerged as a viable machine learning approach for training a shared model on decentralized data, addressing privacy concerns while leveraging parallelism. Existing methods that address the unique requirements of federated GNN training using remote embeddings to enhance convergence accuracy are limited by their diminished performance due to large communication costs with a shared embedding server. In this paper, we present OpES, an optimized federated GNN training framework that uses remote neighbourhood pruning and overlaps pushing of embeddings to the server with local training to reduce the network costs and training time. The modest drop in per-round accuracy due to the pre-emptive push of embeddings is outstripped by the reduction in per-round training time for large and dense graphs like Reddit and Products, converging up to approximately 2x faster than the state-of-the-art technique using an embedding server and giving up to 20% better accuracy than vanilla federated GNN learning.
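The core scheduling idea of overlapping the embedding push with local training can be sketched with a background thread; `embedding_server.push` and `model.training_step` are hypothetical placeholders, not OpES APIs.

```python
import threading

def train_round(model, local_batches, embedding_server, local_embeddings):
    """One federated round: push embeddings to the server while training locally.

    embedding_server.push and model.training_step are hypothetical placeholders.
    """
    # Launch the (slow) network transfer in the background.
    push = threading.Thread(target=embedding_server.push, args=(local_embeddings,))
    push.start()

    # Local GNN training proceeds without waiting on the transfer.
    for batch in local_batches:
        model.training_step(batch)

    # Ensure the (slightly stale) embeddings have reached the server before syncing.
    push.join()
```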
Deep Learning models, and specifically Recurrent neural networks, have been successfully applied to time series classification in many applications, including Structural Health Monitoring. A relatively new field of re...
Current data mining algorithms suffer from incomplete mining functionality, which makes them take too long to run. This paper designs a data mining algorithm based on a BP neural network. Analyze th...
ISBN:
(Print) 9798350311990
GNN models on heterogeneous graphs have achieved state-of-the-art (SOTA) performance in various graph tasks such as link prediction and node classification. Despite their success in providing SOTA results, popular GNN libraries, such as PyG and DGL, fail to provide fast and efficient solutions for heterogeneous GNN models. One common key bottleneck of models like RGAT, RGCN, and HGT is relation-specific linear projection. In this paper, we propose two high-performing tensor operators, gather-mm and segment-mm, to address the issue. We demonstrate the effectiveness of the proposed operators in training two popular heterogeneous GNN models, RGCN and HGT. Our proposed approaches improve the full-batch training time of RGCN by up to 3x and mini-batch training time by up to 2x.
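To make the relation-specific projection bottleneck concrete, here is a pure-NumPy sketch of what a segment-mm-style operator computes (one matmul per relation segment); the actual operators are optimized kernels, and the signature below is illustrative only.

```python
import numpy as np

def segment_mm(x, weights, seg_lens):
    """Relation-specific projection: rows of x are sorted so that rows of the
    same relation are contiguous, and each segment gets its own weight matrix.

    x        : (N, d_in)        input features, grouped by relation
    weights  : (R, d_in, d_out) one projection matrix per relation
    seg_lens : (R,)             number of rows in each relation segment
    """
    out, start = [], 0
    for r, n in enumerate(seg_lens):
        out.append(x[start:start + n] @ weights[r])  # one matmul per relation
        start += n
    return np.concatenate(out, axis=0)
```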
Authors:
Wang, Qipeng; Han, Min
Dalian Univ Technol, Fac Elect Informat & Elect Engn, Dalian 116024, Peoples R China
Dalian Univ Technol, Minist Educ Key Lab Intelligent Control & Optimizat Ind Equip, Dalian 116024, Peoples R China
Dalian Univ Technol, Profess Technol Innovat Ctr Distributed Control I, Dalian 116024, Peoples R China
In recent years, multivariate time series prediction has attracted extensive research interest. However, the dynamic changes of the spatial topology and the temporal evolution of multivariate variables bring great challenges to spatio-temporal time series prediction. In this paper, a novel Dirichlet graph convolution module is introduced to automatically learn the spatio-temporal representation, and we combine graph attention (GAT) and a neural differential equation (NDE) based on nonlinear state transition to model the spatio-temporal state evolution of nonlinear systems. Specifically, the spatial topology is revealed by the cosine similarity of node embeddings. The use of multi-layer Dirichlet graph convolution aims to enhance the representation ability of the model while suppressing over-smoothing and over-separation. A GCN- and LSTM-based network is used as the nonlinear operator to model the evolution law of the dynamic system, and the GAT updates the strength of the connections. In addition, the Euler trapezoidal integration method is used to model the temporal dynamics and to make medium- and long-term predictions in latent space from the perspective of nonlinear state transition. The proposed model can adaptively mine spatial correlations and discover spatio-temporal dynamic evolution patterns through the coupled NDE, which makes the modeling process more interpretable. Experimental results demonstrate the effectiveness of spatio-temporal dynamic discovery on predictive performance.
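Two of the described building blocks, deriving the spatial topology from cosine similarity of node embeddings and an explicit trapezoidal (improved-Euler) latent-state update, can be sketched as follows; the similarity threshold and step size are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cosine_adjacency(node_emb, threshold=0.5):
    """Infer the spatial topology from pairwise cosine similarity of node embeddings."""
    unit = node_emb / np.linalg.norm(node_emb, axis=1, keepdims=True)
    sim = unit @ unit.T                         # pairwise cosine similarity
    adj = np.where(sim > threshold, sim, 0.0)   # keep only sufficiently similar pairs
    np.fill_diagonal(adj, 0.0)                  # no self-loops
    return adj

def trapezoidal_step(f, z, h):
    """Explicit trapezoidal (improved Euler) update of the latent state dz/dt = f(z)."""
    k1 = f(z)
    k2 = f(z + h * k1)
    return z + 0.5 * h * (k1 + k2)
```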
Deep neural network (DNN)-based synthetic aperture radar (SAR) automatic target recognition (ATR) methods have been shown to be vulnerable, particularly in low-data scenarios, also known as generalized few-shot learning situations. This vulnerability significantly restricts their real-world applications. Therefore, enhancing the generalization capability of DNN-based methods is crucial for practical SAR target recognition. To address this challenge, we observe that misclassified samples in generalized few-shot SAR target recognition often exhibit behaviors similar to adversarial samples: they tend to be located near the classification boundary and far from their respective class centers. To tackle this issue, we propose a max-Mahalanobis centers (MMC)-guided adversarial network (M2cgAN). This approach focuses on learning compact representations that cluster around preset optimal centers for different targets, thereby reducing the likelihood of misclassifying SAR images. Specifically, we first utilize MMC because of its property that, if the input is distributed according to a max-Mahalanobis distribution, a linear classifier achieves optimal robustness against misclassified samples. Building on this, we leverage this property in generalized few-shot SAR ATR by employing a DNN, specifically ResNet12, to project SAR images onto the preset optimal centers defined by MMC. Finally, an adaptive area perception module is developed to extract the most discriminative regions in SAR images, effectively reducing the impact of strong scattering points in complex backgrounds. The effectiveness and efficiency of M2cgAN are validated through comprehensive experiments on the widely used MSTAR and OpenSARShip benchmarks.
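Max-Mahalanobis centers are preset class centers arranged as a regular simplex on a sphere so that they are maximally separated. A NumPy sketch of constructing such centers and of a simple pull-to-center loss follows; the scale and loss form are chosen here for illustration and are not taken from M2cgAN.

```python
import numpy as np

def simplex_centers(n_classes, dim, scale=10.0):
    """Preset class centers forming a regular simplex (pairwise inner product
    -1/(L-1) before scaling), the arrangement used for max-Mahalanobis centers.
    Requires dim >= n_classes for this simple construction."""
    L = n_classes
    mu = np.zeros((L, dim))
    mu[0, 0] = 1.0
    for i in range(1, L):
        for j in range(i):
            # Enforce the target inner product with every earlier center.
            mu[i, j] = (-1.0 / (L - 1) - mu[i, :j] @ mu[j, :j]) / mu[j, j]
        mu[i, i] = np.sqrt(max(1.0 - np.sum(mu[i, :i] ** 2), 0.0))  # unit length
    return scale * mu

def center_loss(features, labels, centers):
    """Pull each feature toward the preset center of its class (illustrative loss)."""
    diff = features - centers[labels]
    return 0.5 * np.mean(np.sum(diff ** 2, axis=1))
```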