As industrial control network threats become increasingly complex, traditional intrusion detection systems (IDS) struggle to capture implicit relationships due to feature redundancy and intricate feature interactions. This increases computational complexity and detection latency, making it difficult for industrial control systems (ICS) to meet the requirements for fast and accurate responses to security events. In this study, we propose a dynamic anomaly detection model for industrial control networks, named EfficientTransformer. The model uses a 1D Convolutional Neural Network (1D-CNN) to extract local features from the data, while a linear multi-head self-attention mechanism, which replaces the standard Transformer's multi-head attention, provides global learning capability; this reduces computational complexity and enables efficient parallel learning. Additionally, to address class imbalance, the model incorporates a weighted cross-entropy loss function that assigns higher weights to the minority class of abnormal traffic, improving its anomaly detection ability. Together, these components mitigate feature redundancy and complex feature interactions, enhancing the model's dynamic processing capability and accuracy. The method was validated on the Oil and Gas Gathering and Transportation Full-Process Industrial Platform Attack-Defense Field and the Catalytic Reforming Unit Process Platform at the Key Laboratory of Information Security for the Petrochemical Industry in Liaoning Province. Experimental results show that the proposed EfficientTransformer improves accuracy by 1.01% and 2.26% over the standard Transformer on the two datasets and significantly reduces testing time, demonstrating its applicability in the field of industrial information security.
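The two mechanisms the abstract names, linear multi-head self-attention and a class-weighted cross-entropy loss, are standard techniques. The sketch below is a minimal PyTorch rendering of them under stated assumptions: the kernel feature-map formulation of linear attention (in the style of Katharopoulos et al., 2020) stands in for the paper's unspecified linear attention variant, and the class weights and all names are illustrative placeholders, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearMultiHeadAttention(nn.Module):
    """Kernelized linear self-attention: O(n) in sequence length instead of
    the O(n^2) of standard softmax attention. A generic sketch, not the
    authors' exact layer."""
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Split into heads: (batch, heads, seq, head_dim).
        shape = (b, n, self.num_heads, self.head_dim)
        q, k, v = (t.view(shape).transpose(1, 2) for t in (q, k, v))
        # A positive feature map makes the attention product associative,
        # so K^T V can be computed once and reused for every query position.
        q, k = F.elu(q) + 1, F.elu(k) + 1
        kv = torch.einsum('bhnd,bhne->bhde', k, v)            # (b, h, d, d)
        z = 1.0 / (torch.einsum('bhnd,bhd->bhn', q, k.sum(dim=2)) + 1e-6)
        out = torch.einsum('bhnd,bhde,bhn->bhne', q, kv, z)   # (b, h, n, d)
        return self.out(out.transpose(1, 2).reshape(b, n, d))

# Weighted cross-entropy: up-weight the minority (abnormal-traffic) class.
class_weights = torch.tensor([1.0, 5.0])   # [normal, abnormal]; illustrative
criterion = nn.CrossEntropyLoss(weight=class_weights)

x = torch.randn(8, 128, 64)                # (batch, window length, features)
attn = LinearMultiHeadAttention(dim=64, num_heads=4)
logits = attn(x)                           # same shape as x, computed in O(n)
```

Computing `kv` once per layer makes the cost linear in window length, which is the property the abstract credits for the reduced detection latency and testing time.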
This paper provides an in-depth evaluation of three state-of-the-art Large Language Models (LLMs) for personalized career mentoring in the computing field, using three distinct student profiles that consider gender, r...
In recent years, research and technology advancements have driven exponential growth in the adoption of Artificial Intelligence (AI)-based systems, even in safety-critical contexts such as autonomous driving and healt...
In the field of medical image analysis, accurate classification of images is crucial for diagnosing diseases and formulating treatment plans. Many studies have shown that global features and local features help reduce...
Stress is a widespread phenomenon in our modern society. Each individual experiences stress differently, and a person’s prolonged exposure to it may seriously affect their health. It is therefore essential to identif...
The process of dividing a digital image into multiple parts is called segmentation. In scanned documents, these sections are those that contain backgrounds, text, and images. In applications linked to document analysis, tex...
Neural volumetric representations such as Neural Radiance Fields (NeRF) have emerged as a compelling technique for learning to represent 3D scenes from images with the goal of rendering photorealistic images of the scene from unobserved viewpoints. However, NeRF's computational requirements are prohibitive for real-time applications: rendering views from a trained NeRF requires querying a multilayer perceptron (MLP) hundreds of times per ray. We present a method to train a NeRF, then precompute and store (i.e., "bake") it as a novel representation called a Sparse Neural Radiance Grid (SNeRG) that enables real-time rendering on commodity hardware. To achieve this, we introduce 1) a reformulation of NeRF's architecture and 2) a sparse voxel grid representation with learned feature vectors. The resulting scene representation retains NeRF's ability to render fine geometric details and view-dependent appearance, is compact (averaging less than 90 MB per scene), and can be rendered in real-time (higher than 30 frames per second on a laptop GPU). Actual screen captures are shown in our video.
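To make the baked representation concrete, here is a minimal NumPy sketch of the rendering side under stated assumptions: a dense placeholder grid with a density and an RGB value per voxel stands in for the sparse grid, lookups are nearest-neighbor rather than trilinear, and the per-pixel view-dependence MLP that SNeRG evaluates on the stored feature vectors is omitted. Only the ray-marching and alpha-compositing logic below is the generic mechanism; everything else is a stand-in.

```python
import numpy as np

GRID = 128
# Placeholder "baked" contents; SNeRG actually stores opacity, diffuse color,
# and a learned feature vector per occupied voxel in a sparse block structure.
density = np.random.rand(GRID, GRID, GRID).astype(np.float32)
rgb = np.random.rand(GRID, GRID, GRID, 3).astype(np.float32)

def render_ray(origin, direction, num_samples=256, step=1.0 / 256):
    """March one ray through the unit cube, compositing front to back."""
    color = np.zeros(3, dtype=np.float32)
    transmittance = 1.0
    for i in range(num_samples):
        p = origin + (i + 0.5) * step * direction     # sample point in [0, 1)^3
        if np.any(p < 0.0) or np.any(p >= 1.0):
            continue                                  # outside the grid
        idx = tuple((p * GRID).astype(int))           # nearest-neighbor lookup
        alpha = 1.0 - np.exp(-density[idx] * step)    # opacity of this segment
        color += transmittance * alpha * rgb[idx]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:                      # early ray termination
            break
    return color

pixel = render_ray(np.array([0.5, 0.5, 0.0]), np.array([0.0, 0.0, 1.0]))
```

Because each step is a table read rather than the hundreds of MLP queries per ray that NeRF requires, this loop is the part that becomes real-time once the grid is sparse and GPU-resident.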