ISBN: (Print) 9798350391558; 9798350379990
The rapid evolution of the Industrial Internet of Things (IIoT) has brought numerous security concerns, among them the looming threat of False Data Injection (FDI) attacks. To address these attacks, this study introduces a novel approach, MLBT-FDIA-IIoT (False Data Injection Attack detection in IIoT using parallel Physics-Informed Neural Networks with the Giza Pyramid Construction Optimization algorithm), which uses real-time sensor data for attack detection. The data is first preprocessed with Distributed Set-Membership Fusion Filtering (DSMFF) to remove noise, then fed into a neural network for classification: parallel Physics-Informed Neural Networks (PPINN) distinguish normal operations from False Data Injection Attacks (FDIAs). Because PPINN alone lacks an optimization method for accurate detection, the study applies the Giza Pyramid Construction Optimization Algorithm (GPCOA) to tune the PPINN classifier and detect attacks more precisely. The proposed MLBT-FDIA-IIoT method is implemented in MATLAB and evaluated on metrics such as accuracy, recall, and precision. The results demonstrate significant improvements over existing techniques such as MLT-FDI-IIoT, FDIA-FDAS-IIoT, and DCDD-IIoT-FDIA.
Authors:
Liu, Kangkang; Chen, Ningjiang
Guangxi Univ, Coll Comp & Elect Informat, Nanning, Peoples R China
Guangxi Univ, Educ Dept Guangxi Zhuang Autonomous Reg Key Lab Parallel Distributed & Intelligent Comp, Nanning, Peoples R China
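The detection pipeline described above (denoise the sensor stream, classify each reading, tune the classifier with a metaheuristic) can be sketched with simple stand-ins. The smoothing filter, threshold classifier, and random-search tuner below are illustrative placeholders for DSMFF, PPINN, and GPCOA respectively, not the paper's actual algorithms.

```python
import random

def denoise(readings, window=3):
    """Crude set-membership-style smoothing stand-in for DSMFF:
    replace each sample by the midpoint of its local min/max band."""
    out = []
    for i in range(len(readings)):
        seg = readings[max(0, i - window + 1):i + 1]
        out.append((min(seg) + max(seg)) / 2)
    return out

def classify(sample, threshold):
    """Toy stand-in for the PPINN classifier: flag a reading as
    injected when it deviates beyond the tuned threshold."""
    return abs(sample) > threshold

def tune_threshold(data, labels, iters=200, seed=0):
    """Random-search stand-in for GPCOA: pick the threshold that
    maximizes accuracy on labeled training data."""
    rng = random.Random(seed)
    best_t, best_acc = 0.0, -1.0
    for _ in range(iters):
        t = rng.uniform(0, max(abs(x) for x in data))
        acc = sum(classify(x, y_t) == y for x, y, y_t in
                  ((x, y, t) for x, y in zip(data, labels))) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```

In the real method, the threshold search above would be replaced by GPCOA optimizing the PPINN weights; the pipeline shape (filter, classify, optimize) is the same.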
ISBN: (Print) 9798350349184; 9798350349191
With their steadily increasing performance, deep convolutional neural networks are widely used in many computer vision tasks. However, a large convolutional neural network model requires substantial memory and computing resources, making it difficult to meet the low-latency and reliability requirements of edge computing when the model is deployed locally on resource-limited devices. Quantization is a model compression technique that can effectively reduce model size, computation cost, and inference latency, but quantization noise degrades the accuracy of the quantized model. To address this precision loss, this paper proposes a post-training quantization method based on scale optimization. By reducing the influence of redundant model parameters on the quantization parameters, the method optimizes the scale factor to reduce quantization error, thereby improving the accuracy of the quantized model, reducing inference latency, and improving the reliability of edge applications. Experimental results show that across different quantization strategies and bit widths, the proposed method improves quantized-model accuracy, with the best quantized model gaining 1.36% absolute accuracy. The improvement is substantial and facilitates the deployment of deep neural networks in edge environments.
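The scale-optimization idea can be illustrated with a minimal sketch: quantize weights to int8 with a scale factor, then grid-search that scale to minimize reconstruction error rather than deriving it from the largest (possibly redundant) weight. This is a generic illustration of scale-factor search, not the paper's method.

```python
def quantize(weights, scale, bits=8):
    """Round weights to signed integers at the given scale, clipping
    to the representable range."""
    qmax = 2 ** (bits - 1) - 1
    return [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]

def dequantize(q, scale):
    return [v * scale for v in q]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def naive_scale(weights, bits=8):
    """Max-based calibration: one outlier weight sets the scale."""
    return max(abs(w) for w in weights) / (2 ** (bits - 1) - 1)

def optimized_scale(weights, bits=8, steps=100):
    """Grid-search the scale to minimize reconstruction error, so a
    few redundant outlier weights do not inflate the scale factor."""
    base = naive_scale(weights, bits)
    best_s = base
    best_e = mse(weights, dequantize(quantize(weights, base, bits), base))
    for i in range(1, steps + 1):
        s = base * i / steps
        e = mse(weights, dequantize(quantize(weights, s, bits), s))
        if e < best_e:
            best_s, best_e = s, e
    return best_s
```

Because the naive scale is always in the search grid, the optimized scale never reconstructs worse than the max-based calibration.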
The active millimeter-wave scanner plays an increasingly pivotal role in public safety by employing a non-contact method to detect contraband concealed beneath human clothing. However, millimeter-wave images encounter...
ISBN: (Print) 9798350386066; 9798350386059
Containers are widely deployed in clouds. There are two common container architectures: operating-system-level (OS-level) containers and virtual-machine-level (VM-level) containers, with runc and Kata as typical examples. It is well known that VM-level containers provide better isolation than OS-level containers, but at higher overhead. Although quantitative analyses of the performance gap between the two architectures exist, they rarely discuss the gap under constrained resources provisioned to containers. Since high-density container deployment is in demand in the cloud, each container is provisioned with limited resources specified via the cgroup mechanism. In this paper, we provide an in-depth analysis of the storage and network performance differences (two key aspects) between runc and Kata under varying resource constraints. We identify configuration implications that are crucial to performance and find that some of them are not exposed by the Kata interfaces. Based on these findings, we propose a profiling tool that automatically offers configuration suggestions for optimizing container performance. Our evaluation shows that the auto-generated configuration can improve MySQL performance by up to 107% in the TPCC benchmark compared with the default Kata setup.
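As a toy illustration of turning cgroup limits into VM-level container settings, the sketch below derives vCPU and memory suggestions from the CPU quota/period pair and the memory limit, in the spirit of the profiling tool described above. The output key names are hypothetical placeholders, not actual Kata configuration options.

```python
def suggest_config(cpu_quota_us, cpu_period_us, memory_bytes):
    """Map cgroup-style limits (cpu.max quota/period, memory.max)
    to illustrative VM-level container settings.

    The returned keys are made up for this sketch; a real tool would
    emit actual Kata configuration entries."""
    # cgroup grants quota/period CPUs' worth of time; size the guest
    # with at least one vCPU
    vcpus = max(1, cpu_quota_us // cpu_period_us)
    mem_mb = memory_bytes // (1024 * 1024)
    return {
        "vcpus": vcpus,
        "memory_mb": mem_mb,
        "virtio_queues": min(vcpus, 8),  # scale I/O queues with vCPUs
    }
```

For example, a container limited to two CPUs' worth of quota and 2 GiB of memory would be suggested a two-vCPU, 2048 MB guest.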
ISBN: (Print) 9798350391954
The proceedings contain 53 papers. The topics discussed include: cloud-enabled blood bank management for an efficient healthcare system; a face forgery video detection model based on knowledge distillation; design of a sharing system based on privacy-preserving personal data; optimizing software evolution: navigating the landscape through concept location; DGBot: a DeGlobalizing graph transformer model for bot detection; radio number of the Cartesian product of stars and middle graph of cycles; image denoising based on Swin transformer residual Conv U-Net; a social network analysis of user-organized community on digital music platform; differential game and simulation of supply chain joint promotion considering spillover effect; and analysis of subjective evaluation of AI speech synthesis emotional expressiveness.
The paper details an Ethereum blockchain platform for smart grid energy trading, employing smart contracts and security measures like access control. It separates front-end and back-end, supporting secure integration ...
A scalable bandwidth-adaptive on-chip storage network architecture is proposed to address the severe data conflict and low bus parallelism in existing multi-level storage, Crossbar, and NoC architectures in edge accel...
ISBN: (Print) 9798400701771
The proceedings contain 24 papers. The topics discussed include: fast VM replication on heterogeneous hypervisors for robust fault tolerance; Sora: a latency-sensitive approach for microservice soft resource adaptation; INSANE: a unified middleware for QoS-aware network acceleration in edge cloud computing; an end-to-end performance comparison of seven permissioned blockchain systems; BASALT: a rock-solid Byzantine-tolerant peer sampling for very large decentralized networks; OrderlessChain: a CRDT-based BFT coordination-free blockchain without global order of transactions; characterizing distributed machine learning workloads on Apache Spark; Pravega: a tiered storage system for data streams; bridging the gap of timing assumptions in Byzantine consensus; and Kernel-as-a-Service: a serverless programming model for heterogeneous hardware accelerators.
ISBN: (Print) 9798350369458; 9798350369441
The Internet of Things (IoT) is a breakthrough technology that interconnects and empowers numerous smart devices, allowing them to communicate, collect, and exchange data. The most challenging issue in IoT networks is securing the exchanged data. Virtual Private Networks (VPNs) have a significant impact on ensuring security within IoT systems, so a VPN-based solution can bring multiple security benefits to IoT systems, especially for exchanged data. However, most VPN-based solutions rely on a single central server responsible for managing all VPN connections. This becomes a major problem as the number of managed VPNs grows, since it degrades the VPN server's performance and can even create a single point of failure. In this paper we propose a distributed VPN-based solution in which distributed fog nodes play the role of VPN servers. To determine the VPN server responsible for a specific communication, a fully decentralized and efficient search across the fog nodes is performed using the DHT-based Chord protocol. To enforce security and immutability, a blockchain verifies the criteria a new requester must meet to join a specific VPN. Performance evaluations show that our solution is efficient in terms of cost, time, and complexity.
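The Chord-style lookup used to locate the fog node responsible for a VPN can be sketched as follows: hash an identifier onto a ring and return the first node clockwise from that position. This is a generic Chord successor lookup, not the paper's implementation, and the `ring_id` helper is an assumed convention for mapping names onto the ring.

```python
import hashlib

def ring_id(name, ring_bits=8):
    """Hash an identifier (e.g. a VPN name) onto the Chord ring."""
    h = int(hashlib.sha1(name.encode()).hexdigest(), 16)
    return h % (2 ** ring_bits)

def chord_successor(node_ids, key, ring_bits=8):
    """Return the node responsible for `key`: the first node whose ID
    is >= the key's position on the ring, wrapping around."""
    space = 2 ** ring_bits
    k = key % space
    ring = sorted(n % space for n in node_ids)
    for n in ring:
        if n >= k:
            return n
    return ring[0]  # wrap around past the highest node ID
```

In the full protocol, each node holds only a finger table rather than the whole membership list, giving O(log N) routing hops instead of this linear scan.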
ISBN: (Print) 9798400717932
Subgraph isomorphism enumerates all embeddings in a data graph that are identical to a query graph. It is a well-known NP-hard problem widely used in various domains such as bioinformatics, chem-informatics, and social network analysis. Recent work has focused on using GPUs for subgraph isomorphism, but due to the massive scale of intermediate results, current GPU implementations face challenges in scaling across multiple nodes because of high communication costs, and the computational power of CPUs is left underutilized. We present a distributed framework for subgraph isomorphism that leverages heterogeneous CPU-GPU computing. It eliminates intermediate results on the GPU and significantly reduces communication overhead during load balancing. Experiments indicate that our algorithm extends to multiple nodes with near-linear scaling. Furthermore, our method significantly outperforms existing GPU-based works, reaching up to a 21x improvement over the state-of-the-art implementation CuTS in a distributed environment.
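For reference, the enumeration problem itself can be sketched as a simple backtracking search, the sequential baseline that GPU systems such as CuTS parallelize. This sketch is a generic illustration, not the paper's distributed algorithm.

```python
def subgraph_isomorphisms(query, data):
    """Enumerate all injective mappings of query vertices onto data
    vertices that preserve every query edge (backtracking search).
    Both graphs are adjacency dicts: vertex -> set of neighbors."""
    qv = list(query)
    results = []

    def extend(mapping):
        if len(mapping) == len(qv):
            results.append(dict(mapping))
            return
        u = qv[len(mapping)]  # next query vertex to match
        for v in data:
            if v in mapping.values():
                continue  # keep the mapping injective
            # every already-mapped neighbor of u must map to a neighbor of v
            if all(mapping[w] in data[v] for w in query[u] if w in mapping):
                mapping[u] = v
                extend(mapping)
                del mapping[u]

    extend({})
    return results
```

The "massive intermediate results" the abstract refers to are the partial mappings this recursion holds implicitly on its stack; breadth-first GPU formulations materialize them all at once, which is what drives their memory and communication costs.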