Direct imaging of exoplanets is crucial for advancing our understanding of planetary systems beyond our solar system, but it faces significant challenges due to the high contrast between host stars and their planets. ...
We present progress towards realizing electronic-photonic quantum systems on-chip; particularly, entangled photon-pair sources, placing them in the context of previous work, and outlining our vision for mass-producible...
In the design and planning of next-generation Internet of Things (IoT), telecommunication, and satellite communication systems, controller placement is crucial in software-defined networking (SDN). The programmability of the SDN controller is sophisticated for centralized control of the entire network, but it creates a significant loophole for the manifestation of a distributed denial of service (DDoS) attack. Moreover, a Distributed Reflected Denial of Service (DRDoS) attack, an unusual form of DDoS attack, has recently emerged, yet minimal deliberation has been given to this forthcoming single point of SDN infrastructure failure. Also, the frequency of DDoS attacks has recently increased. In this paper, a smart algorithm for planning SDN smart backup controllers under DDoS attack scenarios is proposed. The proposed smart algorithm can recommend single or multiple smart backup controllers in the event of a DDoS attack. The obtained simulation results validate the proposed algorithm, and the performance analysis achieved 99.99% accuracy in placing the smart backup controller under DDoS attacks within 0.125 to 46508.7 s in SDN.
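The backup-controller recommendation described above can be illustrated with a minimal sketch. The candidate sites, latency matrix, attack flags, and ranking criterion below are all hypothetical stand-ins, not the paper's actual smart algorithm:

```python
# Hypothetical sketch: recommend backup controller sites that are not under
# DDoS attack, ranked by worst-case switch-to-controller latency. The data
# and the ranking rule are illustrative assumptions.

def recommend_backups(latency, under_attack, k=1):
    """Return up to k candidate sites, best first.

    latency: dict mapping site name -> list of latencies to all switches (s)
    under_attack: set of site names currently targeted by DDoS
    """
    healthy = {s: lats for s, lats in latency.items() if s not in under_attack}
    ranked = sorted(healthy, key=lambda s: max(healthy[s]))  # minimize worst-case latency
    return ranked[:k]

sites = {
    "c1": [0.010, 0.030, 0.020],
    "c2": [0.015, 0.012, 0.018],
    "c3": [0.050, 0.040, 0.060],
}
print(recommend_backups(sites, under_attack={"c1"}, k=2))  # ['c2', 'c3']
```

The sketch only shows the selection step; detecting the attack and migrating switches to the recommended controller are separate concerns.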
The rapid increase in computer hardware capabilities demands greater parallel efficiency. While existing CPU parallelization techniques using OpenMP provide adequate performance on parallelizing inten...
ISBN (digital): 9798350383508
ISBN (print): 9798350383515
Feature-only partition of large graph data in distributed Graph Neural Network (GNN) training offers advantages over the commonly adopted graph structure partition, such as minimal graph preprocessing cost and elimination of cross-worker subgraph sampling burdens. Nonetheless, the performance bottleneck of GNN training with feature-only partitions still lies largely in the substantial communication overhead of cross-worker feature fetching. To reduce the communication overhead and expedite distributed training, we first investigate and answer two key questions on convergence behaviors of the GNN model in feature-partition-based distributed GNN training: 1) As no worker holds a complete copy of each feature, can gradient exchange among workers compensate for the information loss due to incomplete local features? 2) If the answer to the first question is negative, is feature fetching in every training iteration of the GNN model necessary to ensure model convergence? Based on our theoretical findings on these questions, we derive an optimal communication plan that decides the frequency of feature fetching during training, taking into account bandwidth levels among workers and striking a balance between model loss and training time. Extensive evaluation demonstrates results consistent with our theoretical analysis and the effectiveness of our proposed design.
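The core idea of the communication plan, fetching remote features only every few iterations rather than every iteration, can be sketched as follows. The fixed period `tau` and the fetch/train stubs are illustrative assumptions; the paper derives the fetch frequency from bandwidth and convergence analysis:

```python
# Minimal sketch of periodic feature fetching in feature-partitioned
# distributed GNN training. tau and the callbacks are illustrative, not the
# paper's bandwidth-aware optimal plan.

def train_loop(num_iters, tau, fetch_remote_features, train_step):
    cached = None
    for it in range(num_iters):
        if it % tau == 0:               # cross-worker fetch only every tau iterations
            cached = fetch_remote_features()
        train_step(cached)              # gradient exchange still occurs every iteration

fetches = []
train_loop(
    num_iters=6,
    tau=3,
    fetch_remote_features=lambda: fetches.append("fetch") or "features",
    train_step=lambda feats: None,
)
print(len(fetches))  # 2
```

With `tau = 3` over six iterations, fetching happens at iterations 0 and 3, cutting feature traffic to a third while stale cached features are reused in between.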
Gaussian processes (GPs) are commonly used for geospatial analysis, but they suffer from high computational complexity when dealing with massive data. For instance, the log-likelihood function required in estimating the statistical model parameters for geospatial data is a computationally intensive procedure that involves computing the inverse of a covariance matrix of size $n \times n$, where $n$ represents the number of geographical locations in the simplest case. As a result, in the literature, studies have shifted towards approximation methods to handle larger values of $n$ effectively while maintaining high accuracy. These methods encompass a range of techniques, including low-rank and sparse approximations. Among these techniques, Vecchia approximation is one of the most promising methods to speed up evaluating the log-likelihood function. This study presents a parallel implementation of the Vecchia approximation technique, utilizing batched matrix computations on contemporary GPUs. The proposed implementation relies on batched linear algebra routines to efficiently execute individual conditional distributions in the Vecchia algorithm. We rely on the KBLAS linear algebra library to perform batched linear algebra operations, reducing the time to solution compared to the state-of-the-art parallel implementation of the likelihood estimation operation in the ExaGeoStat software by up to 700X, 833X, and 1380X on 32GB GV100, 80GB A100, and 80GB H100 GPUs, respectively, with the largest matrix dimension that can fully fit into the GPU memory in the dense Maximum Likelihood Estimation (MLE) case. We also successfully manage larger problem sizes on a single NVIDIA GPU, accommodating up to 1 million locations with 80GB A100 and H100 GPUs while maintaining the necessary application accuracy. We further assess the accuracy performance of the implemented algorithm, identifying the optimal settings for the Vecchia approximation algorithm to preserve accuracy on two real geospatial datasets.
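The Vecchia approximation replaces the single $n \times n$ solve in the exact Gaussian log-likelihood with $n$ small conditional-distribution solves, each involving only a few neighbors; these small solves are exactly what the paper batches on the GPU via KBLAS. A plain NumPy sketch (exponential covariance, previous-`m`-points conditioning, and all parameter values are illustrative choices, not the paper's configuration):

```python
# Illustrative Vecchia approximation of a Gaussian log-likelihood in 1-D.
# Each observation is conditioned on at most m previously ordered points,
# so every iteration solves an m x m system instead of one n x n system.
import numpy as np

def exp_cov(x, y, variance=1.0, length=0.3):
    """Exponential covariance kernel between 1-D location vectors x and y."""
    return variance * np.exp(-np.abs(x[:, None] - y[None, :]) / length)

def vecchia_loglik(z, locs, m=2):
    n = len(z)
    ll = 0.0
    for i in range(n):
        nbrs = np.arange(max(0, i - m), i)        # the m nearest previous points
        c_ii = exp_cov(locs[i:i+1], locs[i:i+1])[0, 0]
        if len(nbrs):
            C_nn = exp_cov(locs[nbrs], locs[nbrs])
            c_in = exp_cov(locs[i:i+1], locs[nbrs])[0]
            w = np.linalg.solve(C_nn, c_in)       # small dense solve (batched on GPU)
            mu = w @ z[nbrs]                      # conditional mean
            var = c_ii - w @ c_in                 # conditional variance
        else:
            mu, var = 0.0, c_ii
        ll += -0.5 * (np.log(2 * np.pi * var) + (z[i] - mu) ** 2 / var)
    return ll

rng = np.random.default_rng(0)
locs = np.sort(rng.uniform(0.0, 1.0, 50))
z = rng.standard_normal(50)
print(vecchia_loglik(z, locs, m=2))
```

With `m = n - 1` the chain rule of conditional densities makes this expression exact; smaller `m` trades accuracy for the near-linear cost that the batched GPU implementation exploits.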
ISBN (digital): 9798331508050
ISBN (print): 9798331508067
Consumer Internet of Things (IoT) networks have gained widespread popularity due to their convenience, automation, and security provisions in personal and home environments. Ubiquitous resource-constrained devices, however, are plagued with security issues that often arise from firmware-related issues and their propagated effects. While various studies on firmware attestation are available, they require firmware copies, specific hardware, and complex computation on the IoT device. This paper presents a study on the application of Graph Transformer Networks (GTN) in verifying the firmware integrity of consumer IoT swarms using SRAM as an attestation feature. The proposed method achieves an overall 0.99 accuracy on authentic samples from development and physical twin networks, 0.99 on malware, and 0.97 on propagated misbehavior at a $\sim 10^{-4}$ second inference latency on a laptop CPU.
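The paper classifies SRAM dumps with a Graph Transformer Network; a much simpler baseline conveys the underlying attestation idea of comparing a device's SRAM contents against a golden reference. The dump bytes, tolerance threshold, and accept/reject rule below are made-up illustrations, not the paper's method:

```python
# Illustrative baseline only: flag firmware tampering by the Hamming distance
# between an SRAM dump and a golden reference, tolerating a few bit flips of
# SRAM startup noise. The GTN in the paper learns far richer structure.

def hamming(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def attest(dump: bytes, golden: bytes, max_flips: int = 4) -> bool:
    """Accept the device if the dump is within max_flips bits of the reference."""
    return hamming(dump, golden) <= max_flips

golden   = bytes([0b10101010] * 8)
clean    = bytes([0b10101010] * 7 + [0b10101011])  # one noisy startup bit
tampered = bytes([0b01010101] * 8)                 # firmware rewrote the region
print(attest(clean, golden), attest(tampered, golden))  # True False
```

A fixed threshold like this cannot recognize propagated misbehavior across a swarm, which is precisely the gap the graph-based classifier in the paper targets.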
Unsupervised representation learning has been widely explored across various modalities, including neural architectures, where it plays a key role in downstream applications like Neural Architecture Search (NAS). Thes...
Vulnerability detection is crucial for ensuring the security and reliability of software systems. Recently, Graph Neural Networks (GNNs) have emerged as a prominent code embedding approach for vulnerability detection,...