Supply chain management and Hyperledger are two interconnected domains. They leverage blockchain technology to enhance efficiency, transparency, and security in supply chain operations. Together, they provide a decent...
Data selection can be used in conjunction with adaptive filtering algorithms to avoid unnecessary weight updating and thereby reduce computational overhead. This paper presents a novel correntropy-based data selection...
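The abstract above is truncated, but the quantity it builds on, correntropy with a Gaussian kernel, is standard: the expected kernel similarity between two signals, estimated here from error samples. The sketch below is illustrative only; the function names, the `sigma` bandwidth, and the fixed-threshold selection rule are assumptions of this sketch, not the paper's actual criterion.

```python
import math

def correntropy(errors, sigma=1.0):
    """Sample correntropy of an error sequence under a Gaussian kernel:
    the average of exp(-e^2 / (2*sigma^2)) over all error samples."""
    return sum(math.exp(-(e * e) / (2 * sigma * sigma)) for e in errors) / len(errors)

def select_for_update(errors, sigma=1.0, threshold=0.5):
    """Hypothetical selection rule: trigger a weight update only when the
    kernel value of the current error is high (i.e. the sample looks
    consistent with the bulk of the data, not an outlier)."""
    return [math.exp(-(e * e) / (2 * sigma * sigma)) >= threshold for e in errors]
```

A zero error yields a kernel value of 1.0, while a large outlier error yields a value near 0 and would be skipped, which is how such rules avoid unnecessary weight updates.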
Gradient compression is a promising approach to alleviating the communication bottleneck in data-parallel deep neural network (DNN) training by significantly reducing the data volume of gradients for synchronization. While gradient compression is being actively adopted by the industry (e.g., Facebook and AWS), our study reveals two critical but often overlooked challenges: 1) inefficient coordination between compression and communication during gradient synchronization incurs substantial overheads, and 2) developing, optimizing, and integrating gradient compression algorithms into DNN systems imposes heavy burdens on DNN practitioners, and ad-hoc compression implementations often yield surprisingly poor system performance. In this paper, we propose a compression-aware gradient synchronization architecture, CaSync, which relies on flexible composition of basic computing and communication primitives. It is general, compatible with any gradient compression algorithm and gradient synchronization strategy, and enables high-performance computation-communication pipelining. We further introduce a gradient compression toolkit, CompLL, to enable efficient development and automated integration of on-GPU compression algorithms into DNN systems with little programming burden. Lastly, we build a compression-aware DNN training framework, HiPress, with CaSync and CompLL. HiPress is open-sourced and runs on mainstream DNN systems such as MXNet, TensorFlow, and PyTorch. Evaluation on a 16-node cluster with 128 NVIDIA V100 GPUs and a 100 Gbps network shows that HiPress improves training speed over current compression-enabled systems (e.g., BytePS-onebit, Ring-DGC, and PyTorch-PowerSGD) by 9.8%-69.5% across six popular DNN models.
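To make the compression side of this concrete, here is a minimal sketch of top-k gradient sparsification, one of the well-known compression styles in this space (Ring-DGC is built on it). This is illustrative only: it is not CaSync/CompLL code, and the function names are this sketch's own.

```python
def topk_compress(grad, ratio):
    """Keep only the k = ratio * n entries of largest magnitude,
    returning a sparse {index: value} representation for synchronization."""
    k = max(1, int(len(grad) * ratio))
    top = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return {i: grad[i] for i in top}

def topk_decompress(sparse, n):
    """Rebuild a dense gradient of length n, zero-filling pruned entries."""
    return [sparse.get(i, 0.0) for i in range(n)]
```

At a 1% ratio this cuts the synchronized data volume by roughly 100x; the paper's point is that realizing that saving end-to-end also requires pipelining the compression step with communication, which is what CaSync's primitive composition addresses.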
Controlling an active distribution network (ADN) from a single PCC has been advantageous for improving the performance of coordinated intermittent RESs (IRESs). Recent studies have proposed a constant PQ regulation approach at the PCC of ADNs using coordination of non-MPPT-based DGs. However, due to the intermittent nature of DGs coupled with the PCC through uni-directional broadcast communication, the PCC becomes vulnerable to transient disturbances. To address this challenge, this study first presents a detailed mathematical model of an ADN from the perspective of PCC regulation to realize rigidness of the PCC against disturbances. Then, an H∞ controller is formulated and employed to achieve optimal performance against disturbances, consequently ensuring the least oscillations during transients at the PCC. Moreover, an eigenvalue analysis is presented to analyze the convergence speed limitations of the newly derived system model. Finally, simulation results show that the proposed method offers superior performance compared to state-of-the-art methods.
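The eigenvalue analysis mentioned above rests on a standard fact: for a linear system ẋ = Ax, the real parts of A's eigenvalues set the convergence speed, with the least negative real part bounding how fast transients decay. A minimal sketch for a 2x2 state matrix, using the characteristic polynomial directly (the matrix values below are hypothetical, not from the paper):

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 state matrix [[a, b], [c, d]] from its
    characteristic polynomial: lambda^2 - tr*lambda + det = 0."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:
        r = math.sqrt(disc)
        return ((tr + r) / 2, (tr - r) / 2)   # two real eigenvalues
    r = math.sqrt(-disc)
    # complex pair: real part tr/2 governs decay, imaginary part oscillation
    return (complex(tr / 2, r / 2), complex(tr / 2, -r / 2))

# Hypothetical closed-loop matrix: both eigenvalues have negative real
# parts, so the system is stable; the slowest one (-2) dominates settling.
lams = eig2(-2.0, 1.0, 0.0, -5.0)
```

The same check on the full closed-loop model is how one quantifies the convergence speed limitations the abstract refers to.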
The world of digitization is growing exponentially;data optimization, security of a network, and energy efficiency are becoming more prominent. The Internet of Things (IoT) is the core technology of modern society. Th...
This work presents an essential module for the Transfer Learning approach's classification of melanoma skin lesions. Melanoma, a highly lethal form of skin cancer, poses a significant health threat globally. Image...
Graph neural networks (GNNs) have gained increasing popularity, while usually suffering from unaffordable computations for real-world large-scale applications. Hence, pruning GNNs is of great need but largely unexplored. The recent work Unified GNN Sparsification (UGS) studies lottery ticket learning for GNNs, aiming to find a subset of model parameters and graph structures that can best maintain the GNN performance. However, it is tailored for the transductive setting and fails to generalize to unseen graphs, which are common in inductive tasks like graph classification. In this work, we propose a simple and effective learning paradigm, Inductive Co-Pruning of GNNs (ICPG), to endow graph lottery tickets with inductive pruning capacity. To prune the input graphs, we design a predictive model that generates an importance score for each edge based on the input. To prune the model parameters, ICPG views each weight's magnitude as its importance score. We then design an iterative co-pruning strategy that trims the graph edges and GNN weights based on their importance scores. Although strikingly simple, ICPG surpasses the existing pruning method and is universally applicable in both inductive and transductive learning settings. On ten graph-classification and two node-classification benchmarks, ICPG matches the original performance level at 14.26%-43.12% sparsity for graphs and 48.80%-91.41% sparsity for the GNN model.
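The parameter-side scoring described above, magnitude as importance, is the classic magnitude-pruning criterion. A minimal sketch of one pruning round over a flat weight list (illustrative only; the function name and flat representation are this sketch's simplifications, and ICPG applies such rounds iteratively alongside edge pruning):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest |w|,
    keeping the rest -- magnitude as the importance score."""
    n_keep = max(1, int(len(weights) * (1 - sparsity)))
    keep = set(sorted(range(len(weights)),
                      key=lambda i: abs(weights[i]),
                      reverse=True)[:n_keep])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]
```

In the iterative co-pruning loop, a round like this alternates with edge pruning driven by the learned edge scorer, and the model is retrained between rounds so the surviving weights adapt.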
Smart power grids are vulnerable to security threats due to their cyber-physical nature. Existing data-driven detectors aim to address simple traditional false data injection attacks (FDIAs). However, adversarial fals...
In recent years, tungsten disulfide (WS₂) and tungsten selenide (WSe₂) have emerged as favorable electrode materials because of their high theoretical capacity, large interlayer spacing, and high chemical activity; nevertheless, they have relatively low electronic conductivity and undergo large volume expansion during cycling, which greatly hinders their practical application. These drawbacks are addressed by combining a superior type of carbon material, graphene, with WS₂ and WSe₂ to form WS₂/WSe₂@graphene nanocomposites. These materials have received considerable attention in electrochemical energy storage applications such as lithium-ion batteries (LIBs), sodium-ion batteries (SIBs), and supercapacitors. Considering the rapidly growing research enthusiasm on this topic over the past several years, here the recent progress of WS₂/WSe₂@graphene nanocomposites in electrochemical energy storage applications is summarized. Furthermore, various methods for the synthesis of WS₂/WSe₂@graphene nanocomposites are reported, and the relationships among these methods, nano/microstructures, and electrochemical performance are systematically summarized and discussed. In addition, the challenges and prospects for the future study and application of WS₂/WSe₂@graphene nanocomposites in electrochemical energy storage are proposed.
This article proposes a novel design approach for miniaturized, highly selective, self-packaged, and wide-stopband filtering slot antennas based on C- and T-type folded substrate integrated waveguide (C-/T-FSIW) cavit...