Islanding detection is a well-established topic in microgrids, but under degraded network conditions the conventional islanding detection methodologies become impractical. This article develops a new islanding detection technique that accounts for interrupted communication during islanding detection. A new predictive model (PM)-based method for islanding detection is validated experimentally on a laboratory-based multiple distributed generation (DG) system (a system with more than one DG). Micro-phasor measurement units (μPMUs) are incorporated to measure phasors from the real-time system. The measurements are analyzed by an artificial neural network (ANN) to initialize the PM, which reports the occurrence of islanding as a probability. Two subalgorithms, evaluation and decoding, work simultaneously to estimate the probability of islanding while the network is sound. Posterior probability evaluation takes over when communication is interrupted. The accuracy of the algorithm is tested under UL 1741 loading conditions, and its performance is analyzed under the simultaneous occurrence of communication interruption and several islanding or non-islanding conditions. The non-detection zone has been evaluated for different scenarios of missing frames, and the algorithm has also been compared with existing methodologies.
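The abstract describes estimating an islanding probability frame by frame and falling back to a posterior evaluation when μPMU frames go missing. A minimal sketch of that idea, assuming a simple Bayesian odds update over per-frame probabilities (the function name, update rule, and the carry-forward fallback are illustrative, not the paper's algorithm):

```python
# Illustrative sketch: fuse per-frame islanding probabilities from an
# ANN-style predictive model; when a u-PMU frame is missing (communication
# interruption), carry the last posterior forward instead of updating.

def fuse_islanding_probability(frames, prior=0.5):
    """Return the running posterior probability of islanding.

    frames: list of per-frame probabilities in (0, 1), or None for a
    missing frame. A missing frame leaves the posterior unchanged,
    standing in for the paper's posterior-probability evaluation step.
    """
    posterior = prior
    for p in frames:
        if p is None:
            continue  # interrupted communication: keep current posterior
        # Bayesian update in odds form, treating p as the frame's evidence
        odds = (posterior / (1 - posterior)) * (p / (1 - p))
        posterior = odds / (1 + odds)
    return posterior
```

With two confident frames (0.9 each) and one missing frame in between, the posterior rises above either single-frame estimate, while a stream of missing frames simply preserves the prior.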
Deep neural networks (DNNs) have gained widespread adoption in diverse fields, including image classification, object detection, and natural language processing. However, training large-scale DNN models often encounters significant memory bottlenecks, which call for efficient management of extensive tensors. Heterogeneous memory systems, which combine persistent memory (PM) modules with traditional DRAM, offer an economically viable solution to address tensor management challenges during DNN training. However, existing memory management methods on heterogeneous memory systems often lead to low PM access efficiency, low bandwidth utilization, and incomplete analysis of model characteristics. To overcome these hurdles, we introduce an efficient tensor management approach, DeepTM, tailored for heterogeneous memory to alleviate memory bottlenecks during DNN training. DeepTM employs page-level tensor aggregation to enhance PM read and write performance and executes contiguous page migration to increase memory bandwidth. Through an analysis of tensor access patterns and model characteristics, we quantify the overall performance and cast the performance optimization problem as an Integer Linear Program. Additionally, we achieve tensor heat recognition by dynamically adjusting the weights of four key tensor characteristics and develop a global optimization strategy using deep reinforcement learning. To validate the efficacy of our approach, we implement and evaluate DeepTM using the TensorFlow framework running on a PM-based heterogeneous memory system. The experimental results demonstrate that DeepTM achieves performance improvements of up to 36% and 49% compared to the current state-of-the-art memory management strategies AutoTM and Sentinel, respectively. Furthermore, our solution reduces overhead by 18 times and achieves up to 29% cost reduction compared to AutoTM.
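DeepTM formulates DRAM/PM tensor placement as an Integer Linear Program driven by tensor "heat". A greedy stand-in conveys the core idea under stated assumptions (the tuple layout, the heat-per-byte priority, and the capacity model are illustrative, not DeepTM's actual formulation):

```python
# Illustrative sketch: place hot tensors in fast DRAM until it is full,
# spill the rest to slower persistent memory (PM). A real ILP solver
# would trade off migrations and bandwidth; this greedy pass only shows
# the heat-driven placement intuition.

def place_tensors(tensors, dram_capacity):
    """tensors: list of (name, size, heat); returns {name: 'DRAM' | 'PM'}."""
    placement, used = {}, 0
    # Highest heat per byte first: the most frequently accessed data
    # gets the scarce fast-memory budget.
    for name, size, heat in sorted(tensors, key=lambda t: -t[2] / t[1]):
        if used + size <= dram_capacity:
            placement[name] = "DRAM"
            used += size
        else:
            placement[name] = "PM"
    return placement
```

For example, with three equal-sized tensors of heat 100, 50, and 1 and DRAM room for only two, the two hottest land in DRAM and the cold one spills to PM.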
In the current digital era, Distributed Denial of Service (DDoS) attacks can be recognized as a prevalent and perilous network threat, posing substantial risks to both organizations and individuals. Consequently, the ...
ISBN: (Print) 9798350342918
Distributed deep neural network (DNN) training is important to support artificial intelligence (AI) applications such as image classification, natural language processing, and autonomous driving. Unfortunately, the distributed property makes DNN training vulnerable to system failures. Checkpointing is generally used to provide failure tolerance, but it suffers from high runtime overheads. In order to enable high-performance and low-latency checkpointing, we propose a lightweight checkpointing system for distributed DNN training, called LightCheck. To reduce the checkpointing overheads, we leverage fine-grained asynchronous checkpointing by pipelining checkpointing in a layer-wise way. To further decrease the checkpointing latency, we leverage a software-hardware codesign methodology by integrating new hardware devices into our checkpointing system via a persistent memory (PM) manager. Experimental results on six representative real-world DNN models demonstrate that LightCheck offers more than 10x higher checkpointing frequency with lower runtime overheads than state-of-the-art checkpointing schemes. We have released the open-source code for public use at https://***/LighT-chenml/ ***.
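The key mechanism here is layer-wise asynchronous checkpointing: snapshot each layer's state and hand it to a background writer so training never stalls on I/O. A minimal sketch of that pipeline, assuming an in-memory dict as a stand-in for LightCheck's PM-backed store (all names are illustrative):

```python
# Illustrative sketch of layer-wise asynchronous checkpointing: a
# background writer thread drains a queue of per-layer snapshots while
# the training loop keeps running. The `storage` dict stands in for a
# persistent-memory write path.

import queue
import threading

def checkpoint_async(layers, storage):
    """layers: {name: state dict}; storage: dict acting as persistent store."""
    q = queue.Queue()

    def writer():
        while True:
            item = q.get()
            if item is None:        # sentinel: all layers flushed
                break
            name, state = item
            storage[name] = state   # stand-in for a durable PM write

    t = threading.Thread(target=writer)
    t.start()
    for name, state in layers.items():
        # Snapshot one layer at a time, then immediately return control
        # to the caller; the copy decouples training updates from writes.
        q.put((name, dict(state)))
    q.put(None)
    t.join()
```

In a real system the snapshot and the durable write would overlap with the next training iterations; here `t.join()` only makes the sketch deterministic.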
Convolutional neural network (CNN) and Transformer architectures have been extensively applied in the domain of remote sensing image super-resolution. However, to achieve optimal performance, many existing methods are...
In the realm of cybersecurity for cyber-physical systems (CPS), the need for robust and efficient intrusion detection mechanisms has become increasingly critical. The existing CPS had limited processing capabilities o...
ISBN: (Print) 9781713871088
Recent advances in distributed optimization and learning have shown that communication compression is one of the most effective means of reducing communication. While there have been many results on convergence rates with compressed communication, a lower bound has still been missing. Analyses of algorithms with communication compression have identified two abstract properties that guarantee convergence: the unbiased property and the contractive property. Either can be applied unidirectionally (compressing messages from worker to server) or bidirectionally. In the smooth non-convex stochastic regime, this paper establishes a lower bound for distributed algorithms using either unbiased or contractive compressors, whether unidirectionally or bidirectionally. To close the gap between this lower bound and the best existing upper bound, we further propose an algorithm, NEOLITHIC, that almost reaches our lower bound (up to a logarithmic factor) under mild conditions. Our results also show that using contractive compressors bidirectionally can yield iterative methods that converge as fast as those using unbiased compressors unidirectionally. We report experimental results that validate our findings.
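The two compressor classes the paper analyzes have standard textbook instances: rand-k (unbiased after rescaling, E[C(x)] = x) and top-k (contractive, the error shrinks relative to ||x||). A minimal sketch of both, not tied to NEOLITHIC's specific setup:

```python
# Illustrative sketch of the two standard compressor families:
# rand-k is unbiased (keep k random coordinates, rescale by d/k so the
# expectation equals x); top-k is contractive (keep the k largest-
# magnitude coordinates, so the residual norm provably shrinks).

import random

def rand_k(x, k):
    """Unbiased sparsifier: k random coordinates, scaled by d/k."""
    d = len(x)
    out = [0.0] * d
    for i in random.sample(range(d), k):
        out[i] = x[i] * d / k
    return out

def top_k(x, k):
    """Contractive sparsifier: the k largest-magnitude coordinates."""
    idx = sorted(range(len(x)), key=lambda i: -abs(x[i]))[:k]
    out = [0.0] * len(x)
    for i in idx:
        out[i] = x[i]
    return out
```

In a distributed loop, each worker would apply one of these to its gradient before sending it to the server (unidirectional), and the server could compress the aggregated update on the way back (bidirectional).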
Anomaly detection is a critical aspect of industrial production processes. Most self-supervised training systems utilise convolutional neural networks (CNNs) for feature extraction. However, these methods often ...
The Internet of Things (IoT) revolutionizes technology interaction by enabling seamless connectivity and automation, but it introduces significant security challenges. A distributed ledger technology (DLT) called IOTA...
Controlling the temperature of molten steel in the ladle furnace (LF) refining process is one of the main tasks in ensuring that the steelmaking-continuous casting process runs smoothly. In this work, a hybrid model based on metallurgical mechanism, isolation forest (IF), zero-phase component analysis whitening (ZCA whitening), and a deep neural network (DNN) was established to predict the temperature of molten steel in the LF refining process. The metallurgical mechanism, Pearson correlation coefficient, ZCA whitening, IF, and t-distributed stochastic neighbor embedding (t-SNE) were used, respectively, to obtain the main factors affecting the temperature, analyze the correlation between two random variables, eliminate the correlation among the input variables, remove abnormal data from the original datasets, and visualize high-dimensional data. The single-machine-learning (ML) models, ZCA-ML models, and the IF-ZCA-DNN model were comparatively examined by evaluating the coefficient of determination (R^2), root-mean-square error (RMSE), mean absolute error (MAE), and hit ratio. The optimal structure of the IF-ZCA-DNN model had 4 hidden layers, 45 neurons per hidden layer, a learning rate of 0.03, a regularization coefficient of 2 × 10^(-4), a batch size of 128, the leaky rectified linear unit activation function, and mini-batch stochastic gradient descent with momentum as the optimization algorithm. The R^2, RMSE, and MAE of the IF-ZCA-DNN model were 0.916, 2.827, and 2.048, respectively. Meanwhile, the prediction hit ratios of the IF-ZCA-DNN model within the error ranges of [-3, 3], [-5, 5], and [-10, 10] were 77.9, 92.3, and 99.6 pct, respectively. This study will be beneficial in realizing precise control of the temperature of molten steel in the LF refining process.
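The ZCA whitening step above decorrelates the input variables before they reach the DNN. A minimal NumPy sketch of standard ZCA whitening (the function name and the `eps` stabilizer value are illustrative; the paper's exact preprocessing pipeline may differ):

```python
# Illustrative sketch of ZCA whitening: decorrelate input variables so
# each has unit variance and zero cross-correlation, while staying as
# close as possible to the original data (unlike plain PCA whitening).

import numpy as np

def zca_whiten(X, eps=1e-5):
    """Rows are samples, columns are input variables."""
    Xc = X - X.mean(axis=0)               # center each variable
    cov = np.cov(Xc, rowvar=False)        # covariance of the inputs
    U, S, _ = np.linalg.svd(cov)
    # ZCA transform: rotate to principal axes, rescale, rotate back
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return Xc @ W
```

After whitening, the sample covariance of the transformed data is approximately the identity matrix, so correlated process variables (e.g. two temperature-related inputs) no longer feed redundant information to the network.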