Traditional transient angle stability analysis methods do not fully consider the spatial characteristics of the network topology and the temporal characteristics of the time-series data. Hence, a data-driven method is proposed in this study, combining a graph convolutional network and a long short-term memory network (GCN-LSTM) to analyze transient power angle stability by exploring the spatiotemporal disturbance characteristics of future power systems with high penetration of renewable energy sources (wind and solar energy) and power electronics. The key time-series electrical state quantities are considered as the initial input feature quantities and normalized using the Z-score, whereas the network adjacency matrix is constructed according to the system network topology. The normalized feature quantities and the network adjacency matrix are used as the inputs of the GCN to obtain the spatial features, reflecting changes in the network topology. Then, the spatial features are input into the LSTM network to obtain the temporal features, reflecting dynamic changes in the transient power angle of the system. Finally, the spatiotemporal features are fused through a fully connected network to analyze the transient power angle stability of future power systems, and the softmax activation and cross-entropy loss functions are used to predict the stability of the system. The proposed transient power angle stability assessment method is tested on a 500 kV AC-DC practical power system, and the simulation results show that the proposed method can effectively mine the spatiotemporal disturbance characteristics of power systems. Moreover, the proposed model has higher accuracy, a higher recall rate, and shorter training and testing times than traditional transient power angle stability algorithms.
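To make the GCN-LSTM pipeline described above concrete, the following is a minimal sketch assuming PyTorch; the layer sizes, tensor shapes, and the single-layer GCN are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), with A_hat the normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # x: (batch, time, nodes, in_dim); a_hat: (nodes, nodes)
        return torch.relu(torch.einsum("ij,btjf->btif", a_hat, self.linear(x)))

class GCNLSTM(nn.Module):
    """Spatial features via GCN, temporal features via LSTM, fused by a fully connected head."""
    def __init__(self, n_nodes, n_feats, gcn_dim=32, lstm_dim=64, n_classes=2):
        super().__init__()
        self.gcn = GCNLayer(n_feats, gcn_dim)
        self.lstm = nn.LSTM(n_nodes * gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, n_classes)

    def forward(self, x, a_hat):
        b, t, n, f = x.shape
        h = self.gcn(x, a_hat)          # spatial features per time step
        h = h.reshape(b, t, -1)         # flatten the node dimension
        _, (h_n, _) = self.lstm(h)      # temporal features of the sequence
        return self.head(h_n[-1])       # class logits (stable / unstable)

# Z-score normalization of the raw electrical state quantities (illustrative):
# x = (x - x.mean(dim=(0, 1), keepdim=True)) / x.std(dim=(0, 1), keepdim=True)
```

Training the returned logits with nn.CrossEntropyLoss, which applies softmax internally, corresponds to the softmax-plus-cross-entropy setup mentioned in the abstract.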
The paper presents a study of the effectiveness of software from the point of view of minimizing the energy consumption of microprocessor devices. In this case, the programming of the microcontroller in various progra...
Nowadays, GPU platforms have gained wide importance in applications that require high processing power. Unfortunately, the advanced semiconductor technologies used for their manufacturing are prone to different types of faults. Hence, solutions are required to support exploring the fault resilience of different architectures. Based on this motivation, this work presents an environment dedicated to the analysis of the impact of permanent faults on GPU platforms. This environment is based on GPGPU-Sim, with the objective of exploiting the configuration features of this tool and thus analyzing the effects of faults when changing the target architecture. To validate the environment and show its usability, a fault campaign was carried out on three different GPU architectures (Kepler, Volta, and Turing). In addition, each GPU was configured with an arbitrary number of parallel processing cores (or SMs). Three representative applications (Vector Add, Scalar Product, and Matrix Multiply) were executed on each GPU, and the behavior of each architecture in the presence of permanent faults in the functional units (i.e., the integer and floating-point units) was analyzed. This fault campaign shows the usability of the environment and demonstrates its potential use to support decisions on the best architectural parameters for a given application.
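As an illustration only (not part of GPGPU-Sim or the paper's tooling), a campaign driver for such an environment could sweep architectures, SM counts, workloads, and fault sites as sketched below; the wrapper script name, command-line options, and fault descriptors are hypothetical.

```python
import itertools
import subprocess

ARCHS = ["kepler", "volta", "turing"]           # target architectures from the paper
SM_COUNTS = [4, 8, 16]                          # illustrative numbers of SMs
WORKLOADS = ["vector_add", "scalar_product", "matrix_multiply"]

def run_one(arch, sms, workload, fault_site):
    """Run one simulation with a single permanent fault injected (hypothetical wrapper)."""
    cmd = ["./run_gpgpusim_fi.sh",              # assumed wrapper script, not a real GPGPU-Sim CLI
           f"--arch={arch}", f"--num-sm={sms}",
           f"--app={workload}", f"--fault={fault_site}"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode, result.stdout

def classify(golden_out, faulty_out, rc):
    """Coarse outcome classes commonly used in fault campaigns."""
    if rc != 0:
        return "crash_or_hang"
    return "masked" if faulty_out == golden_out else "sdc"

outcomes = {}
for arch, sms, wl in itertools.product(ARCHS, SM_COUNTS, WORKLOADS):
    _, golden = run_one(arch, sms, wl, fault_site="none")       # fault-free reference run
    for site in ["int_unit_bit0", "fp_unit_bit5"]:              # placeholder fault sites
        rc_f, out_f = run_one(arch, sms, wl, site)
        outcomes[(arch, sms, wl, site)] = classify(golden, out_f, rc_f)
```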
This work proposes a method to evaluate the effects of transition delay faults (TDFs) in GPUs. The method takes advantage of low-level (i.e., RT- and gate-level) descriptions of a GPU to evaluate the effects of TDFs, thus paving the way to modeling them as errors at the instruction level, which can contribute to the resilience evaluation of large and complex applications. For this purpose, the paper describes a setup that efficiently simulates transition delay faults. The results allow us to compare their effects with stuck-at faults (SAFs) and perform an error classification correlating these faults as instruction-level errors. We resort to an open-source model of a GPU (FlexGripPlus) and a set of workloads for the evaluation. The experimental results show that, depending on the application code style, TDFs can compromise the operation of an application from 1.3 to 11.63 times less often than SAFs. Moreover, for all the analyzed applications, a considerable percentage of sites in the integer (5.4% to 51.7%), floating-point (0.9% to 2.4%), and special function (17.0% to 35.6%) units can become critical if affected by a SAF or TDF. Finally, a correlation between the impact of both fault models and the instructions executed by the applications reveals that, for all units, SAFs in the functional units are more prone (from 45.6% to 60.4%) to propagate errors to the software level than TDFs (from 17.9% to 58.8%).
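The kind of propagation statistics reported above can be computed from campaign logs with a small script like the following sketch; the record layout, labels, and toy data are assumptions, not the paper's tooling.

```python
# Each record is (fault_model, unit, outcome); values here are placeholders.
records = [
    ("SAF", "INT", "sdc"), ("SAF", "INT", "masked"),
    ("TDF", "INT", "masked"), ("TDF", "SFU", "sdc"),
    ("SAF", "FP", "crash"), ("TDF", "FP", "masked"),
]

def propagation_rate(records, fault_model, unit):
    """Share of injected faults of a given model and unit that propagate to software-visible errors."""
    injected = [r for r in records if r[0] == fault_model and r[1] == unit]
    visible = [r for r in injected if r[2] in ("sdc", "crash")]
    return len(visible) / len(injected) if injected else 0.0

for unit in ("INT", "FP", "SFU"):
    for model in ("SAF", "TDF"):
        print(f"{model} in {unit}: {propagation_rate(records, model, unit):.1%} propagate")
```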
ISBN: 9798350365559 (digital), 9798350365566 (print)
Neural Networks (NNs) are increasingly adopted in many domains. Given the considerable computational expense required only during inference, it has been proposed that their architecture can be split into two sections (Head and Tail) in a paradigm referred to as Split Computing (SC). The Head section executes the first part of a NN on a mobile device, while a Tail model computes the missing part of the NN on a server. SC allows a designer to find the optimal configuration, trading off computing load, performance, latency, and power. However, the adoption of SC approaches in safety-critical domains raises strong reliability concerns. This paper introduces an environment that allows estimating the effects of permanent faults affecting the hardware supporting the execution of the head model, which is the part most prone to faults. Results are reported for different NNs and different configurations, demonstrating the effectiveness of the method and paving the way towards the deployment of more resilient SC systems. Our results show that the considered split configurations of all models under test are up to 10% more critically vulnerable to faults than the corresponding original architectures.
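A minimal sketch of the Head/Tail split, assuming PyTorch and torchvision; the backbone (ResNet-18) and split point are illustrative and not the configurations evaluated in the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)          # backbone chosen only for illustration
layers = list(model.children())

# Head: early layers executed on the mobile device (the part whose hardware
# faults the environment above targets).
head = nn.Sequential(*layers[:5])
# Tail: remaining layers executed on the server side.
tail = nn.Sequential(*layers[5:-1], nn.Flatten(), layers[-1])

x = torch.randn(1, 3, 224, 224)
intermediate = head(x)                  # tensor transmitted from device to server
logits = tail(intermediate)             # server completes the inference
```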
The increasing transistor density and the data-intensive nature of modern applications impede the development of exhaustive validations and reliability estimations in feasible times. Traditional evaluation strategies use simulation-based Fault Injection (FI), demanding considerable computational power. Thus, computation on High-Performance Computing (HPC) systems has become relevant. However, most FI workloads follow sequential behaviors, limiting their effectiveness and efficiency on HPC systems. This work proposes a method to analyze FI workloads and exploit HPC systems. The technique uses container managers to boost the throughput and performance of the sequential FI workloads used in dependability analyses. Our results demonstrate that the performance of logic-simulation workloads can improve by up to 600 times.
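The core observation is that individual fault-injection simulations are independent and therefore embarrassingly parallel. The sketch below illustrates the idea with a process pool dispatching containerized simulation jobs; the container image name, simulator command, and fault-list format are assumptions, not the paper's implementation.

```python
from concurrent.futures import ProcessPoolExecutor
import subprocess

FAULT_LIST = [f"fault_{i}" for i in range(1000)]   # placeholder fault descriptors

def simulate(fault_id):
    """Launch one containerized logic-simulation job for a single fault (hypothetical image and command)."""
    cmd = ["docker", "run", "--rm", "fi-logic-sim:latest",
           "run_simulation", "--fault", fault_id]
    rc = subprocess.run(cmd, capture_output=True).returncode
    return fault_id, rc

if __name__ == "__main__":
    # Many sequential simulations run concurrently, raising node utilization.
    with ProcessPoolExecutor(max_workers=32) as pool:
        results = dict(pool.map(simulate, FAULT_LIST))
```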
Unmanned aerial vehicle technology is growing rapidly as it finds application in various industries, including military and defense, agriculture, logistics, transportation, healthcare, entertainment, and many others. One of the fastest-growing industries involving drones is the entertainment industry. More specifically, First Person View (FPV) piloting has become a popular sport that attracts large audiences. In FPV systems, the pilot wears FPV goggles and controls the drone using a controller, and the drone transmits video data to the goggles in real time. Since drones usually fly very quickly, the video quality in terms of resolution, compression artefacts, and end-to-end delay must be maintained. This paper explores the idea of optimizing the Motion Estimation (ME) stage of existing High Efficiency Video Coding (HEVC) encoders by utilizing user input from the controller. For example, if the drone is directed to hover to the left, then it makes sense to search for the most similar blocks in the previous video frame only on the left side of the reference block. For the purpose of this research, an HEVC video coder was customized to receive additional input besides the video data: the user input from the controller, or in other words, the drone movement directions. We call this ME algorithm User Input Search (UIS). The UIS algorithm is compared with standard ME algorithms and its efficiency is tested.
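A minimal sketch of the UIS idea follows: the search window of a block-matching ME step is restricted to the side indicated by the controller input. The block size, search range, and SAD cost are assumptions; this is not the customized HEVC encoder from the paper.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two blocks."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def uis_search(ref_frame, cur_block, x, y, direction, rng=16, block=16):
    """Search only on the side of the reference block indicated by the user input."""
    # Following the idea above: a "left" command restricts candidates to
    # non-positive horizontal offsets, roughly halving the positions to evaluate.
    dx_range = range(-rng, 1) if direction == "left" else range(0, rng + 1)
    best_cost, best_mv = float("inf"), (0, 0)
    h, w = ref_frame.shape
    for dy in range(-rng, rng + 1):
        for dx in dx_range:
            rx, ry = x + dx, y + dy
            if 0 <= rx <= w - block and 0 <= ry <= h - block:
                cand = ref_frame[ry:ry + block, rx:rx + block]
                cost = sad(cur_block, cand)
                if cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost
```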
The concept of 'smart', the concept of smart technology, and its main elements are considered. An analysis of the factors influencing the formation and development of the concept of smart technology is provide...
Due to the lack of target bounding-box annotations, most weakly supervised object detection methods transform the object detection problem into a classification problem over candidate regions, making it easy...
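A minimal sketch of the background idea above (all names and sizes are assumptions, not the paper's method): with only image-level labels, candidate regions are scored by a classifier and aggregated, here by max-pooling, a common multiple-instance-learning choice, into an image-level prediction that the weak labels can supervise.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionClassifier(nn.Module):
    def __init__(self, feat_dim=256, n_classes=20):
        super().__init__()
        self.cls = nn.Linear(feat_dim, n_classes)

    def forward(self, region_feats):
        # region_feats: (n_regions, feat_dim), e.g. pooled from region proposals
        region_probs = self.cls(region_feats).softmax(dim=1)   # per-region class scores
        image_probs = region_probs.max(dim=0).values            # MIL aggregation
        return region_probs, image_probs

model = RegionClassifier()
region_feats = torch.randn(100, 256)        # features of 100 candidate regions
image_label = torch.zeros(20)
image_label[3] = 1.0                         # image-level label only, no boxes
_, image_probs = model(region_feats)
loss = F.binary_cross_entropy(image_probs, image_label)
```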
A recent trend dictating the evolution of management and orchestration of computer networks is their softwarization and virtualization, which have drastically simplified the deployment and real-time reconfiguration of network functions, allowing them to continuously adapt and deal with dynamic demands in an automated way. In parallel, recent management and orchestration approaches for softwarized networks employ Artificial Intelligence (AI) and Machine Learning (ML) to further reduce reaction time and improve the accuracy of decisions, to the point that network operations can be automated to realize autonomous driving networks. However, while automating operations can improve the overall system (it is acknowledged that 70% of network faults are caused by manual errors), AI/ML methods are not panaceas, and we are still far from having a fully operating and efficient automated architecture. In this dissertation, we present a novel class of software network solutions that share the goal of enabling intelligent and autonomous computer networks, exploring how to exploit the power of AI/ML to handle the growing complexity of critical systems. We start with a new network management scheme for adaptive routing and autonomous scaling of virtual network resources. Then, acting on the hosts, we propose to adjust the TCP congestion control with an ML-based solution whose goal is to select the proper congestion window by learning from end-to-end features and (when available) network signals. We believe that the proposed solutions, and their combination, can lay the foundation for automated systems that better suit modern edge environments and cellular networks by providing unprecedented flexibility and adaptation to even unseen and unknown network conditions.
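As a toy illustration of the congestion-control idea, not the dissertation's system, a classifier can map end-to-end features to a discrete congestion-window choice; the features, model, training values, and window sizes below are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Training data: end-to-end features observed per interval -> the congestion window
# (in segments) that performed best offline. Values are made up for the sketch.
# Features: [smoothed RTT (ms), loss rate, delivery rate (Mbps)]
X = np.array([[20, 0.00, 95], [20, 0.02, 60], [80, 0.00, 40], [150, 0.05, 10]])
y = np.array([160, 40, 40, 10])

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def select_cwnd(srtt_ms, loss_rate, delivery_mbps):
    """Pick the congestion window predicted to perform best under current conditions."""
    return int(model.predict([[srtt_ms, loss_rate, delivery_mbps]])[0])

print(select_cwnd(25, 0.01, 70))
```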