Several development methodologies (and abstractions) are available to simplify the development of real-time and autonomous software systems. A widely adopted approach is MAS engineering, for which several abstractions were...
In ad hoc networks the topology is changeable: the network scale becomes larger, which leads to difficulty in network management and a lack of efficiency; the diversity of network services (text, pictures, audio, video, ...
Nowadays, in the Artificial Intelligence (AI) community, developing an efficient artificial sensory system is important for efficient human-machine interaction. The existing models for perceptual intelligence ...
The paper presents a novel method to compute resonances in a power system network using a discrete state-space approach based on bounce diagrams. The approach models the distributed parameter line as a lattice diagram, considering the reflection/transmission of traveling waves. This model is then represented as a discrete-time linear time-invariant system, enabling the computation of resonances in the network. Two applications are demonstrated: fault location, and load placement to improve damping in a black start scenario. The method is particularly useful for understanding and controlling resonances in electric grids. Theoretical analysis and numerical examples illustrate the behavior of resonances as resistive loads are connected along the transmission line, providing insights for fault location and load allocation strategies. Copyright (c) 2024 The Authors.
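As a rough illustration of the idea (not the authors' implementation), the sketch below builds a discrete-time state matrix for a single lossless line whose forward and backward traveling waves shift one cell per step and reflect at the two terminations; the eigenvalues of that matrix give the resonant frequencies and per-step damping. The cell count, travel time, and reflection coefficients are assumed values.

```python
# Minimal sketch of a bounce-diagram (lattice) model as a discrete-time
# LTI system. One lossless line, N spatial cells; forward/backward wave
# amplitudes shift one cell per time step dt and reflect at both ends.
import numpy as np

N = 20          # spatial cells along the line (assumed)
dt = 1e-6       # travel time per cell in seconds (assumed)
rho_s = -0.8    # reflection coefficient at the sending end (assumed)
rho_L = 0.9     # reflection coefficient at the load end (assumed)

# State x = [f_1..f_N, b_1..b_N]: forward and backward wave amplitudes.
A = np.zeros((2 * N, 2 * N))
for i in range(N - 1):
    A[i + 1, i] = 1.0          # forward wave shifts toward the load
    A[N + i, N + i + 1] = 1.0  # backward wave shifts toward the source
A[2 * N - 1, N - 1] = rho_L    # load-end reflection: b_N <- rho_L * f_N
A[0, N] = rho_s                # source-end reflection: f_1 <- rho_s * b_1

# Eigenvalues z of the discrete-time system give the resonances:
# |z| is the per-step damping, angle(z)/(2*pi*dt) the frequency in Hz.
z = np.linalg.eigvals(A)
freqs = np.angle(z) / (2 * np.pi * dt)
damping = np.abs(z)
for f, d in sorted(zip(freqs, damping)):
    if f > 0:
        print(f"resonance ~ {f / 1e3:8.2f} kHz, per-step magnitude {d:.3f}")
```

Connecting a resistive load along the line would add a partial reflection/transmission node inside the lattice, shifting the eigenvalues and hence the damping, which is the effect the paper exploits for load allocation.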
ISBN (Print): 9798400704130
Transformer-based large language models have recently shown remarkable performance, but their very large parameter counts require efficient training, which is commonly realized by utilizing both data- and model-parallel deep learning on a GPU cluster. To minimize the training time, the optimal degrees of data and model parallelism and the optimal model partitioning must be found. When heterogeneous GPU clusters are used to exploit as many GPUs as possible, the search becomes even more challenging. In this work, we propose a framework named FASOP that automatically and rapidly finds the (near-)optimal degrees of parallelism and the model partitioning of Transformer-based models on heterogeneous GPU clusters, with an accurate estimation of pipelining latency and communications. Moreover, it can search for optimal cluster configurations that minimize the training time while satisfying a cost constraint on the GPU clusters. The proposed model partitioning algorithm in FASOP is three orders of magnitude faster than the Dynamic Programming approach in the state of the art for GPT-2 1.5B on a mixed set of 32 GPUs with A100 and A10, taking a few seconds instead of several hours. FASOP shows only 8.7% mean absolute error in training time estimation for GPT-2 1.5B. With a fast yet accurate search, FASOP achieved up to 1.37x speedup compared to Megatron-LM.
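To make the search problem concrete, here is a minimal sketch of the kind of configuration search FASOP automates: enumerate (data, tensor, pipeline) parallelism degrees for a fixed GPU count and pick the combination minimizing a simple pipeline cost model. The cost constants and the model itself are illustrative assumptions, not FASOP's actual latency estimator (which also handles heterogeneous GPUs and model partitioning).

```python
# Brute-force search over parallelism degrees under a toy cost model.
from itertools import product

NUM_GPUS = 32
NUM_LAYERS = 48          # e.g. GPT-2 1.5B has 48 transformer layers
MICRO_BATCHES = 64       # assumed number of microbatches per step

def step_time(dp, tp, pp):
    """Crude per-iteration time: per-layer compute shrinks with tensor
    parallelism, pipelining adds (pp - 1) fill/drain stage latencies, and
    data parallelism adds a gradient all-reduce term (constants assumed)."""
    layer_ms = 2.0 / tp                      # per-layer compute, ms
    stage_ms = (NUM_LAYERS / pp) * layer_ms  # one pipeline stage
    pipeline_ms = (MICRO_BATCHES + pp - 1) * stage_ms
    allreduce_ms = 5.0 * (dp - 1) / dp       # ring all-reduce factor
    return pipeline_ms + allreduce_ms

best = min(
    (cfg for cfg in product([1, 2, 4, 8, 16, 32], repeat=3)
     if cfg[0] * cfg[1] * cfg[2] == NUM_GPUS and NUM_LAYERS % cfg[2] == 0),
    key=lambda cfg: step_time(*cfg),
)
print("best (dp, tp, pp):", best, f"-> {step_time(*best):.1f} ms/step")
```

Exhaustive enumeration like this explodes once per-layer partitioning and heterogeneous device types enter the search space, which is why FASOP's fast partitioning algorithm matters.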
Cell-free massive multi-input multi-output (MIMO) has recently attracted much attention owing to its potential to deliver uniform service quality. However, the adoption of a cell-free architecture raises concern...
ISBN (Print): 9783031653070; 9783031653087
The article examines the issues of monitoring performance in microservice architectures. We explore the problem of forecasting performance indicators as well as fault propagation in such systems, which are distributed and have independent service deployments. The paper addresses these issues by proposing a novel approach that uses multimetric time series data to establish causal relationships between microservices and build graph neural networks based on the revealed system dependencies. The method's goal is to proactively forecast performance indicators and fault propagation in order to assure the resilience and reliability of microservices. Various graph neural network architectures are discussed. The best-performing one, DCRNN, is based on a diffusion convolutional recurrent neural network and predicts well on data both with and without anomalies.
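The following sketch illustrates the first step of such a pipeline: inferring directed dependencies between microservices from their metric time series using a simple lagged-correlation test. The service names, lag, and threshold are assumptions for illustration, and lagged correlation stands in for whatever causal-discovery method the article actually uses; the resulting graph would then parameterize a graph neural network such as DCRNN.

```python
# Infer a microservice dependency graph from synthetic latency series.
import numpy as np

rng = np.random.default_rng(0)
T, lag, threshold = 500, 3, 0.5

# Synthetic latencies: 'gateway' drives 'orders', which drives 'db'.
metrics = {"gateway": rng.normal(size=T)}
metrics["orders"] = 0.8 * np.roll(metrics["gateway"], lag) + 0.2 * rng.normal(size=T)
metrics["db"] = 0.7 * np.roll(metrics["orders"], lag) + 0.3 * rng.normal(size=T)

def lagged_corr(x, y, k):
    """Correlation between x[t-k] and y[t]: a high value suggests x -> y."""
    return np.corrcoef(x[:-k], y[k:])[0, 1]

# Directed edge a -> b when past values of a correlate with the present of b.
edges = [
    (a, b, round(lagged_corr(metrics[a], metrics[b], lag), 2))
    for a in metrics for b in metrics
    if a != b and abs(lagged_corr(metrics[a], metrics[b], lag)) > threshold
]
print("inferred dependency edges:", edges)
# This adjacency structure would then define the graph over which a
# DCRNN-style model diffuses information to forecast each service's metrics.
```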
This research introduces a novel anomaly detection framework for IoT-based Smart Grid Cybersecurity systems. Leveraging autoencoders, LSTM networks, GANs, SOMs, and transfer learning, our approach achieves superior p...
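As a hedged illustration of the autoencoder component named above (not the paper's configuration), the sketch below trains a small autoencoder on normal sensor windows only and flags readings whose reconstruction error exceeds a percentile threshold; the feature dimensionality, architecture, and threshold rule are assumptions.

```python
# Autoencoder-based anomaly detection on synthetic smart-grid features.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(1000, 16)).astype(np.float32)  # normal traffic
attack = rng.normal(4.0, 1.0, size=(50, 16)).astype(np.float32)    # anomalous readings

model = nn.Sequential(nn.Linear(16, 4), nn.ReLU(), nn.Linear(4, 16))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.from_numpy(normal)
for _ in range(200):                       # train to reconstruct normal data only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()

with torch.no_grad():
    err_normal = ((model(x) - x) ** 2).mean(dim=1)
    xa = torch.from_numpy(attack)
    err_attack = ((model(xa) - xa) ** 2).mean(dim=1)

# Flag anything reconstructing worse than the 99th percentile of normal error.
threshold = torch.quantile(err_normal, 0.99).item()
flag_rate = (err_attack > threshold).float().mean().item()
print(f"threshold={threshold:.3f}, flagged {flag_rate:.0%} of attack windows")
```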
As networks continue to expand, they become increasingly heterogeneous, accommodating a diverse range of devices, from sensors and IoT to clients and servers. This ecosystem is highly distributed, with network facilit...
This study explores the application of neural networks in optimizing design parameters for an all-optical NOT gate using photonic crystals. It focuses on the unique properties of photonic crystals, highlighting their ...