ISBN:
(Print) 9798350313062; 9798350313079
Distributed tracing plays a vital role in microservice infrastructure, and learning-based trace analysis has been utilized to detect anomalies within such systems. However, existing learning-based approaches to trace-based anomaly detection face certain limitations. Some assume that trace patterns can be learned solely from normal executions, while others depend on anomaly injection to generate labeled traces categorized as normal or anomalous. In practice, however, anomalies may also occur during nominally normal executions. Moreover, a wide variety of anomalies may occur in practice, which cannot be captured through anomaly injection alone. To address these issues, we propose a Trace-Driven Anomaly Detection (TDAD) approach based on a Span Causal Graph (SCG) representation, which trains a model using a Graph Neural Network (GNN) and Positive and Unlabeled (PU) learning. This technique allows the model parameters to be optimized by estimating the underlying data distribution. As a result, TDAD can be effectively trained using a small number of labeled anomalous traces along with a relatively large number of unlabeled traces. Our evaluation reveals that TDAD not only outperforms existing unsupervised trace-based anomaly detection methods by 11.9% in terms of F1-score, but is also 12x faster than a supervised learning-based benchmark in terms of detection time.
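The abstract does not spell out the PU objective, but a common choice when only a few labeled positives (anomalous traces) and many unlabeled traces are available is the non-negative PU risk estimator. The sketch below is a generic nnPU loss in PyTorch, given for illustration only; it is not necessarily TDAD's exact objective. The GNN encoder over the Span Causal Graph and the class prior value are hypothetical.

```python
# Hedged sketch: a generic non-negative PU (nnPU) risk estimator, assumed as an
# illustration of PU learning; not necessarily the exact objective used by TDAD.
import torch
import torch.nn.functional as F

def nnpu_risk(pos_logits, unl_logits, prior):
    """pos_logits: anomaly logits for labeled anomalous traces
    unl_logits: anomaly logits for unlabeled traces
    prior:      assumed fraction of anomalous traces among unlabeled data"""
    # softplus(-z) is the logistic loss for label +1, softplus(z) for label -1
    risk_pos      = F.softplus(-pos_logits).mean()   # positives scored as positive
    risk_pos_as_n = F.softplus(pos_logits).mean()    # positives scored as negative
    risk_unl_as_n = F.softplus(unl_logits).mean()    # unlabeled scored as negative

    # Estimated risk of the (unobserved) negative class; clamp at zero ("non-negative")
    neg_risk = risk_unl_as_n - prior * risk_pos_as_n
    return prior * risk_pos + torch.clamp(neg_risk, min=0.0)

# Usage (hypothetical GNN encoder producing one anomaly logit per trace graph):
# loss = nnpu_risk(gnn(pos_graphs), gnn(unl_graphs), prior=0.05); loss.backward()
```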
ISBN:
(Digital) 9781728175683
ISBN:
(Print) 9781728175683
Parallel programming models (e.g., OpenMP) are increasingly used to improve the performance of real-time applications on modern processors. Nevertheless, these processors have complex architectures, which makes their timing behavior very difficult to understand. The main limitation of most existing works is that they apply static timing analysis to simpler models, or measurement-based analysis on traditional platforms (e.g., single core) or only to sequential algorithms. How to efficiently configure the allocation of a parallel program onto the processor's computing units is still an open challenge. This paper studies the problem of performing timing analysis on complex multi-core platforms, proposing a methodology to understand an application's timing behavior and guide the configuration of the platform. As an example, the paper uses an OpenMP-based program of the Heat benchmark on an NVIDIA Jetson AGX Xavier. The main objectives are to analyze the execution time of OpenMP tasks, specify the best configuration of OpenMP directives, identify critical tasks, and discuss the predictability of the system/application. A Linux-perf-based measurement tool, which has been extended by our team, is applied to measure each task across multiple executions in terms of total CPU cycles, the number of cache accesses, and the number of cache misses at different cache levels, including L1, L2 and L3. The evaluation uses the performance metrics measured by our tool to study the predictability of the system/application.
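The extended perf tooling itself is not described here; the following is a minimal sketch of how per-run cycle and cache counters could be collected for an OpenMP binary with stock Linux `perf stat`. The binary name, thread count, repeat count, and event list are assumptions; the events actually exposed on the Jetson AGX Xavier's cores may differ.

```python
# Minimal sketch, assuming stock `perf stat` and a pre-built OpenMP binary.
import csv, io, os, subprocess

EVENTS = "cycles,cache-references,cache-misses,L1-dcache-load-misses"

def measure(binary, runs=30, threads=8):
    samples = []
    for _ in range(runs):
        # -x, switches perf stat to CSV output (written to stderr)
        proc = subprocess.run(
            ["perf", "stat", "-x", ",", "-e", EVENTS, binary],
            env={**os.environ, "OMP_NUM_THREADS": str(threads)},
            capture_output=True, text=True, check=True,
        )
        counts = {}
        for row in csv.reader(io.StringIO(proc.stderr)):
            # CSV rows look like: <count>,<unit>,<event>,...
            if len(row) >= 3 and row[0].strip().isdigit():
                counts[row[2]] = int(row[0])
        samples.append(counts)
    return samples

# Example: stats = measure("./heat_openmp")   # hypothetical Heat benchmark binary
```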
Crowdsourcing of IoT devices is giving a new dimension to Internet of Things (IoT) applications. The crowd, equipped with geolocated mobile devices, enhances mobile crowdsourcing, which in turn contributes to new and...
Ornamental plants are primarily valued for their aesthetic appeal, fragrance, or spatial shaping. They not only make indoor and outdoor spaces visually more appealing, but also contribute to improved air quality, creat...
This chapter mainly focuses on the rapid growth of data-driven systems, concentrating on the integration of Artificial Intelligence and the Internet of Things. The chapter starts out by introducing the challenges ...
Authors:
Harikrishnan, P.R.; Periyasamy, P.
Department of Computer Science, Tamilnadu, Tirupur, India
Department of Computer Science and Applications, Tamilnadu, Tirupur, India
As the ubiquity of Android devices continues to grow, the importance of safeguarding user data and privacy within the Android app ecosystem becomes increasingly critical. Central to this security framework is the perm...
With recent advances in technology, many-core systems have become increasingly common in high-performance computing applications, such as embedded systems and artificial intelligence. To fully utilize the processing p...
ODD, short for Operational Design Domain, is fundamental to automated driving technology R&D. A reasonable and well-defined ODD is the prerequisite for the realization of automated driving functional safety. In this p...
ISBN:
(Print) 9798350309799
The main task of a social robot is to interact with humans through spoken natural language. This implies that it must be able to understand the intent of the user and the involved entities. Recently, different solutions have been proposed to deal with the Natural Language Understanding (NLU) task. Extremely accurate results have been obtained by architectures based on transformers, but they require high computational resources to work in real-time. Unfortunately, these resources are not available on the embedded systems mounted on board the robot. For these reasons, in this paper we experimentally evaluate the most promising transformers for NLU over the popular ATIS and SNIPS datasets and measure their inference time on the NVIDIA Jetson Xavier NX embedded system. The experimental analysis demonstrates that the Albert model can obtain performance comparable to the popular BERT architecture (just a 2% drop on entity recognition), while gaining a speed-up of more than 3x. Thanks to the insights coming out of our analysis, we finally developed a real system for restaurant search running the model on an NVIDIA Jetson Xavier NX mounted on board a social robot, obtaining positive user feedback on its effectiveness and responsiveness.
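As a rough indication of how such a latency comparison can be set up, the sketch below times ALBERT intent classification with the Hugging Face Transformers API. The checkpoint, label count, and example utterance are placeholders; the paper's fine-tuned ATIS/SNIPS models are not assumed to be available.

```python
# Hedged sketch: timing ALBERT intent classification; the checkpoint and label
# count are illustrative, not the paper's fine-tuned models.
import time
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForSequenceClassification.from_pretrained(
    "albert-base-v2", num_labels=7  # e.g., the 7 SNIPS intents
).eval()

inputs = tokenizer("book a table for two at an italian restaurant nearby",
                   return_tensors="pt")

with torch.no_grad():
    model(**inputs)                      # warm-up pass
    start = time.perf_counter()
    for _ in range(100):
        logits = model(**inputs).logits  # repeated forward passes
    latency_ms = (time.perf_counter() - start) / 100 * 1000

print(f"intent id: {logits.argmax(-1).item()}, ~{latency_ms:.1f} ms per query")
```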
The increasing concern for safety in a variety of work and leisure environments, and the associated risk of head injuries, mandates the production of robust and efficient helmet detection systems. This study offers a ...