ISBN (digital): 9798350368741
ISBN (print): 9798350368758
Multivariate time series anomaly detection (MTAD) is challenging due to temporal and feature dependencies. The key to improving detection performance lies in accurately capturing the dependencies between variables within the sliding window and leveraging them effectively. Existing studies rely on domain knowledge to preset the window size, and they overlook the strength of dependencies, computing only their direction from variable similarity. This paper proposes GSLTE, a graph structure learning method for MTAD. GSLTE employs the Fast Fourier Transform to iteratively segment the whole series, selecting the dominant Fourier frequency as the window size for each subsequence within the minimum interval. GSLTE quantifies both the direction and the strength of dependencies using variable-lag transfer entropy, computed via the Dynamic Time Warping method, to learn asymmetric links between variables. Extensive experiments show that GNN-based MTAD methods equipped with GSLTE achieve further gains in anomaly detection performance, outperforming state-of-the-art competitors.
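The abstract does not give GSLTE's exact segmentation procedure; the following is a minimal sketch of the underlying idea of picking a sliding-window size from the dominant Fourier frequency of a (sub)series. The function name and the `min_period` guard are illustrative assumptions, not part of the paper.

```python
import numpy as np

def dominant_period(series, min_period=2):
    """Return the dominant Fourier period of a 1-D series, a common
    data-driven way to choose a sliding-window size (illustrative)."""
    n = len(series)
    # Magnitude spectrum of the de-meaned signal; drop the DC bin.
    spectrum = np.abs(np.fft.rfft(series - np.mean(series)))
    freqs = np.fft.rfftfreq(n)
    spectrum[0] = 0.0
    k = int(np.argmax(spectrum))
    if freqs[k] == 0:
        return min_period
    # Period (in samples) corresponding to the strongest frequency.
    return max(int(round(1.0 / freqs[k])), min_period)

# A signal repeating every 25 samples yields a window size of 25.
t = np.arange(500)
sig = np.sin(2 * np.pi * t / 25)
print(dominant_period(sig))  # → 25
```

In GSLTE this selection would be applied iteratively per subsequence rather than once over the whole series.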
ISBN (digital): 9798350387339
ISBN (print): 9798350387346
With the rapid growth of large language models, cloud computing has become an indispensable component of the AI industry. Cloud service providers (CSPs) are establishing AI data centers to serve AI workloads. In the face of this surging demand for AI computing power, building a connected computing environment across various clouds to form a JointCloud presents an attractive solution. However, scheduling AI tasks across multiple AI data centers within a JointCloud environment poses a significant challenge: how to balance users' demands while ensuring fairness among CSPs in scheduling. Existing research primarily focuses on optimizing scheduling quality, with limited consideration for fairness. Therefore, this paper proposes a Fairness-Aware AI-Workloads Allocation method (F3A), a fair cross-cloud allocation technique for AI tasks. F3A utilizes Point and Token to reflect both the resource status and the historical task allocations of AI data centers, enabling consideration of users' multidimensional demands and facilitating fair task allocation across multiple centers. To better assess scheduling fairness, we also devise a fairness indicator (FI) based on the Gini coefficient to measure the fairness of task allocation. Experimental results demonstrate that F3A consistently maintains FI within 0.1 across various cluster sizes and task quantities, an improvement of 76.45% over the classical round-robin fair scheduling algorithm. F3A exhibits commendable performance in ensuring fairness in task allocation while also proving effective in reducing cost and enhancing user satisfaction.
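The abstract does not define FI beyond saying it is based on the Gini coefficient, so the sketch below only illustrates how a Gini coefficient over per-center task allocations could be computed; the function and the example allocations are hypothetical.

```python
def gini(values):
    """Gini coefficient of non-negative allocations:
    0 means perfectly equal; values near 1 mean highly unequal."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard closed form: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # → 0.0 (tasks spread evenly over 4 centers)
print(gini([0, 0, 0, 40]))     # → 0.75 (one center receives everything)
```

Under this reading, "maintains FI within 0.1" would mean the allocation stays close to the perfectly equal case.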
ISBN (digital): 9798350387339
ISBN (print): 9798350387346
Serverless computing, comprising Function as a Service (FaaS) and Backend as a Service (BaaS), has garnered widespread attention owing to features such as maintenance-free operations, pay-per-use pricing, and automatic scalability. However, practical usage encounters several challenges: 1) The diversity of user applications makes comprehensive performance evaluation difficult, as benchmark and application tests only reflect performance under specific conditions and cannot fully capture users' actual experiences across different serverless platforms. 2) Disparities in performance and cost across serverless platforms make it difficult to achieve optimal performance and cost efficiency through single-cloud deployment, underutilizing the advantages of each platform. 3) Vendor lock-in restricts the migration of user applications and exacerbates dependence on a single cloud provider. To address these challenges, this paper proposes a collaborative mechanism, referred to as DCSA, which integrates FaaS and storage services to achieve automatic cross-cloud deployment of user applications while comprehensively considering both performance and cost. First, we adapt the interfaces of different serverless platforms, effectively reducing the complexity of cross-cloud deployment. Second, we develop cost and latency models for the cross-cloud deployment of chained serverless applications and propose a deployment scheduling algorithm that considers latency and cost simultaneously. Finally, we conduct experiments to evaluate the performance of the proposed algorithm. Results demonstrate that our method can effectively reduce latency (up to 2.3%) and lower costs (up to 9.9%).
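DCSA's actual cost and latency models are not given in the abstract; as a toy illustration of a scheduler that weighs both objectives at once, the sketch below scores each candidate platform by a normalized weighted sum of latency and cost. The platform names, figures, and the `latency_weight` knob are all invented for the example.

```python
def pick_deployment(options, latency_weight=0.5):
    """Choose the platform with the lowest normalized weighted score.
    `options` maps a platform name to (latency_ms, cost_usd)."""
    max_lat = max(l for l, _ in options.values())
    max_cost = max(c for _, c in options.values())

    def score(lc):
        l, c = lc
        # Normalize each objective to [0, 1] before combining.
        return latency_weight * l / max_lat + (1 - latency_weight) * c / max_cost

    return min(options, key=lambda name: score(options[name]))

options = {
    "cloud_a": (120.0, 0.020),  # faster but pricier
    "cloud_b": (180.0, 0.012),  # slower but cheaper
}
print(pick_deployment(options, latency_weight=0.8))  # → cloud_a
```

With a latency-heavy weight the faster platform wins; lowering the weight shifts the choice to the cheaper one. A chained-application model would additionally account for inter-cloud data-transfer terms.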
In CFD, mesh smoothing methods are commonly utilized to refine the mesh quality to achieve high-precision numerical simulations. Specifically, optimization-based smoothing is used for high-quality mesh smoothing, but ...
Discourse structure analysis has been shown to be useful for many artificial intelligence (AI) tasks such as text summarization and text categorization. However, for the Chinese news domain, discourse structure analysis systems remain immature due to the lack of expert-annotated datasets. In this paper, we present CNA, a Chinese news corpus containing 1155 news articles annotated by human experts, covering four domains and four news media sources. Next, we implement several text classification methods as baselines. Experimental results demonstrate that the document-level method achieves better performance, and we further propose a document-level neural network model with multiple sentence features that achieves state-of-the-art performance. Finally, we analyze the content-type distribution of each sentence in CNA and the prediction errors our model made on the test set. The code and dataset will be open-sourced at https://***/gzl98/Chinese_Discourse_Profiling.
As the resolution of satellite and aerial remote sensing images continues to improve, more and more useful data and information can be obtained from them. At the same tim...
ISBN (print): 9798400718779
Deepfake technology poses substantial societal challenges, establishing deepfake detection as an important area of research. However, existing research relies mainly on target deepfake datasets, which limits its generalizability to out-of-distribution tasks to some extent. It also often emphasizes visual modalities while neglecting the complementary information in the auditory data. Autoregressive-based strategies further introduce long-term information interference, constraining detection performance. Consequently, the potential to exploit complementary relations between the visual and auditory modalities and to leverage strongly correlated short-range information remains underexplored for the detection task. To address these challenges, this paper introduces Self-BiSterm, a novel self-supervised learning framework for deepfake detection. First, we propose a bidirectional synchronization distribution modeling mechanism, which calculates inconsistency distributions for video-to-audio and audio-to-video scenarios. This mechanism effectively measures audio-visual inconsistencies, improving the model's generalization in practical applications. Second, to mitigate long-term information distortion, we develop a short-term temporal dependency module to estimate adjacent local receptive fields. This module facilitates the estimation of subsequent distributions by capturing short-term temporal dependencies with high precision. The effectiveness of the proposed Self-BiSterm framework is validated on various benchmarks, demonstrating superior performance compared to existing methods.
In a Loss of Coolant Accident (LOCA), reactor core temperatures can rise rapidly, leading to potential fuel damage and radioactive material release. This research presents a groundbreaking method that combines the pow...
ISBN (print): 9781450398336
Offline imitation learning (OIL) is often used to solve complex continuous decision-making tasks. For tasks such as robot control and autonomous driving, it is either difficult to design an effective reward for learning, or expensive and time-consuming for agents to collect data by interacting with the environment. However, the data used in previous OIL methods are all gathered by reinforcement learning algorithms guided by task-specific rewards, which is not a truly reward-free premise and still suffers from the problem of designing an effective reward function in real tasks. To this end, we propose the reward-free exploratory-data-driven offline imitation learning (ExDOIL) framework. ExDOIL first trains an unsupervised reinforcement learning agent by interacting with the environment, collecting sufficient unsupervised exploration data during training; then, a task-independent yet simple and efficient reward function is used to relabel the collected data; finally, an agent is trained to imitate the expert and complete the task through a conventional RL algorithm such as TD3. Extensive experiments on continuous control tasks demonstrate that the proposed framework achieves better imitation performance (28% higher episode returns on average) compared with the previous SOTA method (ORIL) without any task-specific rewards.
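The abstract does not specify ExDOIL's task-independent reward function. Purely as an illustration of the relabeling step, the sketch below rewrites reward-free transitions with a simple task-agnostic signal (negative Euclidean distance of the next state to a reference point); the function names, the goal-distance choice, and the transition layout are all assumptions.

```python
import math

def relabel(transitions, goal):
    """Relabel reward-free transitions (state, action, next_state)
    with a hypothetical task-agnostic reward: negative Euclidean
    distance of the next state to a reference point `goal`."""
    out = []
    for s, a, s2 in transitions:
        r = -math.dist(s2, goal)
        out.append((s, a, r, s2))  # now (state, action, reward, next_state)
    return out

data = [((0.0, 0.0), 1, (1.0, 0.0)), ((1.0, 0.0), 0, (1.0, 1.0))]
print(relabel(data, goal=(1.0, 1.0)))
```

The relabeled tuples could then be fed to any conventional offline RL algorithm (e.g. TD3) as the framework describes.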
As the value of data has become widely recognized, it is now a consensus that data sharing creates greater value. However, data exchange has to rely on a trusted third party (TTP) as an intermediary in an untrusted network ...