
Contrastive autoencoder for anomaly detection in multivariate time series

Authors: Zhou, Hao; Yu, Ke; Zhang, Xuan; Wu, Guanlin; Yazidi, Anis

Affiliations: Beijing Univ Posts & Telecommun, Beijing, Peoples R China; Norwegian Res Ctr AS, Grimstad, Norway; Oslo Metropolitan Univ, Oslo, Norway; Norwegian Univ Sci & Technol, Trondheim, Norway; Oslo Univ Hosp, Oslo, Norway

Publication: INFORMATION SCIENCES

Year/Volume: 2022, Vol. 610

Pages: 266-280


Subject classification: 12 [Management]; 1201 [Management - Management Science and Engineering (degrees in Management or Engineering)]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees in Engineering or Science)]

Funding: National Natural Science Foundation of China [61601046, 61171098]; 111 Project of China [B08004]; EEA BUPT Excellent Ph.D. Students Foundation [CX2022149]; Norway Grants 2014-2021 [EEA-RO-NO-2018-04]

Keywords: Anomaly detection; Multivariate time series; Autoencoder; Contrastive learning; Data augmentation

Abstract: With the proliferation of the Internet of Things, a large amount of multivariate time series (MTS) data is being produced daily by industrial systems, in many cases corresponding to life-critical tasks. Recent anomaly detection research focuses on using deep learning methods to construct a normal profile for MTS. However, without proper constraints, these methods cannot capture the dependencies and dynamics of MTS and thus fail to model the normal pattern, resulting in unsatisfactory performance. This paper proposes CAE-AD, a novel contrastive autoencoder for anomaly detection in MTS, which introduces multi-grained contrasting methods to extract the normal data pattern. First, to capture the temporal dependency of the series, a projection layer is employed and a novel contextual contrasting method is applied to learn a robust temporal representation. Second, the projected series is transformed into two different views using time-domain and frequency-domain data augmentation. Last, an instance contrasting method is proposed to learn local invariant characteristics. The experimental results show that CAE-AD achieves an F1-score ranging from 0.9119 to 0.9376 on three public datasets, outperforming the baseline methods. (c) 2022 Published by Elsevier Inc.
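As a rough illustration only, the PyTorch sketch below shows the general idea described in the abstract: an autoencoder reconstruction loss combined with an NT-Xent-style instance contrast between a time-domain view (Gaussian jitter) and a frequency-domain view (random spectral masking) of the same window. This is a minimal sketch under simplifying assumptions, not the authors' CAE-AD implementation; it omits the contextual contrasting term, and all layer sizes, augmentation parameters, and the loss weighting are illustrative.

```python
# Hypothetical sketch of a contrastive autoencoder for MTS anomaly detection.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastiveAE(nn.Module):
    def __init__(self, n_features, seq_len, latent_dim=64, proj_dim=32):
        super().__init__()
        flat = n_features * seq_len
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(flat, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, flat))
        # Projection head used only for the contrastive objective.
        self.projector = nn.Sequential(nn.Linear(latent_dim, proj_dim), nn.ReLU(),
                                       nn.Linear(proj_dim, proj_dim))
        self.n_features, self.seq_len = n_features, seq_len

    def forward(self, x):                       # x: (batch, seq_len, n_features)
        z = self.encoder(x)
        recon = self.decoder(z).view(-1, self.seq_len, self.n_features)
        return recon, F.normalize(self.projector(z), dim=-1)


def time_aug(x, sigma=0.05):
    """Time-domain view: additive Gaussian jitter."""
    return x + sigma * torch.randn_like(x)


def freq_aug(x, drop_prob=0.2):
    """Frequency-domain view: randomly zero some Fourier coefficients."""
    spec = torch.fft.rfft(x, dim=1)
    mask = (torch.rand_like(spec.real) > drop_prob).to(spec.dtype)
    return torch.fft.irfft(spec * mask, n=x.size(1), dim=1)


def nt_xent(p1, p2, temperature=0.5):
    """Instance-level contrastive loss between two views of the same batch."""
    batch = p1.size(0)
    reps = torch.cat([p1, p2], dim=0)                      # (2B, proj_dim)
    sim = reps @ reps.t() / temperature                    # cosine sims (inputs normalized)
    eye = torch.eye(2 * batch, dtype=torch.bool, device=reps.device)
    sim = sim.masked_fill(eye, -1e9)                       # exclude self-pairs
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)]).to(reps.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    model = ContrastiveAE(n_features=5, seq_len=64)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(32, 64, 5)                             # toy batch of normal windows
    recon, _ = model(x)
    _, p1 = model(time_aug(x))
    _, p2 = model(freq_aug(x))
    loss = F.mse_loss(recon, x) + 0.1 * nt_xent(p1, p2)    # reconstruction + contrastive
    loss.backward(); opt.step()
    # At test time, a window's reconstruction error can serve as its anomaly score.
    score = ((recon - x) ** 2).mean(dim=(1, 2))
```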
