An adversarial transformer for anomalous Lamb wave pattern detection

Authors: Guo, Jiawei; Zhang, Sen; Amiri, Nikta; Yu, Lingyu; Wang, Yi

Affiliations: Univ South Carolina, Dept Mech Engn, Columbia, SC 29208, USA; Univ South Carolina, Dept Comp Sci & Engn, Columbia, SC 29208, USA

Publication: NEURAL NETWORKS (Neural Netw.)

Year/Volume: 2025, Vol. 185

Pages: 107153

Subject classification: 1002 [Medicine - Clinical Medicine]; 1001 [Medicine - Basic Medicine (medical or science degrees conferrable)]; 0812 [Engineering - Computer Science and Technology (engineering or science degrees conferrable)]; 10 [Medicine]

Funding: University of South Carolina; US Department of Energy [DE-SC0025538, DE-NE0008959]

Keywords: Anomaly detection; Unsupervised learning; Deep learning; Adversarial learning; Transformer model; Computer vision; Time-series data analysis

Abstract: Lamb waves are widely used for defect detection in structural health monitoring, and various methods have been developed for Lamb wave data analysis. This paper presents an unsupervised Adversarial Transformer model for anomalous Lamb wave pattern detection that analyzes the spatiotemporal images generated by a hybrid PZT-scanning laser Doppler vibrometer (SLDV) system. The model includes global and local attention mechanisms, both trained adversarially. Given the different natures of normal and anomalous wave patterns, global attention reconstructs normal wave data accurately but is less capable of reproducing anomalous data, and this reconstruction gap can therefore be used to detect anomalous wave patterns. Local attention, in turn, serves as a sparring partner in the proposed adversarial training process to boost the quality of global attention. In addition, a new segment replacement strategy is proposed to make global attention consistently extract the textural content found in normal data, which differs noticeably from that of anomalies, leading to superior model performance. Our Adversarial Transformer model is compared with several benchmark models and demonstrates an overall accuracy of 97.1% for anomalous wave pattern detection. It is also confirmed that global attention and local attention under adversarial training are responsible for our model's superior performance over the benchmark models, including the native Transformer model.
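To make the detection recipe in the abstract concrete, the following is a minimal PyTorch sketch of reconstruction-based adversarial anomaly detection as described above. It is an illustrative assumption, not the authors' released implementation: the module names (GlobalReconstructor, LocalDiscriminator), all dimensions, and the use of a short 1-D convolution as a stand-in for windowed local attention are hypothetical, and the paper's segment replacement strategy is not shown.

import torch
import torch.nn as nn

class GlobalReconstructor(nn.Module):
    """Transformer encoder with full (global) self-attention, trained to
    reconstruct normal Lamb wave sequences."""
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (batch, seq_len, d_model)
        return self.head(self.encoder(x))

class LocalDiscriminator(nn.Module):
    """Adversarial 'sparring partner' with a local receptive field; a 1-D
    convolution approximates windowed local attention for this sketch."""
    def __init__(self, d_model=64, kernel=5):
        super().__init__()
        self.conv = nn.Conv1d(d_model, 1, kernel, padding=kernel // 2)

    def forward(self, x):  # x: (batch, seq_len, d_model)
        return torch.sigmoid(self.conv(x.transpose(1, 2)))  # per-step realness

def anomaly_score(model, x):
    """Reconstruction error as the anomaly score: normal patterns
    reconstruct well, anomalous patterns do not."""
    model.eval()
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=(1, 2))

G, D = GlobalReconstructor(), LocalDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCELoss()

# Stand-in for real training data: only *normal* wave patterns are needed,
# flattened to (batch, seq_len, d_model).
normal_batch = torch.randn(8, 128, 64)

# Discriminator step: tell real normal data apart from reconstructions.
recon = G(normal_batch).detach()
real_pred, fake_pred = D(normal_batch), D(recon)
d_loss = bce(real_pred, torch.ones_like(real_pred)) + \
         bce(fake_pred, torch.zeros_like(fake_pred))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: reconstruct faithfully while fooling the local critic.
recon = G(normal_batch)
fake_pred = D(recon)
g_loss = ((recon - normal_batch) ** 2).mean() + \
         bce(fake_pred, torch.ones_like(fake_pred))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

# At test time, threshold the per-sample reconstruction error.
scores = anomaly_score(G, normal_batch)

Because training uses only normal data, the approach stays unsupervised; a detection threshold on the reconstruction score would be chosen on held-out normal measurements.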
