PRE-TRAINING GRAPH CONTRASTIVE MASKED AUTOENCODERS ARE STRONG DISTILLERS FOR EEG

Authors: Wei, Xinxu; Zhao, Kanhao; Jiao, Yong; Carlisle, Nancy B.; Xie, Hua; Zhang, Yu

Affiliations: Department of Electrical and Computer Engineering, Lehigh University, Bethlehem, PA, United States; Department of Bioengineering, Lehigh University, Bethlehem, PA, United States; Department of Psychology, Lehigh University, Bethlehem, PA, United States; Center for Neuroscience Research, Children's National Hospital, Washington, DC, United States

Published in: arXiv

Year: 2024

Subject: Labeled data

Abstract: Effectively utilizing extensive unlabeled high-density EEG data to improve performance in scenarios with limited labeled low-density EEG data presents a significant challenge. In this paper, we address this challenge by framing it as a graph transfer learning and knowledge distillation problem. We propose a Unified Pre-trained Graph Contrastive Masked Autoencoder Distiller, named EEG-DisGCMAE, to bridge the gap between unlabeled/labeled and high/low-density EEG data. To fully leverage the abundant unlabeled EEG data, we introduce a novel unified graph self-supervised pre-training paradigm, which seamlessly integrates Graph Contrastive Pre-training and Graph Masked Autoencoder Pre-training. This approach synergistically combines contrastive and generative pre-training techniques by reconstructing contrastive samples and contrasting the reconstructions. For knowledge distillation from high-density to low-density EEG data, we propose a Graph Topology Distillation loss function, allowing a lightweight student model trained on low-density data to learn from a teacher model trained on high-density data, effectively handling missing electrodes through contrastive distillation. To integrate transfer learning and distillation, we jointly pre-train the teacher and student models by contrasting their queries and keys during pre-training, enabling robust distillers for downstream tasks. We demonstrate the effectiveness of our method on four classification tasks across two clinical EEG datasets with abundant unlabeled data and limited labeled data. The experimental results show that our approach significantly outperforms contemporary methods in both efficiency and accuracy. © 2024, CC BY-NC-SA.
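To make the training recipe described in the abstract concrete, the following is a minimal PyTorch sketch of its two key ingredients: a unified objective that reconstructs two masked views and contrasts the reconstructions (the generative-plus-contrastive combination), and a topology-style distillation term that aligns a low-density student with a high-density teacher. All names here (`encoder`, `decoder`, `unified_pretrain_loss`, `topology_distill_loss`) are hypothetical; this is a sketch under simplifying assumptions, not the authors' EEG-DisGCMAE implementation.

```python
# Illustrative sketch only. `encoder` and `decoder` stand for any graph
# encoder/decoder mapping (N, D) node features to (N, H) codes and back;
# all names are hypothetical, not the paper's API.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.2):
    """Symmetric InfoNCE between two (N, H) sets of node embeddings."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                       # N x N similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

def unified_pretrain_loss(x, encoder, decoder, mask_ratio=0.5, lam=1.0):
    """Generative + contrastive pre-training: reconstruct two independently
    masked views of the node features, then contrast their latent codes
    ("reconstructing contrastive samples and contrasting the reconstructions")."""
    recon_terms, codes = [], []
    for _ in range(2):                               # two stochastic masked views
        m = (torch.rand(x.size(0), 1, device=x.device) < mask_ratio).float()
        z = encoder(x * (1.0 - m))                   # masked nodes zeroed out
        x_hat = decoder(z)
        # squared reconstruction error, averaged over the masked nodes only
        recon_terms.append(((x_hat - x) ** 2 * m).sum() / m.sum().clamp(min=1.0))
        codes.append(z)
    return recon_terms[0] + recon_terms[1] + lam * info_nce(codes[0], codes[1])

def topology_distill_loss(z_student, z_teacher, tau=0.2):
    """Topology-style distillation: make the student's pairwise-similarity
    structure match the teacher's. Assumes z_teacher has already been
    restricted to the electrodes shared with the low-density student montage."""
    def sim(z):
        z = F.normalize(z, dim=-1)
        return F.softmax(z @ z.t() / tau, dim=-1)
    return F.kl_div(sim(z_student).log(), sim(z_teacher), reduction="batchmean")
```

In the paper's setup, the distillation term is applied between the jointly pre-trained teacher and student; it is shown here as a standalone loss for clarity.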
