arXiv

SkeletonMAE: Spatial-Temporal Masked Autoencoders for Self-supervised Skeleton Action Recognition

Authors: Wu, Wenhan; Hua, Yilei; Zheng, Ce; Wu, Shiqian; Chen, Chen; Lu, Aidong

Affiliations: Department of Computer Science, University of North Carolina at Charlotte, United States; School of Information Science and Engineering, Wuhan University of Science and Technology, China; Center for Research in Computer Vision, University of Central Florida, United States

Published in: arXiv

Year: 2022

Subject: Deep learning

Abstract: Fully supervised skeleton-based action recognition has made great progress with the blooming of deep learning techniques. However, these methods require sufficient labeled data, which is not easy to obtain. In contrast, self-supervised skeleton-based action recognition has attracted more attention. By utilizing unlabeled data, more generalizable features can be learned to alleviate the overfitting problem and reduce the demand for massive labeled training data. Inspired by MAE [15], we propose a spatial-temporal masked autoencoder framework for self-supervised 3D skeleton-based action recognition (SkeletonMAE). Following MAE's masking and reconstruction pipeline, we utilize a skeleton-based encoder-decoder transformer architecture to reconstruct the masked skeleton sequences. A novel masking strategy, named Spatial-Temporal Masking, is introduced at both the joint level and the frame level of the skeleton sequence. This pre-training strategy makes the encoder output generalizable skeleton features with spatial and temporal dependencies. Given the unmasked skeleton sequence, the encoder is fine-tuned for the action recognition task. Extensive experiments show that our SkeletonMAE achieves remarkable performance and outperforms state-of-the-art methods on both the NTU RGB+D and NTU RGB+D 120 datasets. Copyright © 2022, The Authors. All rights reserved.
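
To make the masking strategy concrete, below is a minimal Python/NumPy sketch of how joint-level plus frame-level masking could be computed for a skeleton tensor of shape (frames, joints, channels). This is an illustration of the idea only, not the authors' code: the function name, mask ratios, and NumPy implementation are all assumptions.

# Minimal sketch (assumption, not the paper's implementation) of
# Spatial-Temporal Masking: hide random joints within each frame
# (joint-level) and random whole frames (frame-level).
import numpy as np

def spatial_temporal_mask(seq, joint_ratio=0.4, frame_ratio=0.4, rng=None):
    """Return a boolean mask of shape (T, J); True marks masked entries."""
    rng = rng or np.random.default_rng()
    T, J, _ = seq.shape
    mask = np.zeros((T, J), dtype=bool)

    # Joint-level (spatial) masking: hide random joints in every frame.
    n_joints = int(J * joint_ratio)
    for t in range(T):
        mask[t, rng.choice(J, size=n_joints, replace=False)] = True

    # Frame-level (temporal) masking: hide randomly chosen whole frames.
    n_frames = int(T * frame_ratio)
    mask[rng.choice(T, size=n_frames, replace=False), :] = True
    return mask

# Example: a 64-frame sequence with 25 joints in 3D (the NTU RGB+D
# joint count); only the unmasked tokens would be fed to the encoder.
seq = np.random.randn(64, 25, 3).astype(np.float32)
mask = spatial_temporal_mask(seq)
visible = seq[~mask]          # (num_visible, 3) visible joint coordinates
print(mask.mean())            # effective overall mask ratio

In an MAE-style pipeline, the decoder would then be trained to reconstruct the coordinates at the masked positions from the encoder's output plus mask tokens; the ratios above are illustrative hyperparameters, not values taken from the paper.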
