
MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning

Authors: Li, Jiangmeng; Qiang, Wenwen; Zhang, Yanan; Mo, Wenyi; Zheng, Changwen; Su, Bing; Xiong, Hui

Affiliations: University of Chinese Academy of Sciences; Institute of Software, Chinese Academy of Sciences; Southern Marine Science and Engineering Guangdong Laboratory, Guangzhou, China; Gaoling School of Artificial Intelligence, Renmin University of China; Beijing Key Laboratory of Big Data Management and Analysis Methods, China; Thrust of Artificial Intelligence, The Hong Kong University of Science and Technology, Guangzhou, China; Guangzhou HKUST Fok Ying Tung Research Institute, China

Published in: arXiv

Year: 2022


Keywords: Redundancy

Abstract: As a successful approach to self-supervised learning, contrastive learning aims to learn the invariant information shared among distortions of the input sample. While contrastive learning has yielded continuous advancements in sampling strategy and architecture design, two persistent defects remain: the interference of task-irrelevant information and sample inefficiency, both related to the recurring appearance of trivial constant solutions. From the perspective of dimensional analysis, we find that dimensional redundancy and the dimensional confounder are the intrinsic issues behind these phenomena, and we provide experimental evidence to support this viewpoint. We further propose a simple yet effective approach, MetaMask (short for the dimensional Mask learned by Meta-learning), to learn representations robust against dimensional redundancy and the confounder. MetaMask adopts a redundancy-reduction technique to tackle the dimensional redundancy issue and innovatively introduces a dimensional mask that reduces the gradient effects of the specific dimensions containing the confounder; the mask is trained with a meta-learning paradigm whose objective is to improve the performance of the masked representations on a typical self-supervised task. We provide solid theoretical analyses proving that MetaMask obtains tighter risk bounds for downstream classification than typical contrastive methods. Empirically, our method achieves state-of-the-art performance on various benchmarks. Copyright © 2022, The Authors. All rights reserved.
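To make the core idea concrete, below is a minimal toy sketch (not the paper's implementation) of a learnable per-dimension mask updated against a masked self-supervised objective. The cosine-similarity loss, the finite-difference meta-step, and all function names (`masked_cosine_loss`, `meta_update_mask`) are illustrative assumptions standing in for the paper's contrastive loss and meta-gradient; the point shown is only that a dimension where two augmented views conflict (a stand-in for a dimensional confounder) receives a shrinking mask weight.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def masked_cosine_loss(z1, z2, mask):
    """Negative cosine similarity between two views after applying the
    per-dimension mask (a toy stand-in for the contrastive objective)."""
    a = [m * x for m, x in zip(mask, z1)]
    b = [m * x for m, x in zip(mask, z2)]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return -dot / (na * nb + 1e-8)

def meta_update_mask(logits, z1, z2, lr=0.5, eps=1e-4):
    """One toy meta-step: estimate d(loss)/d(logit_i) by finite
    differences and descend, so dimensions that hurt the masked
    self-supervised objective get smaller mask weights."""
    mask = [sigmoid(v) for v in logits]
    base = masked_cosine_loss(z1, z2, mask)
    updated = []
    for i, v in enumerate(logits):
        bumped = logits[:]
        bumped[i] += eps
        grad = (masked_cosine_loss(z1, z2, [sigmoid(u) for u in bumped]) - base) / eps
        updated.append(v - lr * grad)
    return updated

# Two augmented views that agree on dims 0-1 but conflict on dim 2;
# dim 2 plays the role of the dimensional confounder in this toy.
z1, z2 = [1.0, 1.0, 1.0], [1.0, 1.0, -1.0]
logits = [0.0, 0.0, 0.0]
for _ in range(20):
    logits = meta_update_mask(logits, z1, z2)
print([round(sigmoid(v), 3) for v in logits])  # dim 2's mask shrinks below the others
```

In the paper the mask is trained by differentiating through the self-supervised objective rather than by finite differences; this sketch only mirrors the sign of that update to show why the confounding dimension's gradient influence is down-weighted.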
