
Robust Semi-Supervised Learning by Wisely Leveraging Open-Set Data

Authors: Yang, Yang; Jiang, Nan; Xu, Yi; Zhan, De-Chuan

Author affiliations: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, People's Republic of China; National Key Laboratory for Novel Software Technology, School of Artificial Intelligence, Nanjing University, Nanjing 210023, People's Republic of China; School of Control Science and Engineering, Dalian University of Technology, Dalian 116081, People's Republic of China

Publication: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (IEEE Trans Pattern Anal Mach Intell)

Year/Volume/Issue: 2024, Vol. 46, No. 12

Pages: 8334-8347


Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]

Funding: NSFC [2022YFF0712100]; Fundamental Research Funds for the Central Universities [62276131, 61921006, 08120002]; Fundamental Research Funds for the Central University of China; Collaborative Innovation Center of Novel Software Technology and Industrialization

Keywords: Data models; Training; Task analysis; Semi-supervised learning; Biological system modeling; Computational modeling; Mathematical models; OOD detection; open-set data

Abstract: Open-set Semi-supervised Learning (OSSL) considers the realistic setting in which unlabeled data may come from classes unseen in the labeled set, i.e., out-of-distribution (OOD) data, which can degrade the performance of conventional SSL models. To handle this issue, in addition to the traditional in-distribution (ID) classifier, some existing OSSL approaches employ an extra OOD detection module to avoid the potential negative impact of the OOD data. Nevertheless, these approaches typically use the entire set of open-set data during training, which may contain data unfriendly to the OSSL task and thus harm model performance. This inspires us to develop a robust open-set data selection strategy for OSSL. Through a theoretical understanding from the perspective of learning theory, we propose Wise Open-set Semi-supervised Learning (WiseOpen), a generic OSSL framework that selectively leverages the open-set data for training the model. By applying a gradient-variance-based selection mechanism, WiseOpen exploits a friendly subset instead of the whole open-set dataset to enhance the model's capability of ID classification. Moreover, to reduce the computational expense, we also propose two practical variants of WiseOpen by adopting low-frequency update and loss-based selection, respectively. Extensive experiments demonstrate the effectiveness of WiseOpen in comparison with the state-of-the-art.
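To make the gradient-variance-based selection idea concrete, below is a minimal, hypothetical sketch (not the authors' actual implementation): given each open-set sample's loss gradient recorded over several recent training steps, samples whose gradients vary least are treated as the "friendly" subset to keep. The function name, array shapes, and keep ratio are illustrative assumptions only.

```python
import numpy as np

def select_friendly_subset(per_sample_grads, keep_ratio=0.5):
    """Rank open-set samples by gradient variance; keep the lowest-variance
    fraction as the 'friendly' subset (illustrative sketch only).

    per_sample_grads: array of shape (n_samples, n_steps, grad_dim),
    holding each sample's loss gradient recorded over recent steps.
    """
    # Variance of each sample's gradient across steps, averaged over dims.
    variance = per_sample_grads.var(axis=1).mean(axis=1)
    n_keep = max(1, int(keep_ratio * len(variance)))
    # Lower variance -> more consistent training signal -> keep the sample.
    friendly_idx = np.argsort(variance)[:n_keep]
    return np.sort(friendly_idx)

# Toy demo: sample 0 has perfectly stable gradients, sample 1 noisy ones.
grads = np.zeros((2, 4, 3))
grads[0] = 1.0                                            # zero variance
grads[1] = np.random.default_rng(0).normal(size=(4, 3))   # noisy
print(select_friendly_subset(grads, keep_ratio=0.5))      # -> [0]
```

The paper's loss-based variant could analogously rank samples by per-sample loss instead of gradient variance, trading selection quality for cheaper bookkeeping.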
