Multi-view transfer learning with privileged learning framework

Authors: He, Yiwei; Tian, Yingjie; Liu, Dalian

Affiliations: Univ Chinese Acad Sci, Sch Comp & Control Engn, Beijing 100049, Peoples R China; Univ Chinese Acad Sci, Sch Econ & Management, Beijing 100090, Peoples R China; Chinese Acad Sci, Res Ctr Fictitious Econ & Data Sci, Beijing 100190, Peoples R China; Chinese Acad Sci, Key Lab Big Data Min & Knowledge Management, Beijing 100190, Peoples R China; Beijing Union Univ, Dept Basic Course Teaching, Beijing 100101, Peoples R China

Publication: NEUROCOMPUTING

Year/Volume/Issue: 2019, Vol. 335

Pages: 131-142

Subject classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]

Funding: National Natural Science Foundation of China [71731009, 61472390, 71331005, 91546201]; Beijing Natural Science Foundation; Premium Funding Project for Academic Human Resources Development in Beijing Union University

Keywords: Multi-view learning; Transfer learning; Learning using privileged information; Support vector machine

Abstract: In this paper, we present a multi-view transfer learning model named the Multi-view Transfer Discriminative Model (MTDM) for both image and text classification tasks. Transfer learning, which aims to learn a robust classifier for the target domain using data drawn from a different distribution, has proved effective in many real-world applications. However, most existing transfer learning methods map cross-domain data into a high-dimensional space in which the distance between domains is reduced; this strategy often fails in the multi-view scenario. Conversely, multi-view learning methods are difficult to extend to transfer learning settings. One of our goals in this paper is to develop a model that performs well in both multi-view and transfer learning settings. On the one hand, the multi-view problem is handled via the paradigm of learning using privileged information (LUPI), which preserves the complementarity and consensus principles. On the other hand, the model makes full use of the source-domain data to build a robust classifier for the target domain. We evaluate our model on both image and text classification tasks and show its effectiveness compared with other baseline approaches. (C) 2019 Elsevier B.V. All rights reserved.
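For context, the LUPI paradigm named in the abstract and keywords is commonly instantiated as SVM+, where privileged features x* are available only during training and model the slack of the decision features x. A minimal sketch of that standard SVM+ objective follows (this is the generic LUPI formulation, not necessarily the exact MTDM objective of the paper; w, b act on the decision view and w*, b* on the privileged view, with trade-off parameters gamma and C):

\min_{w,\,b,\,w^*,\,b^*} \ \frac{1}{2}\|w\|^2 + \frac{\gamma}{2}\|w^*\|^2 + C \sum_{i=1}^{n} \big(\langle w^*, x_i^* \rangle + b^*\big)

\text{s.t.} \quad y_i\big(\langle w, x_i \rangle + b\big) \ \ge\ 1 - \big(\langle w^*, x_i^* \rangle + b^*\big), \qquad \langle w^*, x_i^* \rangle + b^* \ \ge\ 0, \qquad i = 1, \dots, n.

In a multi-view reading, one view supplies x_i and the other plays the privileged role x_i^*, so the correcting function \langle w^*, x_i^* \rangle + b^* couples the two views; this is one way the complementarity and consensus principles mentioned in the abstract can be realized.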
