Author affiliations: Univ Chinese Acad Sci, Sch Comp & Control Engn, Beijing 100049, Peoples R China; Univ Chinese Acad Sci, Sch Econ & Management, Beijing 100090, Peoples R China; Chinese Acad Sci, Res Ctr Fictitious Econ & Data Sci, Beijing 100190, Peoples R China; Chinese Acad Sci, Key Lab Big Data Min & Knowledge Management, Beijing 100190, Peoples R China; Beijing Union Univ, Dept Basic Course Teaching, Beijing 100101, Peoples R China
Publication: NEUROCOMPUTING
Year/Volume: 2019, Vol. 335
Pages: 131-142
Core indexing:
Subject classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (Engineering or Science degree conferrable)]
Funding: National Natural Science Foundation of China [71731009, 61472390, 71331005, 91546201]; Beijing Natural Science Foundation; Premium Funding Project for Academic Human Resources Development in Beijing Union University
Keywords: Multi-view learning; Transfer learning; Learning using privileged information; Support vector machine
Abstract: In this paper, we present a multi-view transfer learning model named the Multi-view Transfer Discriminative Model (MTDM) for both image and text classification tasks. Transfer learning, which aims to learn a robust classifier for a target domain using data drawn from a different distribution, has proved effective in many real-world applications. However, most existing transfer learning methods map cross-domain data into a high-dimensional space in which the distance between domains is small; this strategy often fails in the multi-view scenario. Conversely, multi-view learning methods are difficult to extend to transfer learning settings. One of our goals in this paper is to develop a model that performs well in both multi-view and transfer learning settings. On the one hand, the multi-view problem is handled through the paradigm of learning using privileged information (LUPI), which preserves the complementarity and consensus principles. On the other hand, the model fully utilizes the source-domain data to build a robust classifier for the target domain. We evaluate our model on both image and text classification tasks and show its effectiveness compared with other baseline approaches. (C) 2019 Elsevier B.V. All rights reserved.
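The abstract's LUPI paradigm (Vapnik's "learning using privileged information") can be illustrated with a toy sketch. This is not the paper's MTDM model: instead of the SVM+ optimization it likely builds on, the sketch below uses a simpler distillation-style approximation, where a "teacher" SVM trained on a privileged view (available only at training time) re-weights training samples for a "student" SVM that sees only the regular view. All variable names and the synthetic data are illustrative assumptions.

```python
# LUPI illustration (assumption: distillation-style stand-in for SVM+,
# not the MTDM model from the paper).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n) * 2 - 1                          # labels in {-1, +1}
x_priv = y[:, None] + rng.normal(0.0, 0.3, (n, 2))         # clean privileged view
x_reg = y[:, None] + rng.normal(0.0, 1.5, (n, 2))          # noisy regular view

# Teacher: trained on the privileged view, used at training time only.
teacher = SVC(kernel="linear").fit(x_priv, y)
margins = teacher.decision_function(x_priv)

# Student: samples the teacher finds easy (large |margin|) get lower weight,
# hard samples get higher weight -- one simple way to pass the teacher's
# slack information to the regular-view classifier.
weights = 1.0 / (1.0 + np.abs(margins))
student = SVC(kernel="linear").fit(x_reg, y, sample_weight=weights)

# At test time only the regular view is available.
x_test = np.array([[2.0, 2.0], [-2.0, -2.0]])
preds = student.predict(x_test)
```

The key design point matching the abstract is that the privileged view influences training but is never required at prediction time, so the deployed classifier works on the regular view alone.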