
Convex ensemble learning with sparsity and diversity

Authors: Yin, Xu-Cheng; Huang, Kaizhu; Yang, Chun; Hao, Hong-Wei

Affiliations: Univ Sci & Technol Beijing, Sch Comp & Commun Engn, Dept Comp Sci & Technol, Beijing 100083, Peoples R China; Univ Sci & Technol Beijing, Sch Comp & Commun Engn, Beijing Key Lab Mat Sci Knowledge Engn, Beijing 100083, Peoples R China; Xian Jiaotong Liverpool Univ, Dept Elect & Elect Engn, Suzhou 215123, Peoples R China; Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China

Published in: INFORMATION FUSION

Year/Volume/Issue: 2014, Vol. 20, No. 1

Pages: 49-59

Subject classification: 08 [Engineering]; 0812 [Engineering: Computer Science and Technology (Engineering or Science degrees conferrable)]

Funding: National Basic Research Program of China [2012CB316301]; National Natural Science Foundation of China [61105018, 61175020]; R&D Special Fund for Public Welfare Industry (Meteorology) of China [GYHY201106039, GYHY201106047]

Keywords: Classifier ensemble; Sparsity; Diversity; Convex quadratic programming

Abstract: Classifier ensembles have been broadly studied in two prevalent directions: diversely generating classifier components, and sparsely combining multiple classifiers. While most current approaches emphasize either sparsity or diversity alone, in this paper we investigate classifier ensembles that focus on both. We formulate the classifier ensemble problem with sparsity and diversity learning in a general mathematical framework, which proves beneficial for grouping classifiers. In particular, derived from the error-ambiguity decomposition, we design a convex ensemble diversity measure. Consequently, accuracy loss, sparseness regularization, and the diversity measure can be balanced and combined in a convex quadratic programming problem. We prove that the final convex optimization leads to a closed-form solution, making it very appealing for real ensemble learning problems. We compare our proposed method with conventional ensemble methods such as Bagging, least-squares combination, sparsity learning, and AdaBoost, extensively on a variety of UCI benchmark data sets and the Pascal Large Scale Learning Challenge 2008 webspam data. Experimental results confirm that our approach achieves very promising performance. (C) 2013 Elsevier B.V. All rights reserved.
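To illustrate the kind of formulation the abstract describes (not the paper's exact objective), the sketch below combines base-classifier predictions by minimizing a squared accuracy loss plus an L2 regularizer minus a quadratic diversity reward, which admits a closed-form solution when the resulting quadratic form stays positive definite. All data, the covariance-based diversity matrix `D`, and the trade-off parameters `lam` and `gamma` are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: P[i, j] = real-valued prediction of base classifier j on sample i.
# (Hypothetical data: each classifier is the true label plus Gaussian noise.)
n_samples, n_classifiers = 200, 5
y = rng.choice([-1.0, 1.0], size=n_samples)
P = y[:, None] + 0.8 * rng.standard_normal((n_samples, n_classifiers))

# A simple pairwise matrix over classifier outputs (prediction covariance).
# The paper derives its diversity measure from the error-ambiguity
# decomposition; this covariance proxy only imitates that idea.
D = np.cov(P, rowvar=False)

lam, gamma = 1.0, 0.1  # regularization / diversity trade-off (assumed values)

# Closed-form minimizer of ||P w - y||^2 + lam ||w||^2 - gamma w^T D w.
# The objective is convex iff H = P^T P + lam I - gamma D is positive definite.
H = P.T @ P + lam * np.eye(n_classifiers) - gamma * D
assert np.all(np.linalg.eigvalsh(H) > 0), "objective not convex for these params"
w = np.linalg.solve(H, P.T @ y)

ensemble_pred = np.sign(P @ w)
accuracy = float(np.mean(ensemble_pred == y))
print(f"weights: {np.round(w, 3)}, accuracy: {accuracy:.2f}")
```

The closed form is the appeal: no iterative QP solver is needed as long as the convexity check on `H` passes, which mirrors the abstract's claim that the final convex optimization yields a closed-form solution.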
