
Developing parallel sequential minimal optimization for fast training support vector machine


Authors: Cao, L. J.; Keerthi, S. S.; Ong, C. J.; Uvaraj, P.; Fu, X. J.; Lee, H. P.

Affiliations: Fudan Univ, Shanghai 200433, Peoples R China; Natl Univ Singapore, Dept Mech Engn, Singapore 119260, Singapore; Inst High Performance Comp, Singapore 117528, Singapore

Published in: NEUROCOMPUTING

Year/Volume/Issue: 2006, Vol. 70, No. 1-3

Pages: 93-104


Subject classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]

Funding: National Natural Science Research Fund (70501008); Shanghai Pujiang Program

Keywords: support vector machine (SVM); sequential minimal optimization (SMO); message passing interface (MPI); parallel algorithm

Abstract: A parallel version of sequential minimal optimization (SMO) is developed in this paper for fast training of support vector machines (SVMs). SMO is currently one of the most popular algorithms for training SVMs, but it still requires a large amount of computation time to solve large-size problems. The parallel SMO is developed based on the message passing interface (MPI). Unlike the sequential SMO, which handles all the training data points on a single CPU processor, the parallel SMO first partitions the entire training data set into smaller subsets and then runs multiple CPU processors simultaneously, each dealing with one of the partitioned subsets. Experiments show a great speedup on the adult, MNIST, and IDEVAL data sets when many processors are used, along with satisfactory results on the Web data set. This work is very useful for research where a machine with multiple CPU processors is available. (c) 2006 Elsevier B.V. All rights reserved.
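The abstract's partitioning scheme can be sketched as follows. This is not the authors' MPI implementation; it is a minimal illustration (using Python's `multiprocessing` in place of MPI, and an assumed RBF kernel) of the data-parallel idea: after an SMO step changes two multipliers, each worker updates the error cache only for its own partition of the training set, and the partial results are combined. All names below are hypothetical.

```python
import numpy as np
from multiprocessing import Pool

def rbf_kernel(X, x, gamma=0.5):
    # K(x_k, x) for every row x_k of X (assumed Gaussian/RBF kernel)
    return np.exp(-gamma * np.sum((X - x) ** 2, axis=1))

def update_errors(args):
    # One worker's job: refresh the error cache for its partition after the
    # multipliers of points x_i and x_j changed by d_ai and d_aj.
    X_part, err_part, x_i, x_j, y_i, y_j, d_ai, d_aj = args
    new_err = (err_part
               + d_ai * y_i * rbf_kernel(X_part, x_i)
               + d_aj * y_j * rbf_kernel(X_part, x_j))
    # Each worker also reports a local candidate for the next violating
    # point (here simply the largest-magnitude error in its partition).
    local_best = int(np.argmax(np.abs(new_err)))
    return new_err, local_best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 5))
    errors = rng.normal(size=400)
    x_i, x_j, y_i, y_j = X[0], X[1], 1.0, -1.0
    d_ai, d_aj = 0.2, 0.2

    # Serial reference: update the error cache over the whole data set.
    serial = (errors
              + d_ai * y_i * rbf_kernel(X, x_i)
              + d_aj * y_j * rbf_kernel(X, x_j))

    # Parallel version: partition the rows, one worker per partition
    # (the reduction step that MPI would do is the concatenate/map below).
    n_parts = 4
    jobs = [(Xp, ep, x_i, x_j, y_i, y_j, d_ai, d_aj)
            for Xp, ep in zip(np.array_split(X, n_parts),
                              np.array_split(errors, n_parts))]
    with Pool(n_parts) as pool:
        results = pool.map(update_errors, jobs)
    parallel = np.concatenate([r[0] for r in results])

    assert np.allclose(serial, parallel)  # partitions reproduce the serial update
```

Because each worker touches only its own rows, the per-step work scales down roughly with the number of partitions, which is the source of the speedup the abstract reports; the remaining serial cost is the reduction that selects the next pair of multipliers.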
