Authors:
Xu, Tu (SW Jiaotong Univ, Sch Informat Sci & Technol, Chengdu 610031, Sichuan, Peoples R China)
ISBN (Print): 9780769536149
Hyper-Sphere Multi-Class SVM (HSMC-SVM) is a direct-model multi-class classifier with high training and testing speed. However, its first-order-norm soft margin limits classification precision. To improve precision, the least-squares method is introduced into HSMC-SVM, yielding a new multi-class classifier, the Least Square Hyper-Sphere Multi-Class SVM (LSHS-MCSVM). The training algorithm and decision rules of LSHS-MCSVM are also discussed, so its classification theory is established completely. Numerical experiments show that LSHS-MCSVM outperforms HSMC-SVM in both training speed and classification precision, making it suitable for problems with many classes and large-scale training sets.
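A minimal sketch of the per-class least-squares hyper-sphere idea, assuming an LS-SVDD-style formulation in which each class is enclosed by a kernel-space sphere and the slacks enter as squared errors, so training reduces to one small linear system per class; the exact system, kernel, and decision rule used in the paper may differ.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """RBF kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_ls_sphere(X, C=10.0, gamma=0.5):
    """Least-squares hyper-sphere for one class (LS-SVDD-style sketch).

    Solves the KKT linear system of
        min R^2 + C * sum_i e_i^2   s.t.  ||phi(x_i) - a||^2 = R^2 + e_i,
    giving center weights alpha (a = sum_i alpha_i phi(x_i)) and R^2.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = 2 * K + np.eye(n) / (2 * C)
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.concatenate([np.diag(K), [1.0]])
    sol = np.linalg.solve(A, b)
    alpha, rho = sol[:n], sol[n]
    R2 = rho + alpha @ K @ alpha          # recover the squared radius
    return X, alpha, R2, gamma

def sphere_distance2(model, Xtest):
    """Squared kernel distance of test points to the sphere center."""
    Xc, alpha, R2, gamma = model
    Kxx = np.ones(len(Xtest))             # k(x, x) = 1 for the RBF kernel
    Kxc = rbf_kernel(Xtest, Xc, gamma)
    Kcc = rbf_kernel(Xc, Xc, gamma)
    return Kxx - 2 * Kxc @ alpha + alpha @ Kcc @ alpha

def predict(models, Xtest):
    """Assign each point to the class whose sphere fits it best (d^2 - R^2 smallest)."""
    scores = np.stack([sphere_distance2(m, Xtest) - m[2] for m in models], axis=1)
    return scores.argmin(axis=1)

# toy usage: one sphere per class, then a minimum-relative-distance decision
rng = np.random.default_rng(0)
X0 = rng.normal(0, 0.5, (40, 2)); X1 = rng.normal(3, 0.5, (40, 2))
models = [fit_ls_sphere(X0), fit_ls_sphere(X1)]
print(predict(models, np.array([[0.1, -0.2], [2.8, 3.1]])))
```

Replacing the inequality slacks of HSMC-SVM with squared (least-squares) slacks is what turns the per-class QP into the linear system above, which is the source of the claimed training speed-up.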
ISBN (Print): 9781538619186
Autonomous Underwater Vehicles (AUVs) have already been applied to ocean resource observation, environmental survey, underwater rescue and many other kinds of oceanic activity. To achieve better performance and maneuvering, and thus to meet more complex tasks, researchers and operators need a deeper understanding of AUV dynamics, on the basis of which more efficient control algorithms can be designed. This paper applies the support vector machine method, drawn from machine learning, to AUV dynamic parameter identification and verifies its feasibility through simulations.
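A toy sketch of the identification idea, assuming a simplified one-degree-of-freedom surge model m*du/dt = X_u*u + X_uu*u|u| + T; the model, coefficient values, and the use of a linear-kernel SVR from scikit-learn are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.svm import SVR

# Illustrative 1-DOF surge model (assumed, not from the paper):
#   m * du/dt = X_u * u + X_uu * u*|u| + T
m, X_u, X_uu = 100.0, -20.0, -5.0          # hypothetical AUV parameters
rng = np.random.default_rng(1)

# simulate surge velocity under a slowly varying thrust (Euler integration)
dt, steps = 0.05, 2000
u = np.zeros(steps)
T = 50.0 * np.sin(0.2 * np.arange(steps) * dt) + rng.normal(0, 5.0, steps)
for k in range(steps - 1):
    du = (X_u * u[k] + X_uu * u[k] * abs(u[k]) + T[k]) / m
    u[k + 1] = u[k] + dt * du

# regression target: acceleration estimated by finite differences (plus noise)
acc = np.gradient(u, dt) + rng.normal(0, 1e-3, steps)

# features chosen so a linear model recovers the hydrodynamic terms
features = np.column_stack([u, u * np.abs(u), T])

# linear-kernel epsilon-SVR (solved by an SMO-type algorithm inside libsvm)
svr = SVR(kernel="linear", C=100.0, epsilon=1e-3)
svr.fit(features, acc)

# coefficients of [u, u|u|, T] estimate [X_u/m, X_uu/m, 1/m]
w = svr.coef_.ravel()
print("estimated X_u, X_uu, m:", w[0] / w[2], w[1] / w[2], 1.0 / w[2])
```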
ISBN (Print): 9781424416738
Support Vector Machines generalize well, but their development for function regression has not advanced as rapidly as for classification. Sequential Minimal Optimization (SMO) is effective on large sample sets and is used to handle problems with sparse solutions. Exploiting the ability of Rough Set (RS) theory to handle imprecise data, the boundary samples identified by RS replace the original inputs as the training subset. Because both the training set and the resulting set of support vectors shrink, learning is accelerated while high-quality solutions are preserved. Based on rough sets and the SMO regression algorithm, a hybrid algorithm (RS-smo-RA) is presented for function regression. Only a small, simple module is needed to single out the boundary samples, after which RS-smo-RA outperforms the ordinary SMO regression algorithm. Finally, experimental results from implementing and testing the two algorithms are reported and evaluated.
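A rough sketch of the two-stage idea, assuming a crude stand-in for the rough-set module: the inputs are discretized into equivalence classes (grid cells), samples in cells where the target varies strongly are kept as "boundary" samples, and only those are passed to scikit-learn's SVR (whose solver is SMO-based). The discretization rule, threshold, and toy data are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def boundary_subset(X, y, bins=20, spread=0.1):
    """Crude rough-set-style filter: discretize inputs into equivalence
    classes (grid cells) and keep samples from cells where the target
    varies strongly, i.e. the boundary region of the function."""
    span = np.ptp(X, axis=0) + 1e-12
    cells = np.floor((X - X.min(axis=0)) / span * bins).astype(int)
    keys = [tuple(c) for c in cells]
    keep = np.zeros(len(y), dtype=bool)
    for key in set(keys):
        idx = [i for i, k in enumerate(keys) if k == key]
        if np.ptp(y[idx]) > spread * np.ptp(y):   # high variation => boundary cell
            keep[idx] = True
    return keep

# toy regression problem: smooth almost everywhere, sharp transition near x = 0
rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, (2000, 1))
y = np.tanh(5 * X[:, 0]) + rng.normal(0, 0.01, 2000)

keep = boundary_subset(X, y)
print("training on", keep.sum(), "of", len(y), "samples")

# SMO-based epsilon-SVR trained only on the selected boundary subset
svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X[keep], y[keep])
print("support vectors:", len(svr.support_))
```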
ISBN (Print): 9783037851494
Training an SVM corresponds to solving a linearly constrained quadratic program (QP) with as many variables as data points, and this optimization problem becomes challenging when the number of data points exceeds a few thousand. Because the computational complexity of existing algorithms grows prohibitively once there are a few thousand support vectors, making the SVM QP intractable, several decomposition algorithms that make no assumptions about the expected number of support vectors have been proposed instead. In this paper we propose a heuristic gradient-type learning algorithm for training an SVM on linearly separable data and analyze its performance in terms of accuracy and efficiency. To evaluate the efficiency of our learning method, several tests were run against Platt's SMO method, and the conclusions are formulated in the final section of the paper.
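The paper's specific gradient heuristic is not reproduced here; below is a generic primal subgradient sketch for a linear SVM on separable data (hinge loss plus L2 regularization), with scikit-learn's SMO-based SVC as the kind of reference it would be benchmarked against. The learning rate, decay schedule, and toy data are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def gradient_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Subgradient descent on the regularized hinge loss
    (1/n) * sum_i max(0, 1 - y_i (w.x_i + b)) + lam/2 * ||w||^2."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for t in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                       # points violating the margin
        gw = lam * w - (y[active][:, None] * X[active]).sum(axis=0) / n
        gb = -y[active].sum() / n
        step = lr / (1 + 0.01 * t)                 # simple decaying step size
        w -= step * gw
        b -= step * gb
    return w, b

# linearly separable toy data
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])

w, b = gradient_svm(X, y)
acc_grad = np.mean(np.sign(X @ w + b) == y)

svc = SVC(kernel="linear", C=1.0).fit(X, y)        # SMO-based reference solver
print(f"gradient: {acc_grad:.3f}  smo: {svc.score(X, y):.3f}")
```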
Nonparametric kernel methods are widely used and proven to be successful in many statistical learning problems. Well-known examples include the kernel density estimate (KDE) for density estimation and the support vector machine (SVM) for classification. We propose a kernel classifier that optimizes the L2 or integrated squared error (ISE) of a "difference of densities." We focus on the Gaussian kernel, although the method applies to other kernels suitable for density estimation. Like a support vector machine (SVM), the classifier is sparse and results from solving a quadratic program. We provide statistical performance guarantees for the proposed L2 kernel classifier in the form of a finite sample oracle inequality and strong consistency in the sense of both ISE and probability of error. A special case of our analysis applies to a previously introduced ISE-based method for kernel density estimation. For dimensionality greater than 15, the basic L2 kernel classifier performs poorly in practice. Thus, we extend the method through the introduction of a natural regularization parameter, which allows it to remain competitive with the SVM in high dimensions. Simulation results for both synthetic and real-world data are presented.
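A plug-in (non-sparse) sketch of the difference-of-densities idea: each class density is estimated with a Gaussian KDE and a point is labeled by the sign of the prior-weighted density difference. The L2/ISE classifier of the paper instead obtains sparse kernel weights by solving a quadratic program; scikit-learn's KernelDensity and the fixed bandwidth here are illustrative choices.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_dod_classifier(X, y, bandwidth=0.5):
    """Plug-in difference-of-densities classifier with Gaussian kernels."""
    Xp, Xn = X[y == 1], X[y == -1]
    kde_p = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(Xp)
    kde_n = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(Xn)
    prior_p = len(Xp) / len(X)
    return kde_p, kde_n, prior_p

def predict_dod(model, Xtest):
    kde_p, kde_n, prior_p = model
    # sign of the prior-weighted density difference p1*f1(x) - p0*f0(x)
    diff = (prior_p * np.exp(kde_p.score_samples(Xtest))
            - (1 - prior_p) * np.exp(kde_n.score_samples(Xtest)))
    return np.where(diff >= 0, 1, -1)

# toy two-Gaussian problem
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.hstack([-np.ones(200), np.ones(200)])
model = fit_dod_classifier(X, y)
print("training accuracy:", np.mean(predict_dod(model, X) == y))
```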
The traditional method of flatness pattern recognition, a neural network with a changing topological configuration, suffers from slow convergence and local minima. Moreover, tuning the network's initial parameters and structure by prior experience has proved time-consuming and complex. In this paper, a new approach is proposed based on the structural equivalence of the radial basis function (RBF) network and the Support Vector Machine (SVM). The SMO algorithm is employed to obtain a better structure and initial parameters for the RBF network, and the BP algorithm is then used to fine-tune it. Inheriting the advantages of the SVM, such as fast learning and global optimization, the new approach is efficient and intelligent.
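A compact sketch of the SVM-to-RBF-network pipeline described here: an RBF-kernel SVC (trained internally by SMO) supplies the centers (its support vectors) and initial output weights (its dual coefficients), after which a few gradient-descent steps on squared error play the role of the BP fine-tuning stage. The placeholder features stand in for flatness patterns, and all hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(0, 1, (300, 4))                     # placeholder feature vectors
y = np.where(X[:, 0] + 0.5 * X[:, 1] ** 2 > 0, 1.0, -1.0)

# 1) SMO-trained SVM fixes the RBF structure: centers = support vectors
gamma = 0.5
svc = SVC(kernel="rbf", gamma=gamma, C=10.0).fit(X, y)
centers = svc.support_vectors_
w = svc.dual_coef_.ravel().astype(float).copy()    # initial output weights
b = float(svc.intercept_[0])

def hidden(Xb):
    """RBF hidden-layer activations for a batch of inputs."""
    d2 = ((Xb[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# 2) "BP" stage: gradient descent on squared error to fine-tune w and b
lr = 0.01
for epoch in range(100):
    H = hidden(X)
    err = H @ w + b - y
    w -= lr * (H.T @ err) / len(X)
    b -= lr * err.mean()

print("accuracy after fine-tuning:", np.mean(np.sign(hidden(X) @ w + b) == y))
```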