Online estimation of product quality is a complicated task in refining processes. Data-driven soft sensors have been successfully employed as a supplement to online hardware analyzers, which are often expensive and require high maintenance. Support Vector Regression (SVR) is an efficient machine learning technique that can be used for soft sensor design. However, choosing optimal hyper-parameter values for the SVR is a hard optimization problem. In order to determine the parameters as quickly and accurately as possible, several Hybrid Meta-Heuristic (HMH) algorithms have been developed in this study. A comprehensive study has been carried out comparing the meta-heuristic algorithms GA and PSO to the HMH algorithms GA-SQP and PSO-SQP for prediction of sulfur quality in treated gas oil using the SVR technique. Experimental data from a hydrodesulfurization (HDS) setup were collected to validate the proposed SVR model. With hyper-parameters optimized by the HMH algorithms, the SVR model yields better performance in both accuracy and computation time (CT) for predicting the sulfur quality. Applying the PSO-SQP algorithm gives the best performance, with AARE = 0.133 and CT = 15.88 s, compared to the other methods. (C) 2014 Taiwan Institute of Chemical Engineers. Published by Elsevier B.V. All rights reserved.
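To make the hybrid meta-heuristic idea concrete, the sketch below (illustrative only, not the paper's PSO-SQP implementation) tunes an SVR's C, epsilon, and gamma by running a small particle swarm over their log-scaled values and then refining the swarm's best point with a local SLSQP step. The synthetic data, swarm size, and iteration counts are assumptions chosen for brevity.

```python
# Hedged sketch: PSO global search + SLSQP local refinement of SVR hyper-parameters.
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=8, noise=0.1, random_state=0)

def objective(theta):
    # theta = [log10(C), log10(epsilon), log10(gamma)]; lower is better
    C, eps, gamma = 10.0 ** np.asarray(theta)
    model = SVR(C=C, epsilon=eps, gamma=gamma)
    return -cross_val_score(model, X, y, cv=3).mean()

rng = np.random.default_rng(0)
bounds = np.array([[-1, 3], [-3, 0], [-3, 1]], dtype=float)  # search box in log10 space

# --- global stage: a tiny particle swarm ---
n_particles, n_iters = 10, 15
pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    f = np.array([objective(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

# --- local stage: SQP-style refinement of the swarm's best point ---
res = minimize(objective, gbest, method="SLSQP",
               bounds=[tuple(b) for b in bounds])
print("best log10(C, epsilon, gamma):", res.x, "CV score:", -res.fun)
```

The split mirrors the abstract's design choice: the population-based stage explores the hyper-parameter space globally, while the gradient-based SQP stage sharpens the final estimate at much lower cost than running the swarm longer.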
Compared with methods such as support vector machines, the Gaussian process (GP) has fewer parameters, a simpler model, and a probabilistic output. Estimation of the hyper-parameters is critical to the performance of Gaussian process regression. However, the commonly used algorithm has the disadvantages that the number of iteration steps is difficult to determine, the optimization result depends heavily on the initial values, and the search easily falls into local optima. To solve this problem, a method combining the Gaussian process with a memetic algorithm was proposed. In this method, the memetic algorithm searches for the optimal hyper-parameters of the Gaussian process regression (GPR) model during training, forming the MA-GPR algorithm, and the resulting model is then used for prediction and testing. When applied to battle effectiveness evaluation of a marine long-range precision strike system (LPSS), the proposed MA-GPR model significantly improved prediction accuracy compared with the conjugate gradient method and genetic algorithm optimization.
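The following sketch illustrates the memetic (evolutionary + local search) idea for GPR hyper-parameters, assuming an RBF-plus-white-noise kernel, synthetic 1-D data, and small population/generation counts; it maximizes the log marginal likelihood and is not the paper's exact MA-GPR algorithm.

```python
# Hedged sketch: memetic-style search over GPR kernel hyper-parameters (log-space).
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(60)

kernel = ConstantKernel(1.0) * RBF(1.0) + WhiteKernel(0.1)
gpr = GaussianProcessRegressor(kernel=kernel, optimizer=None).fit(X, y)

def neg_lml(theta):
    # theta lives in log-space, matching gpr.kernel_.theta
    return -gpr.log_marginal_likelihood(theta)

dim = gpr.kernel_.theta.shape[0]
pop = rng.normal(0.0, 1.0, size=(12, dim))       # initial population (log-space)
fitness = np.array([neg_lml(p) for p in pop])

for _ in range(10):
    # evolutionary step: keep the better half, add mutated copies
    order = fitness.argsort()
    parents = pop[order[: len(pop) // 2]]
    children = parents + rng.normal(0.0, 0.3, size=parents.shape)
    pop = np.vstack([parents, children])
    fitness = np.array([neg_lml(p) for p in pop])
    # local step (the "memetic" part): gradient refinement of the current best
    best = fitness.argmin()
    res = minimize(neg_lml, pop[best], method="L-BFGS-B")
    pop[best], fitness[best] = res.x, res.fun

best_theta = pop[fitness.argmin()]
print("best log-hyper-parameters:", best_theta,
      "log marginal likelihood:", gpr.log_marginal_likelihood(best_theta))
```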
Many data mining applications involve the task of building a model for predictive classification. The goal of this model is to classify data instances into classes or categories of the same type. The use of variables not related to the classes can reduce the accuracy and reliability of a classification or prediction model. Superfluous variables can also increase the cost of building a model, particularly on large datasets. The feature selection and hyper-parameter optimization problem can be solved either by an exhaustive search over all parameter values or by an optimization procedure that explores only a finite subset of the possible values. The objective of this research is to simultaneously optimize the hyper-parameters and the feature subset without degrading the generalization performance of the induction algorithm. We present a global optimization approach based on the Cross-Entropy Method to solve this kind of problem. (C) 2012 Elsevier Ltd. All rights reserved.
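A minimal cross-entropy-method sketch of this joint search is given below (illustrative, not the paper's exact procedure): Bernoulli probabilities model the feature-inclusion mask and a Gaussian models one SVM hyper-parameter (log10 C); both distributions are re-fitted to the elite samples each round. The classifier, data, sample sizes, and smoothing factor are assumptions.

```python
# Hedged sketch: cross-entropy method over a feature mask + one SVM hyper-parameter.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
rng = np.random.default_rng(0)
n_feat, n_samples_ce, n_elite = X.shape[1], 30, 6

p = np.full(n_feat, 0.5)      # Bernoulli feature-inclusion probabilities
mu, sigma = 0.0, 1.0          # Gaussian over log10(C)

def score(mask, log_c):
    if not mask.any():        # guard: empty feature subset
        return 0.0
    model = SVC(C=10.0 ** log_c)
    return cross_val_score(model, X[:, mask], y, cv=3).mean()

for _ in range(15):
    masks = rng.random((n_samples_ce, n_feat)) < p
    log_cs = rng.normal(mu, sigma, n_samples_ce)
    scores = np.array([score(m, c) for m, c in zip(masks, log_cs)])
    elite = scores.argsort()[-n_elite:]
    # cross-entropy update: refit the sampling distributions to the elite set
    p = 0.7 * masks[elite].mean(axis=0) + 0.3 * p      # smoothed update
    mu, sigma = log_cs[elite].mean(), log_cs[elite].std() + 1e-3

best_mask = p > 0.5
print("selected features:", np.flatnonzero(best_mask), "C ~", 10.0 ** mu)
```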
ISBN (print): 9783540874768
Support Vector Machines (SVMs) are a new generation of learning systems based on recent advances in statistical learning theory. A key problem with these methods is how to choose an optimal kernel and how to optimize its parameters. A (multiple) kernel adapted to the problem to be solved can improve SVM performance. Our goal is therefore to develop a model able to automatically generate a complex kernel combination (linear or non-linear, weighted or unweighted, according to the data) and to optimize both the kernel parameters and the SVM parameters by evolutionary means in a unified framework. Furthermore, we analyse the architecture of such a kernel of kernels (KoK). Numerical experiments show that the SVM algorithm involving the evolutionary KoK performs statistically better than some well-known classic kernels, and that its architecture adapts to each problem.
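As a hedged sketch of the evolved kernel-combination idea, the code below tunes a weighted sum of linear, RBF, and polynomial base kernels (plus the RBF gamma and the SVM C) with a simple (mu + lambda) evolution strategy; the cited work evolves richer, possibly non-linear kernel combinations, so this is only an assumed simplification on synthetic data.

```python
# Hedged sketch: evolving a weighted "kernel of kernels" for an SVM classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rng = np.random.default_rng(0)

def make_kernel(w_lin, w_rbf, w_poly, gamma):
    # returns a callable kernel usable directly by SVC
    def k(A, B):
        return (w_lin * linear_kernel(A, B)
                + w_rbf * rbf_kernel(A, B, gamma=gamma)
                + w_poly * polynomial_kernel(A, B, degree=2))
    return k

def fitness(genome):
    w_lin, w_rbf, w_poly, log_gamma, log_c = genome
    w = np.abs([w_lin, w_rbf, w_poly])
    w = w / (w.sum() + 1e-12)                      # normalised kernel weights
    model = SVC(kernel=make_kernel(*w, 10.0 ** log_gamma), C=10.0 ** log_c)
    return cross_val_score(model, X, y, cv=3).mean()

# (mu + lambda) evolution strategy over [weights, log10(gamma), log10(C)]
pop = rng.normal(0.0, 1.0, size=(8, 5))
for _ in range(10):
    children = pop + rng.normal(0.0, 0.3, size=pop.shape)
    both = np.vstack([pop, children])
    scores = np.array([fitness(g) for g in both])
    pop = both[scores.argsort()[-8:]]              # keep the best 8 genomes

best = pop[-1]
print("best genome (w_lin, w_rbf, w_poly, log10 gamma, log10 C):", best)
print("CV accuracy:", fitness(best))
```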