To date, quality-related multivariate statistical methods are extensively used in process monitoring and have achieved admirable effects. However, most of them contain recursive processes, which result in higher time complexity and are not suitable for increasingly complex industrial processes. Therefore, this paper embeds singular value decomposition (SVD) into kernel principal component regression (KPCR) to accomplish quality-related process monitoring at a lower computational cost. Specifically, the kernel technique is used to map the original input into a higher-dimensional space to boost the nonlinear ability of principal component regression (PCR), and KPCR is then employed to capture the correlation between the input kernel matrix and the output matrix. At the same time, the kernelized input space is decomposed by SVD into two orthogonal quality-related and quality-unrelated subspaces, and statistics are calculated in each subspace to detect faults. Compared with other multivariate statistical methods, the approach has the following advantages: 1) a quality-related kernel principal component analysis (QR-KPCR) algorithm is proposed; 2) compared with the partial least squares method, the recursive process is omitted and the training time is shortened; 3) the model is more concise and fault detection is faster; 4) compared with other multivariate statistical process monitoring methods, it achieves a higher fault detection rate. Experimental results on a widely used example and an industry benchmark verify the effectiveness and reliability of the proposed method.
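For readers who want a concrete picture of the decomposition described above, the following Python sketch shows one way such a quality-related KPCR monitor could be assembled: kernel PCA scores are computed from a centred Gaussian kernel matrix, the quality variables are regressed on the scores, and an SVD of the regression coefficients splits the score space into quality-related and quality-unrelated parts with a simple T²-type statistic in each. The kernel width, component count, and statistic definitions are illustrative assumptions, not the paper's exact formulation, and control limits are omitted.

import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def qr_kpcr_monitor(X, Yq, gamma=1.0, n_pc=8):
    # X: (n, m) process inputs; Yq: (n, q) quality variables (2-D).
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    Kc = H @ K @ H                               # centred kernel matrix
    w, V = np.linalg.eigh(Kc)                    # kernel PCA eigenpairs
    idx = np.argsort(w)[::-1][:n_pc]
    w, V = w[idx], V[:, idx]
    T = V * np.sqrt(np.maximum(w, 1e-12))        # kernel PC scores
    B, *_ = np.linalg.lstsq(T, Yq, rcond=None)   # KPCR regression step
    # SVD of the coefficient matrix splits the score space into
    # quality-related and quality-unrelated orthogonal directions.
    U, _, _ = np.linalg.svd(B, full_matrices=True)
    r = int(np.linalg.matrix_rank(B))
    P_rel = U[:, :r] @ U[:, :r].T                # quality-related projector
    P_unr = np.eye(n_pc) - P_rel                 # quality-unrelated projector
    T_rel, T_unr = T @ P_rel, T @ P_unr
    # Simple T^2-type monitoring statistics for the two subspaces.
    t2_rel = (T_rel**2 / (T_rel.var(axis=0, ddof=1) + 1e-12)).sum(axis=1)
    t2_unr = (T_unr**2 / (T_unr.var(axis=0, ddof=1) + 1e-12)).sum(axis=1)
    return t2_rel, t2_unr

# Toy usage: two quality variables driven nonlinearly by the inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
Yq = np.column_stack([np.sin(X[:, 0]), X[:, 1] * X[:, 2]])
t2_rel, t2_unr = qr_kpcr_monitor(X, Yq, gamma=0.5)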
A batch-to-batch model-based iterative learning control (ILC) strategy for end-point product quality control in batch processes is proposed in this paper. A nonlinear model for end-point product quality is developed from process operating data using kernel principal component regression (KPCR). The ILC algorithm calculates the control policy by linearizing the KPCR model around the nominal trajectories and minimizing a quadratic objective function on the end-point product quality. To overcome the detrimental effects of unknown process variations or disturbances, the paper proposes updating the KPCR model batchwise by removing the earliest batch from the training data set and adding the latest batch to it. The ILC based on the updated KPCR model adapts to process variations and disturbances when applied to a simulated batch polymerization process. Comparisons between the KPCR-model-based and principal component regression (PCR)-model-based ILCs are also made in the simulations.
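As a rough illustration of the batch-to-batch update described in this abstract, the sketch below implements (i) one ILC step that minimises a quadratic objective around a locally linearised quality model and (ii) a moving-window refresh of the training data. The gradient matrix G, the weights Q and R, and the window handling are assumptions for illustration; they stand in for the linearised KPCR model and the tuning choices of the paper rather than reproducing them.

import numpy as np

def ilc_update(u_k, y_k, y_sp, G, Q, R):
    # One batch-to-batch ILC step: minimise the quadratic objective
    #   ||y_sp - y_{k+1}||_Q^2 + ||du||_R^2
    # with the locally linearised quality model y_{k+1} = y_k + G @ du,
    # where G is the model gradient around the nominal input trajectory
    # (for a KPCR model it could be obtained by finite differences).
    e_k = y_sp - y_k                      # end-point quality error of batch k
    du = np.linalg.solve(G.T @ Q @ G + R, G.T @ Q @ e_k)
    return u_k + du                       # input trajectory for batch k + 1

def moving_window_update(X_train, Y_train, x_new, y_new):
    # Batchwise data refresh: drop the earliest batch, append the latest,
    # then refit the quality model on the updated window.
    X_train = np.vstack([X_train[1:], x_new[None, :]])
    Y_train = np.vstack([Y_train[1:], y_new[None, :]])
    return X_train, Y_train

# Toy usage with a 3-dimensional input trajectory and 2 quality targets.
G = np.array([[1.0, 0.5, 0.0], [0.2, 1.0, 0.3]])
u1 = ilc_update(np.zeros(3), np.array([0.2, -0.1]), np.array([1.0, 0.5]),
                G, np.eye(2), 0.1 * np.eye(3))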
ISBN (Print): 9798400700569
We present the design and implementation of a kernel principal component regression software package that handles training datasets with a million or more observations. Kernel regressions are nonlinear and interpretable models with wide downstream applications, and have been shown to have a close connection to deep learning. Nevertheless, exact regression of large-scale kernel models using currently available software has been notoriously difficult because it is both compute- and memory-intensive and requires extensive hyperparameter tuning. While distributed computing and iterative methods have been a mainstay of large-scale software in computational science, they have not been widely adopted in kernel learning. Our software leverages existing high performance computing (HPC) techniques and develops new ones that address cross-cutting constraints between HPC and learning algorithms. It integrates three major components: (a) a state-of-the-art parallel iterative eigenvalue solver, (b) a block matrix-vector multiplication routine that employs both multi-threading and distributed-memory parallelism and can be performed on-the-fly under limited memory, and (c) a software pipeline consisting of Python front-ends that control the HPC backbone and the hyperparameter optimization through a boosting optimizer. We perform feasibility studies by running the entire ImageNet dataset and a large asset pricing dataset.
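The abstract does not disclose the implementation, but the core idea of pairing an iterative eigensolver with on-the-fly block kernel matrix-vector products can be sketched in a few lines of Python. The snippet below uses scipy's Lanczos-type eigsh on a LinearOperator whose matvec forms Gaussian-kernel blocks on demand, then regresses the targets on the kernel principal component scores; kernel centring, distributed-memory parallelism, and the boosting-based hyperparameter optimizer are deliberately omitted, and the kernel, block size, and component count are placeholder choices.

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def kernel_matvec_factory(X, gamma=0.1, block=2048):
    # Return a function computing K @ v in row blocks so the full n x n
    # Gaussian kernel matrix is never materialised.
    n = X.shape[0]
    sq = (X ** 2).sum(axis=1)

    def matvec(v):
        v = np.asarray(v, dtype=np.float64).ravel()
        out = np.empty(n)
        for start in range(0, n, block):
            stop = min(start + block, n)
            # Squared distances for this block of rows.
            d2 = sq[start:stop, None] + sq[None, :] - 2.0 * X[start:stop] @ X.T
            out[start:stop] = np.exp(-gamma * d2) @ v
        return out

    return matvec

def kernel_pcr_fit(X, y, n_components=64, gamma=0.1):
    # Leading eigenpairs of K via a Lanczos-type iterative solver, then a
    # least-squares fit of y on the kernel PC scores (centring omitted).
    n = X.shape[0]
    K_op = LinearOperator((n, n), matvec=kernel_matvec_factory(X, gamma),
                          dtype=np.float64)
    w, V = eigsh(K_op, k=n_components, which='LA')
    scores = V * np.sqrt(np.maximum(w, 1e-12))
    beta, *_ = np.linalg.lstsq(scores, y, rcond=None)
    return w, V, beta

# Toy usage (the real software targets millions of observations).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=2000)
w, V, beta = kernel_pcr_fit(X, y, n_components=16)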
Kernel principal component regression (KPCR) was studied by Rosipal et al. [18, 19, 20], Hoegaerts et al. [7], and Jade et al. [8]. However, KPCR still encounters theoretical difficulties in the procedure for construc...
Kernel function-based regression models were constructed and applied to a nonlinear hydrochemical dataset on surface water to predict dissolved oxygen levels. Initial features were selected using a nonlinear approach. Nonlinearity in the data was tested using the BDS statistic, which revealed a nonlinear structure in the data. Kernel ridge regression, kernel principal component regression, kernel partial least squares regression, and support vector regression models were developed using the Gaussian kernel function, and their generalization and predictive abilities were compared in terms of several statistical parameters. Model parameters were optimized using a cross-validation procedure. The proposed kernel regression methods successfully captured the nonlinear features of the original data by transforming it to a high-dimensional feature space using the kernel function. The performance of all the kernel-based modeling methods used here was comparable in terms of both predictive and generalization abilities. Values of the performance criteria indicated that the constructed models adequately fit the nonlinear data and have good predictive capabilities.
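A minimal way to reproduce this kind of comparison with off-the-shelf tools is sketched below using scikit-learn: KPCR as a KernelPCA-plus-least-squares pipeline, kernel ridge regression, and support vector regression, each with a Gaussian kernel and cross-validated hyperparameters (kernel PLS is not available in scikit-learn and is omitted). The data, parameter grids, and scoring metric here are illustrative placeholders, not the study's actual dataset or settings.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

# Placeholder data standing in for the hydrochemical features (X) and the
# dissolved oxygen response (y).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=200)

models = {
    # KPCR: Gaussian-kernel PCA followed by ordinary least squares.
    "KPCR": GridSearchCV(
        make_pipeline(KernelPCA(kernel="rbf"), LinearRegression()),
        {"kernelpca__gamma": [0.01, 0.1, 1.0],
         "kernelpca__n_components": [5, 10, 20]},
        cv=5, scoring="neg_root_mean_squared_error"),
    # Kernel ridge regression with a Gaussian kernel.
    "KRR": GridSearchCV(
        KernelRidge(kernel="rbf"),
        {"alpha": [1e-3, 1e-1, 1.0], "gamma": [0.01, 0.1, 1.0]},
        cv=5, scoring="neg_root_mean_squared_error"),
    # Support vector regression with a Gaussian kernel.
    "SVR": GridSearchCV(
        SVR(kernel="rbf"),
        {"C": [1.0, 10.0, 100.0], "gamma": [0.01, 0.1, 1.0]},
        cv=5, scoring="neg_root_mean_squared_error"),
}

for name, search in models.items():
    search.fit(X, y)
    print(name, round(-search.best_score_, 3), search.best_params_)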
The Gaussian mixture model (GMM) method is popular and efficient for voice conversion (VC), but it is often subject to overfitting. In this paper, the principal component regression (PCR) method is adopted for spectral mapping between source speech and target speech, and the number of principal components is adjusted properly to prevent overfitting. Then, in order to better model the nonlinear relationships between source and target speech, the kernel principal component regression (KPCR) method is also proposed. Moreover, a method combining KPCR with GMM is further proposed to improve the conversion accuracy. In addition, the discontinuity and oversmoothing problems of the traditional GMM method are also addressed. On the one hand, to solve the discontinuity problem, an adaptive median filter is adopted to smooth the posterior probabilities. On the other hand, the two mixture components with the highest posterior probabilities in each frame are chosen for VC to reduce the oversmoothing problem. Finally, objective and subjective experiments are carried out, and the results demonstrate that the proposed approach performs considerably better than the GMM method. In the objective tests, the proposed method shows lower cepstral distances and higher identification rates than the GMM method, while in the subjective tests it obtains higher preference and perceptual quality scores.
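The two post-processing ideas in this abstract, smoothing the GMM posterior probabilities over time and keeping only the two most probable mixture components per frame, can be sketched as below. A plain (non-adaptive) median filter stands in for the adaptive filter used in the paper, and the window length and top-k value are illustrative assumptions.

import numpy as np
from scipy.signal import medfilt

def smooth_and_prune_posteriors(post, kernel_size=5, top_k=2):
    # post: (n_frames, n_mixtures) GMM posterior probabilities per frame.
    # 1) Median-filter each mixture's posterior track along time to reduce
    #    discontinuities (a plain, non-adaptive filter is used here).
    smoothed = np.column_stack(
        [medfilt(post[:, m], kernel_size) for m in range(post.shape[1])])
    # 2) Keep only the top-k components per frame and renormalise, which
    #    limits the averaging that causes oversmoothing.
    order = np.argsort(smoothed, axis=1)
    pruned = smoothed.copy()
    rows = np.arange(post.shape[0])[:, None]
    pruned[rows, order[:, :-top_k]] = 0.0
    pruned /= pruned.sum(axis=1, keepdims=True) + 1e-12
    return pruned

# Toy usage: random posteriors for 100 frames and 8 mixture components.
rng = np.random.default_rng(0)
raw = rng.random((100, 8))
post = raw / raw.sum(axis=1, keepdims=True)
post_vc = smooth_and_prune_posteriors(post)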
ISBN (Print): 0769525040
Based on the kernel principal component regression (KPCR) recently proposed in the literature, a new kernel auto-associator (KAA) model is proposed for classification and novelty detection. For the face recognition problem, the KAA model can efficiently characterize each subject and thus offers good recognition performance. Stemming from principal component regression (PCR), a simple technique that uses principal components as a subset selection method in regression, KPCR selects a subset of the principal components from the kernel space for the response variables to regress on. As an extension of KPCR, a kernel auto-associator model can be built from autoregression by first extracting features in the kernel space and then performing ordinary least squares reconstruction from the selected features. To demonstrate the performance of the proposed KAA model, face recognition is studied as a benchmark example with a modular scheme consisting of two stages. The first stage is preprocessing by a multilevel two-dimensional (2D) discrete wavelet transform. The second stage is a subject-specific KAA structure, meaning that each subject is assigned a KAA model for coding the corresponding visual information. Experiments on several well-known face datasets demonstrated high recognition accuracies using just a few of the kernel principal components.
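The auto-associative construction described above, extracting features in the kernel space and reconstructing the input by ordinary least squares, can be sketched with scikit-learn as follows. The component count, kernel width, and the use of reconstruction error as the matching score are assumptions for illustration; the wavelet preprocessing stage and the subject-specific model bank of the paper are only indicated in a comment.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LinearRegression

class KernelAutoAssociator:
    # Project inputs onto a few kernel principal components, then map those
    # features back to the input space with ordinary least squares; a small
    # reconstruction error means the sample resembles the training subject.

    def __init__(self, n_components=10, gamma=0.01):
        self.kpca = KernelPCA(n_components=n_components, kernel="rbf",
                              gamma=gamma)
        self.reg = LinearRegression()

    def fit(self, X):
        Z = self.kpca.fit_transform(X)        # features in the kernel space
        self.reg.fit(Z, X)                    # OLS reconstruction map
        return self

    def reconstruction_error(self, X):
        X_hat = self.reg.predict(self.kpca.transform(X))
        return np.linalg.norm(X - X_hat, axis=1)

# Toy usage: one model per subject; a probe would be assigned to the subject
# whose KAA gives the smallest reconstruction error (wavelet preprocessing
# of the face images is omitted here).
rng = np.random.default_rng(0)
subject_images = rng.normal(size=(20, 64))    # placeholder feature vectors
kaa = KernelAutoAssociator().fit(subject_images)
errors = kaa.reconstruction_error(subject_images)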