Solving a kernel regression problem usually suffers from expensive computation and storage costs due to the large kernel size. To tackle this problem, the Nyström method has been proposed and is widely applied to large-scale kernel methods as an approximate solution. The key idea of this method is to select a subset of columns of the kernel matrix and rebuild a low-rank approximation to the dense kernel matrix. To reduce the computational cost of sparse kernel regression, we take advantage of the Nyström approximation and present two non-uniform Nyström methods with theoretical guarantees for sparse kernel regression in this paper. In detail, we first provide an upper bound on the solution of sparse kernel regression via Nyström approximation. Based on this bound, we prove upper bounds on the optimal solutions when adopting two notable non-uniform landmark selection strategies: Determinantal Point Processes (DPPs) and Ridge Leverage Scores (RLS). Compared with the uniform Nyström method, we empirically demonstrate the superior performance of non-uniform Nyström methods for sparse kernel regression on a synthetic dataset and several real-world datasets. (c) 2022 Elsevier B.V. All rights reserved.
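To make the mechanism concrete, the following is a minimal sketch of the Nyström approximation with uniform landmark sampling; the paper's non-uniform strategies (DPP or ridge-leverage-score sampling) would replace the sampling step. The RBF kernel choice, function names, and parameters below are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the Nystrom approximation for an RBF kernel matrix.
# Landmark selection here is uniform sampling; DPP or RLS sampling would
# replace the np.random.choice step. All names are illustrative.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Pairwise RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom(X, m, gamma=1.0, rng=None):
    """Rank-m Nystrom approximation K ~= C @ pinv(W) @ C.T.

    C holds the m sampled columns of K and W the corresponding m x m
    intersection block, so only n*m + m*m kernel entries are ever
    formed instead of the full n x n matrix.
    """
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=m, replace=False)  # uniform landmarks
    C = rbf_kernel(X, X[idx], gamma)                 # n x m columns of K
    W = C[idx]                                       # m x m landmark block
    return C, np.linalg.pinv(W)

X = np.random.default_rng(0).normal(size=(1000, 5))
C, W_pinv = nystrom(X, m=50)
K_approx = C @ W_pinv @ C.T   # low-rank surrogate for the dense kernel
```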
This paper integrates L1-norm structural risk minimization with L1-norm approximation error to develop a new optimization framework for solving the parameters of sparse kernel regression models, addressing the challenges posed by complex model structures, over-fitting, and limited modeling accuracy in traditional nonlinear system modeling. The first L1-norm regulates the complexity of the model structure to maintain its sparsity, while the other L1-norm ensures modeling accuracy. In the optimization of support vector regression (SVR), the L2-norm structural risk is converted to an L1-norm framework through the condition of non-negative Lagrange multipliers. Furthermore, the L1-norm optimization for modeling accuracy is attained by minimizing the maximum approximation error. Combining the L1-norms of the structural risk and the approximation error yields a new, simplified optimization problem that can be solved using linear programming (LP) instead of the more complex quadratic programming (QP). The proposed sparse kernel regression model has the following notable features: (1) it is solved through relatively simple LP; (2) it effectively balances the trade-off between model complexity and modeling accuracy; and (3) the solution is globally optimal rather than merely locally optimal. In our three experiments, the sparsity metrics (SVs%) were 2.67%, 1.40%, and 0.8%, with test RMSE values of 0.0667, 0.0701, 0.0614 (sinusoidal signal), and 0.0431 (step signal), respectively. This demonstrates the balance between sparsity and modeling accuracy.
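As a rough illustration of how such an objective reduces to a linear program, the sketch below fits kernel coefficients by minimizing the L1 norm of the weights plus a penalty on the worst-case residual, using the standard positive/negative variable split. The exact objective, constraints, and trade-off weight C in the paper may differ; everything here is an assumption for illustration.

```python
# A hedged sketch: L1-regularized kernel fit with a minimax error term,
# cast as an LP in the spirit of the framework described above.
import numpy as np
from scipy.optimize import linprog

def l1_minimax_kernel_fit(K, y, C=10.0):
    """Solve min_{a,t} sum|a_i| + C*t  s.t.  |y - K a| <= t elementwise.

    Split a = p - q with p, q >= 0 so that sum(p + q) = ||a||_1,
    giving a standard-form LP over the variable z = [p; q; t].
    """
    n = K.shape[1]
    c = np.concatenate([np.ones(2 * n), [C]])        # objective coefficients
    # Residual constraints: K(p - q) - t <= y and -K(p - q) - t <= -y
    A_ub = np.block([[ K, -K, -np.ones((len(y), 1))],
                     [-K,  K, -np.ones((len(y), 1))]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * n + 1), method="highs")
    p, q = res.x[:n], res.x[n:2 * n]
    return p - q                                     # sparse coefficient vector
```

The L1 objective drives most coefficients exactly to zero, which is where the sparsity (the low SVs% figures above) comes from.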
The application of a robust learning technique is inevitable in the development of a self-cleansing sediment transport model. This study addresses this problem and advocates the use of the sparse kernel regression (SKR) technique to design a self-cleansing model. The SKR approach is a regression technique operating in the kernel space that also benefits from the desirable properties of a sparse solution. In order to develop a model applicable to a wide range of channel characteristics, five different experimental data sets from 14 different channels are utilized in this study. In this context, the efficacy of the SKR model is compared against the support vector regression (SVR) approach along with several other methods from the literature. According to the statistical analysis results, the SKR method outperforms the SVR and the other regression equations. In particular, while empirical regression models fail to generate accurate results for other channel cross-section shapes and sizes, the SKR model provides promising results due to the inclusion of a channel parameter at the core of its structure and its use of an extensive range of experimental data. The superior efficacy of the SKR approach is also linked to its formulation in the kernel space and its sparse representation, which selects the most useful training samples for model construction. As such, it circumvents the need to evaluate irrelevant or noisy observations during the test phase of the model, thus improving the test-phase running time.
A novel significant vector (SV) regression algorithm is proposed in this paper based on an analysis of Chen's orthogonal least squares (OLS) regression algorithm. The proposed regularized SV algorithm finds the significant vectors in a successive greedy process in which, compared to the classical OLS algorithm, the orthogonalization has been removed. The performance of the proposed algorithm is comparable to that of the OLS algorithm, while it avoids the substantial computational cost of the orthogonalization required by the OLS algorithm. (C) 2007 Elsevier Ltd. All rights reserved.
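The following sketch illustrates the general shape of such a regularized greedy forward selection over kernel regressors without an orthogonalization step: at each round the column most correlated with the current residual is added, and the coefficients are refit by ridge-regularized least squares on the selected columns only. The selection criterion and regularization here are assumptions for illustration, not the paper's exact algorithm.

```python
# A minimal sketch of regularized greedy forward selection of kernel
# regressors, with no Gram-Schmidt orthogonalization of the design.
import numpy as np

def greedy_sv_regression(K, y, n_vectors=10, lam=1e-3):
    selected, residual = [], y.copy()
    for _ in range(n_vectors):
        scores = np.abs(K.T @ residual)          # correlation with residual
        scores[selected] = -np.inf               # skip already-chosen columns
        selected.append(int(np.argmax(scores)))
        Ks = K[:, selected]                      # selected columns, unorthogonalized
        # Ridge-regularized least-squares refit on the selected columns
        w = np.linalg.solve(Ks.T @ Ks + lam * np.eye(len(selected)), Ks.T @ y)
        residual = y - Ks @ w
    return selected, w
```

Skipping the orthogonalization trades the OLS algorithm's incremental error bookkeeping for a small dense solve per step, which is cheap when the number of selected vectors stays small.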
An orthogonal least squares technique for basis hunting (OLS-BH) is proposed to construct sparse radial basis function (RBF) models for NARX-type nonlinear systems. Unlike most existing RBF or kernel modelling methods, which place the RBF or kernel centers at the training input data points and use a fixed common variance for all the regressors, the proposed OLS-BH technique tunes the RBF center and diagonal covariance matrix of each individual regressor by minimizing the training mean square error. An efficient optimization method is adopted for this basis hunting to select regressors in an orthogonal forward selection procedure. Experimental results obtained using this OLS-BH technique demonstrate that it offers a state-of-the-art method for constructing parsimonious RBF models with excellent generalization performance.
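A simplified sketch of the basis-hunting idea follows: each candidate RBF's center and width are tuned to best fit the current residual before the basis is appended and all weights are refit. The paper tunes a full diagonal covariance within an orthogonal forward procedure; this version uses a single scalar width per basis, a generic Nelder-Mead search, and a plain least-squares refit, and all names below are illustrative assumptions.

```python
# A hedged sketch of basis hunting: tune each new RBF's center and width
# against the current residual, then refit all weights and repeat.
import numpy as np
from scipy.optimize import minimize

def fit_rbf_basis_hunting(X, y, n_basis=5):
    centers, widths, residual = [], [], y.copy()
    for _ in range(n_basis):
        def neg_fit(theta):
            c, log_w = theta[:-1], theta[-1]
            phi = np.exp(-np.exp(log_w) * ((X - c) ** 2).sum(1))
            # Best least-squares weight for this single candidate basis
            g = phi @ residual / max(phi @ phi, 1e-12)
            return ((residual - g * phi) ** 2).mean()
        x0 = np.append(X[np.argmax(np.abs(residual))], 0.0)  # seed at worst point
        theta = minimize(neg_fit, x0, method="Nelder-Mead").x
        centers.append(theta[:-1]); widths.append(np.exp(theta[-1]))
        Phi = np.stack([np.exp(-w * ((X - c) ** 2).sum(1))
                        for c, w in zip(centers, widths)], axis=1)
        coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        residual = y - Phi @ coef
    return centers, widths, coef
```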