In this paper, an accelerated proximal gradient based forgetting factor recursive least squares (APG-FFRLS) algorithm is proposed for state of charge (SOC) estimation in the presence of output outliers. First, a second-order resistance-capacitance (RC) equivalent circuit model is built to reflect the operating characteristics of the battery. Then, the APG method is applied to correct the output outliers. The FFRLS and extended Kalman filtering (EKF) are used to estimate the battery model parameters and SOC interactively. To verify the effectiveness of the proposed algorithm, this paper models a Samsung lithium battery and compares the accuracy of different algorithms in estimating SOC. The experimental results show that the proposed APG-FFRLS-EKF algorithm achieves higher accuracy.
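As an illustration of the parameter identification step, the following is a minimal sketch of one forgetting-factor RLS update for a generic linear-in-parameters model; it is not taken from the paper, and the regressor construction for the second-order RC model, the variable names, and the default forgetting factor are assumptions.

```python
import numpy as np

def ffrls_step(theta, P, phi, y, lam=0.98):
    """One forgetting-factor recursive least squares (FFRLS) update.

    theta : current parameter estimate, shape (n,)
    P     : current covariance matrix, shape (n, n)
    phi   : regressor vector for the current sample, shape (n,)
    y     : measured output (scalar)
    lam   : forgetting factor (assumed value; typically 0.95-1.0)
    """
    e = y - phi @ theta                    # prediction error (innovation)
    denom = lam + phi @ P @ phi            # scalar normalisation term
    K = (P @ phi) / denom                  # gain vector, shape (n,)
    theta = theta + K * e                  # parameter update
    P = (P - np.outer(K, phi) @ P) / lam   # covariance update with forgetting
    return theta, P
```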
In this paper, an accelerated proximal gradient algorithm is proposed for Hankel tensor completion problems. In our method, the iterative completion tensors generated by the new algorithm keep the Hankel structure, based on projection onto the Hankel tensor set. Moreover, due to the special properties of the Hankel structure, using the fast singular value thresholding operator of the mode-s unfolding of a Hankel tensor can decrease the computational cost. The convergence of the new algorithm is discussed under some reasonable conditions. Finally, numerical experiments show the effectiveness of the proposed algorithm.
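The two operators the abstract highlights, projection onto the Hankel structure and singular value thresholding, can be sketched as follows for the matrix (order-2) analogue; this is an illustrative sketch rather than the paper's tensor implementation, and the function names are assumptions.

```python
import numpy as np

def project_hankel(M):
    """Project a matrix onto the set of Hankel matrices (in Frobenius norm)
    by averaging each anti-diagonal, i.e. all entries with the same i + j."""
    m, n = M.shape
    H = np.empty_like(M, dtype=float)
    for k in range(m + n - 1):
        rows = range(max(0, k - n + 1), min(m, k + 1))
        avg = np.mean([M[i, k - i] for i in rows])
        for i in rows:
            H[i, k - i] = avg
    return H

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```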
Precision medicine is an important area of research with the goal of identifying the optimal treatment for each individual patient. In the literature, various methods have been proposed to divide the population into subgroups according to the heterogeneous effects of individuals. In this article, a new exploratory machine learning tool, named latent supervised clustering, is proposed to identify heterogeneous subpopulations. In particular, we formulate the problem as a regression problem with subject-specific coefficients and use adaptive fusion to cluster the coefficients into subpopulations. This method has two main advantages. First, it relies on little prior knowledge and weak parametric assumptions on the underlying subpopulation structure. Second, it makes use of the outcome-predictor relationship and hence can have competitive estimation and prediction accuracy. To estimate the parameters, we design a highly efficient accelerated proximal gradient algorithm which guarantees convergence at a competitive rate. Numerical studies show that the proposed method has competitive estimation and prediction accuracy and can also produce interpretable clustering results for the underlying heterogeneous subpopulations. Supplementary materials for this article are available online.
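The accelerated proximal gradient solver mentioned above follows, at a high level, the standard FISTA template; the sketch below shows that generic template (smooth gradient step, proximal step, Nesterov extrapolation) rather than the paper's specialized algorithm for the adaptive fusion penalty, and the fixed step size and function names are assumptions.

```python
import numpy as np

def apg(grad_f, prox_g, x0, step, iters=500):
    """Generic accelerated proximal gradient (FISTA-style) method for
    minimizing f(x) + g(x): grad_f(x) returns the gradient of the smooth
    part, and prox_g(v, step) is the proximal operator of step * g."""
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_new = prox_g(y - step * grad_f(y), step)          # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0    # momentum parameter
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)        # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```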
One-class support vector machine (OCSVM) is an important tool in machine learning and has been extensively used for one-class classification problems. The traditional OCSVM solves the primal problem through its dual, which is a quadratic programming problem. However, the computational complexity of the quadratic programming is cubic and the storage complexity is quadratic in the problem scale, so it is inefficient for training on large-scale problems. In this paper, we propose to train OCSVM in the primal space directly. Unfortunately, owing to the non-differentiability of the hinge loss used in OCSVM, the OCSVM cannot be solved by gradient-based optimization methods, which are fast-converging first-order methods. On the other hand, the hinge loss is unbounded, which makes the OCSVM less robust to outliers: outliers can make the decision boundary severely deviate from the optimal hyperplane. To overcome these drawbacks, a huberized truncated loss function, which is a nonconvex differentiable function, is proposed to improve the robustness of the OCSVM. The huberized truncated loss function is insensitive to outliers and serves as a substitute for the hinge loss in the traditional OCSVM. In contrast to the traditional OCSVM, the primal objective function of the robust OCSVM is differentiable. Considering the non-convexity of the optimization problem, we employ an accelerated proximal gradient algorithm to solve the robust OCSVM in the primal space. Numerical experiments on benchmark datasets and handwritten digit datasets show that the proposed method not only improves the robustness of the OCSVM, but also reduces the computational complexity.
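The paper's exact loss is not reproduced here; one common way to build a bounded, differentiable surrogate for the hinge loss is to take the difference of two Huber-smoothed hinges, sketched below purely as an illustration (the parameters delta and s, and this specific construction, are assumptions rather than the paper's definition).

```python
import numpy as np

def huber_plus(u, delta=0.5):
    """Huber-smoothed plus function: a differentiable surrogate for max(0, u)."""
    return np.where(u <= 0, 0.0,
           np.where(u <= delta, u**2 / (2.0 * delta), u - delta / 2.0))

def huberized_truncated_loss(u, delta=0.5, s=2.0):
    """A bounded, differentiable, nonconvex loss built as the difference of
    two smoothed hinges: it grows roughly linearly near zero and saturates
    at s for large u, so far-away outliers add only a constant penalty."""
    return huber_plus(u, delta) - huber_plus(u - s, delta)
```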
The support vector machine (SVM) is an increasingly important tool in machine learning. Despite its popularity, the SVM classifier can be adversely affected by the presence of noise in the training dataset. The SVM can be fit in the regularization framework of Loss + Penalty. The loss function plays an essential role: it is used to keep the fidelity of the resulting model to the data. Most SVMs use convex losses; however, they often suffer from the negative impact of points far away from their own classes. This paper proposes a new nonconvex differentiable loss, namely the huberized truncated pinball loss, which is able to reduce the effects of noise in the training sample. The SVM classifier with the huberized truncated pinball loss (HTPSVM) is proposed. The HTPSVM combines the elastic net penalty and the nonconvex huberized truncated pinball loss, and it inherits the benefits of both the ℓ1 and ℓ2 norm regularizers. Since the HTPSVM involves nonconvex minimization, the accelerated proximal gradient (APG) algorithm is used to solve the corresponding optimization problem. To evaluate the performance of classifiers, classification accuracy and area under the ROC curve (AUC) are employed as accuracy indicators. The numerical results show that the new classifier is effective. Friedman and Nemenyi post hoc tests of the experimental results indicate that the proposed HTPSVM is more robust to noise than HSVM, PSVM and HHSVM.
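An APG solver for an elastic-net-penalized objective needs the proximal operator of the elastic net penalty, which has the standard closed form below (soft-thresholding followed by a uniform shrinkage); the variable names are illustrative and this is not code from the paper.

```python
import numpy as np

def prox_elastic_net(v, lam1, lam2):
    """Proximal operator of lam1*||w||_1 + (lam2/2)*||w||_2^2 evaluated at v:
    soft-threshold each coordinate by lam1, then shrink by 1/(1 + lam2)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam1, 0.0) / (1.0 + lam2)
```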
Heavy-tailed noise or strongly correlated predictors often accompany the multivariate linear regression model. To tackle these problems, this paper focuses on the matrix elastic net regularized multivariate Huber regression model. This new model possesses the grouping effect property and robustness to heavy-tailed noise. Meanwhile, it also reduces the negative effect of outliers thanks to the Huber loss. Furthermore, an accelerated proximal gradient algorithm is designed to solve the proposed model. Numerical studies, including a real data analysis, demonstrate the efficiency of our method.
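For reference, the smooth part that such an APG solver differentiates is the Huber loss; a minimal elementwise sketch is given below, with the classical tuning constant 1.345 used as an assumed default rather than the paper's choice.

```python
import numpy as np

def huber_loss(r, delta=1.345):
    """Elementwise Huber loss: quadratic for |r| <= delta, linear beyond,
    so very large residuals are penalized only linearly."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def huber_grad(r, delta=1.345):
    """Derivative of the Huber loss: the residual clipped to [-delta, delta]."""
    return np.clip(r, -delta, delta)
```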
With the appearance of the approach named "robust alignment by sparse and low-rank decomposition" (RASL), a number of linearly correlated images can be accurately and robustly aligned despite significant corruptions and occlusions. It has been discovered that this alignment task can be characterized as a sequence of 3-block convex minimization problems which can be solved efficiently by the accelerated proximal gradient method (APG), or alternatively, by the directly extended alternating direction method of multipliers (ADMM). However, the directly extended ADMM may diverge, although it often performs well in numerical computations. Ideally, one should find an algorithm which has both a theoretical guarantee and superior numerical efficiency over the directly extended ADMM. We achieve this goal by using the symmetric Gauss-Seidel iteration based ADMM (sGS-ADMM), which only needs to update one of the variables twice, but, surprisingly, this is enough to guarantee the desired convergence. The convergence of sGS-ADMM follows directly by relating it to the classical 2-block ADMM with a couple of specially designed semi-proximal terms. Beyond this, we also add a rank correction term to the model with the purpose of deriving alignment results with higher accuracy. Numerical experiments over a wide range of realistic misalignments demonstrate that sGS-ADMM is at least two times faster than RASL and APG for the vast majority of the tested problems.
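As background for the sparse and low-rank decomposition underlying RASL, the sketch below shows the classic 2-block ADMM for min ||L||_* + lam*||S||_1 subject to L + S = D; it deliberately omits the alignment transformations, the rank correction term, and the symmetric Gauss-Seidel sweep of the proposed sGS-ADMM, and the parameter defaults are assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca_admm(D, lam=None, mu=1.0, iters=200):
    """Classic 2-block ADMM for the sparse + low-rank decomposition
    min ||L||_* + lam*||S||_1  s.t.  L + S = D."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))       # common default weighting
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                     # scaled dual variable
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)    # low-rank block update
        S = soft(D - L + Y / mu, lam / mu)   # sparse block update
        Y = Y + mu * (D - L - S)             # multiplier update
    return L, S
```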
In this paper, we mainly focus on the penalized maximum likelihood estimation of the high-dimensional approximate factor model. Since the current estimation procedure cannot guarantee the positive definiteness of the error covariance matrix, we reformulate the estimation of the error covariance matrix and, based on Lagrangian duality, propose an accelerated proximal gradient (APG) algorithm that gives a positive definite estimate of the error covariance matrix. Combining the APG algorithm with the EM method, a new estimation procedure is proposed to estimate the high-dimensional approximate factor model. The new method not only gives a positive definite estimate of the error covariance matrix but also improves the efficiency of estimation for the high-dimensional approximate factor model. Although the proposed algorithm cannot guarantee a globally unique solution, it enjoys a desirable non-increasing property. The efficiency of the new algorithm for estimation and forecasting is also investigated via simulation and real data analysis.
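A standard device for keeping a covariance estimate positive definite, shown below only as an illustration, is to clip the eigenvalues of a symmetric matrix from below; the paper's APG step is derived from Lagrangian duality and may use a different proximal mapping, and the threshold eps here is an assumption.

```python
import numpy as np

def project_pd(S, eps=1e-6):
    """Project a symmetric matrix onto {X : X >= eps * I} by clipping its
    eigenvalues, yielding a positive definite approximation of S."""
    S = (S + S.T) / 2.0            # symmetrize against round-off error
    w, V = np.linalg.eigh(S)
    w = np.maximum(w, eps)         # floor the eigenvalues at eps
    return (V * w) @ V.T           # reassemble V diag(w) V^T
```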
The adaptive lasso is a method for performing simultaneous parameter estimation and variable selection. The adaptive weights used in its penalty term mean that the adaptive lasso achieves the oracle property. In this work, we propose an extension of the adaptive lasso named the Tukey-lasso. By using Tukey's biweight criterion instead of squared loss, the Tukey-lasso is resistant to outliers in both the response and covariates. Importantly, we demonstrate that the Tukey-lasso also enjoys the oracle property. A fast accelerated proximal gradient (APG) algorithm is proposed and implemented for computing the Tukey-lasso. Our extensive simulations show that the Tukey-lasso, implemented with the APG algorithm, achieves very reliable results, including for high-dimensional data where p > n. In the presence of outliers, the Tukey-lasso is shown to offer substantial improvements in performance compared to the adaptive lasso and other robust implementations of the lasso. Real-data examples further demonstrate the utility of the Tukey-lasso. Supplementary materials for this article are available online.
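For concreteness, Tukey's biweight criterion and its redescending derivative can be written as follows; this is a standard sketch, with the conventional tuning constant c = 4.685 assumed rather than taken from the paper.

```python
import numpy as np

def tukey_biweight_loss(r, c=4.685):
    """Tukey's biweight (bisquare) loss: bounded, so any single outlier can
    contribute at most c**2 / 6; c = 4.685 gives ~95% efficiency under
    Gaussian errors."""
    a = (r / c)**2
    return np.where(np.abs(r) <= c,
                    (c**2 / 6.0) * (1.0 - (1.0 - a)**3),
                    c**2 / 6.0)

def tukey_biweight_psi(r, c=4.685):
    """Derivative (psi function): redescends to zero for |r| > c, so gross
    outliers have no influence on the gradient."""
    return np.where(np.abs(r) <= c, r * (1.0 - (r / c)**2)**2, 0.0)
```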
In this study, a 2D adaptive beamforming algorithm for sparse arrays is proposed. First, a signal model based on matrix completion theory for adaptive beamforming with a sparse array is established, and it is proved to satisfy the null space property. Second, in order to enhance the performance of reconstructing the complete received signal matrix, a genetic algorithm is used to optimise the sparse sampling array. Third, the accelerated proximal gradient algorithm is adopted to reconstruct the complete received signal matrix. Finally, the adaptive beamforming weights, obtained from the reconstructed received signal matrix, are applied directly to form beam patterns. The proposed method improves the utilisation rate of the sparse array elements and reduces the computational complexity of interference suppression. Simulation results show the effectiveness of the method.
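A minimal sketch of accelerated proximal gradient (FISTA-style) nuclear-norm matrix completion is given below; it reconstructs a partially observed matrix from a sampling mask, which is the role the algorithm plays here, but the objective weighting, unit step size, and function names are assumptions and the paper's exact formulation may differ.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding (also valid for complex-valued snapshots)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def complete_matrix_apg(X_obs, mask, tau=1.0, iters=300):
    """Accelerated proximal gradient for nuclear-norm matrix completion:
    min_X 0.5*||mask*(X - X_obs)||_F^2 + tau*||X||_*, where mask is 1 on the
    sampled entries and 0 elsewhere (so the smooth gradient has Lipschitz
    constant 1 and a unit step size is safe)."""
    X = np.zeros_like(X_obs)
    Y = X.copy()
    t = 1.0
    for _ in range(iters):
        X_new = svt(Y - mask * (Y - X_obs), tau)             # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0     # momentum parameter
        Y = X_new + ((t - 1.0) / t_new) * (X_new - X)        # Nesterov extrapolation
        X, t = X_new, t_new
    return X
```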