Two different approaches based on the eigenvalues and singular values of the matrix representing the search direction in conjugate gradient algorithms are considered. Using a special approximation of the inverse Hessian of the objective function, which depends on a positive parameter, we obtain a search direction that satisfies both the sufficient descent condition and the Dai-Liao conjugacy condition. In the first approach the parameter in the search direction is determined by clustering the eigenvalues of the matrix defining it. The second approach determines the parameter by minimizing the condition number of the matrix representing the search direction; in this case the resulting conjugate gradient algorithm is exactly the three-term conjugate gradient algorithm proposed by Zhang, Zhou and Li. The global convergence of the algorithms is proved for uniformly convex functions. Intensive numerical experiments on 800 unconstrained optimization test problems show that the two approaches have similar numerical performance and that both algorithms are significantly more efficient and more robust than the CG-DESCENT algorithm of Hager and Zhang. By solving five applications from the MINPACK-2 test problem collection, each with 10^6 variables, we show that the suggested conjugate gradient algorithms are top performers versus CG-DESCENT.
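For orientation, the sketch below shows the generic Dai-Liao search direction that this abstract builds on; the positive parameter t is exactly the quantity the two approaches select, by eigenvalue clustering or by condition-number minimization. The function name, the safeguard, and the idea of passing t in from the caller are illustrative assumptions, not the authors' formulas.

```python
import numpy as np

def dai_liao_direction(g_new, d_prev, s_prev, y_prev, t):
    """Generic Dai-Liao direction:
        d_{k+1} = -g_{k+1} + beta_k d_k,
        beta_k  = (g_{k+1}^T y_k - t * g_{k+1}^T s_k) / (d_k^T y_k),
    with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k.
    The paper chooses the positive parameter t by clustering the eigenvalues
    (or minimizing the condition number) of the matrix behind this direction;
    here t is simply supplied by the caller."""
    dy = d_prev @ y_prev
    if abs(dy) < 1e-12:              # safeguard: fall back to steepest descent
        return -g_new
    beta = (g_new @ y_prev - t * (g_new @ s_prev)) / dy
    return -g_new + beta * d_prev
```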
An accelerated adaptive class of nonlinear conjugate gradient algorithms is suggested. The search direction in these algorithms is given by a symmetrization of the scaled Perry conjugate gradient direction (Perry, 1978), which depends on a positive parameter. The value of this parameter is determined by minimizing, in the Frobenius norm, the distance between the symmetrical scaled Perry conjugate gradient search direction matrix and the self-scaling memoryless BFGS update of Oren. Two variants of the parameter in the search direction are presented, namely those given by Oren and Luenberger (1973/74) and by Oren and Spedicato (1976). The corresponding algorithm, ACGSSV, is equipped with a well-known acceleration scheme for conjugate gradient algorithms. The global convergence of the algorithm is established both for uniformly convex and for general nonlinear functions under the exact or the Wolfe line search. Using a set of 800 unconstrained optimization test problems of different structure and complexity, we show that this selection of the scaling parameter in the self-scaling memoryless BFGS update leads to algorithms which substantially outperform the CG-DESCENT, SCALCG, and CONMIN conjugate gradient algorithms, being more efficient and more robust. However, the conjugate gradient algorithm ADCG, based on clustering the eigenvalues of the iteration matrix defined by the search direction, is more efficient and slightly more robust than our ACGSSV algorithm. By solving five applications from the MINPACK-2 test problem collection with 10^6 variables, we show that the adaptive Perry conjugate gradient algorithm based on the self-scaling memoryless BFGS update, endowed with the acceleration scheme, is a top performer versus CG-DESCENT.
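As background for the ACGSSV direction, the snippet below applies the memoryless BFGS update of a scaled identity theta*I to the negative gradient in matrix-free form; theta plays the role of the self-scaling parameter that the abstract selects via the Oren-Luenberger and Oren-Spedicato rules. Which of the two classical scalings, s^T y / y^T y or s^T s / s^T y, carries which name is not restated here, and the code is an illustrative sketch rather than the ACGSSV algorithm itself.

```python
import numpy as np

def ssml_bfgs_direction(g_new, s, y, theta):
    """d = -H g, where H is the memoryless BFGS update of theta*I from the pair (s, y):
        H = theta*(I - (s y^T + y s^T)/(y^T s) + (y^T y) s s^T/(y^T s)^2)
            + s s^T/(y^T s).
    theta > 0 is the self-scaling parameter; common choices in the literature
    are s^T y / y^T y and s^T s / s^T y."""
    ys = y @ s                       # assumed positive (Wolfe line search)
    gy, gs = g_new @ y, g_new @ s
    d = -theta * g_new
    d += theta * (gy * s + gs * y) / ys
    d -= (theta * (y @ y) / ys + 1.0) * (gs / ys) * s
    return d
```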
The existing literature predominantly concentrates on the use of the gradient descent algorithm for the design of control systems for stability enhancement in power systems. In this paper, various flavors of the conjugate gradient (CG) algorithm are employed to design an online neuro-fuzzy linearization-based adaptive control strategy for Line Commutated Converter (LCC) High Voltage Direct Current (HVDC) links embedded in a multi-machine test power system. The conjugate gradient algorithms are evaluated based on the damping of electro-mechanical oscillatory modes using MATLAB/Simulink. The results confirm that all of the conjugate gradient algorithms outperform the gradient descent optimization scheme as well as other conventional and non-conventional control schemes.
The discretization of the Stokes equations with the mini-element yields a linear system of equations whose system matrix is symmetric and indefinite. It has two symmetric blocks, one of which is positive definite and the other negative definite. A change of sign in the second block destroys the symmetry, but the resulting matrix is coercive. We discuss different conjugate gradient-like algorithms for these two systems and compare their storage and work requirements. Numerical results for several test examples are given.
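To make the two formulations concrete, the toy example below builds a small saddle-point system with the same block signs as the mini-element matrix, solves the symmetric indefinite form with MINRES and the sign-flipped, nonsymmetric but coercive form with GMRES, and checks the residuals. The random blocks are stand-ins, not an actual Stokes discretization.

```python
import numpy as np
from scipy.sparse.linalg import minres, gmres

rng = np.random.default_rng(0)
n, m = 40, 15                                    # toy block sizes

# Stand-ins for the mini-element blocks: A and C symmetric positive definite.
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
C = rng.standard_normal((m, m)); C = C @ C.T + m * np.eye(m)
B = rng.standard_normal((m, n))
f, g = rng.standard_normal(n), rng.standard_normal(m)

# Symmetric indefinite form [[A, B^T], [B, -C]]: handled by MINRES.
K_sym = np.block([[A, B.T], [B, -C]])
b_sym = np.concatenate([f, g])
x_sym, _ = minres(K_sym, b_sym)

# Sign change in the second block row: nonsymmetric but coercive, handled by GMRES.
K_pos = np.block([[A, B.T], [-B, C]])
b_pos = np.concatenate([f, -g])
x_pos, _ = gmres(K_pos, b_pos, restart=n + m)

for name, K, b, x in [("MINRES, symmetric indefinite ", K_sym, b_sym, x_sym),
                      ("GMRES,  sign-flipped coercive", K_pos, b_pos, x_pos)]:
    print(name, np.linalg.norm(K @ x - b) / np.linalg.norm(b))
```

Both formulations have the same solution, since flipping the sign of the second block row only negates the corresponding right-hand side block.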
Conjugate gradient optimization algorithms depend on the search directions s_1 = -g_1, s_{k+1} = -g_{k+1} + beta_k s_k for k >= 1, with different methods arising from different choices of the scalar beta_k. In this note, conditions are given on beta_k to ensure global convergence of the resulting algorithms.
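A minimal sketch of the scheme the note analyzes is given below: the only thing that changes between conjugate gradient variants is the rule producing beta_k. The Fletcher-Reeves rule, the Armijo backtracking, and the quadratic test problem are illustrative assumptions; the note itself concerns which conditions on beta_k guarantee global convergence.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule, max_iter=2000, tol=1e-6):
    """s_1 = -g_1, s_{k+1} = -g_{k+1} + beta_k s_k, with beta_k = beta_rule(...).
    The Armijo backtracking and the descent-restart safeguard are only an
    illustrative globalization, not part of the note."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    s = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ s >= 0:                     # safeguard: restart with steepest descent
            s = -g
        t, fx, slope = 1.0, f(x), g @ s
        while f(x + t * s) > fx + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5                       # Armijo backtracking
        x_new = x + t * s
        g_new = grad(x_new)
        s = -g_new + beta_rule(g_new, g, s) * s
        x, g = x_new, g_new
    return x

# Fletcher-Reeves: beta_k = ||g_{k+1}||^2 / ||g_k||^2, one classical choice.
fletcher_reeves = lambda g_new, g_old, s_old: (g_new @ g_new) / (g_old @ g_old)

# Strictly convex quadratic test problem: minimizer is A^{-1} b = [1, 0.5, 1/3].
A = np.diag([1.0, 4.0, 9.0])
b = np.array([1.0, 2.0, 3.0])
print(nonlinear_cg(lambda x: 0.5 * x @ A @ x - b @ x,
                   lambda x: A @ x - b,
                   np.zeros(3), fletcher_reeves))
```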
A supervised learning algorithm (Scaled Conjugate Gradient, SCG) is introduced. The performance of SCG is benchmarked against that of the standard back-propagation algorithm (BP) (Rumelhart, Hinton, & Williams, 1986), the conjugate gradient algorithm with line search (CGL) (Johansson, Dowla, & Goodman, 1990), and the one-step Broyden-Fletcher-Goldfarb-Shanno memoryless quasi-Newton algorithm (BFGS) (Battiti, 1990). SCG is fully automated, includes no critical user-dependent parameters, and avoids the time-consuming line search that CGL and BFGS perform in each iteration to determine an appropriate step size. Experiments show that SCG is considerably faster than BP, CGL, and BFGS.
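The core idea the abstract highlights, replacing the line search by a step size computed from a damped finite-difference curvature estimate, can be sketched as follows. This is a stripped-down illustration: Moller's full SCG also adapts the damping parameter from a comparison parameter and handles unsuccessful steps, all of which is omitted here, and the function name and default constants are assumptions.

```python
import numpy as np

def scg_like_step(w, p, grad, sigma0=1e-4, lam=1e-6):
    """One step along direction p in the spirit of Scaled Conjugate Gradient:
    the curvature p^T H p is estimated with a finite-difference Hessian-vector
    product and damped by lam*||p||^2, so no line search is needed."""
    g = grad(w)
    sigma = sigma0 / (np.linalg.norm(p) + 1e-16)
    Hp = (grad(w + sigma * p) - g) / sigma        # ~ Hessian @ p
    delta = p @ Hp + lam * (p @ p)                # damped curvature estimate
    if delta <= 0:                                # negative curvature: flip and damp
        delta = -(p @ Hp) + lam * (p @ p)
    alpha = -(p @ g) / delta                      # closed-form step size
    return w + alpha * p
```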
The Spectral Conjugate Gradient (SCG) methods are among the efficient variants of CG algorithms, obtained by combining the spectral gradient parameter with a CG parameter. The success of SCG methods relies on effective choices of the step size alpha_k and the search direction d_k. This paper presents an SCG method for unconstrained optimization models. The search directions generated by the new method possess the sufficient descent property without any restart condition and independently of the line search procedure used. The global convergence of the new method is proved under the weak Wolfe line search. Preliminary numerical results show that the method is efficient and promising, particularly for large-scale problems. The method is also applied to the robotic motion control problem and the portfolio selection problem.
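As a reference point, the generic spectral conjugate gradient template combines a spectral scaling of the gradient with a CG-style correction, d_{k+1} = -theta_{k+1} g_{k+1} + beta_k d_k. The Barzilai-Borwein spectral parameter and the Polak-Ribiere-Polyak beta used below are only standard placeholders; the paper's own theta_k and beta_k, which yield sufficient descent independently of the line search, are not reproduced here.

```python
import numpy as np

def spectral_cg_direction(g_new, g_old, d_prev, s_prev, y_prev):
    """Generic spectral CG direction d_{k+1} = -theta_{k+1} g_{k+1} + beta_k d_k
    with placeholder choices:
      theta = s^T s / s^T y             (Barzilai-Borwein spectral parameter)
      beta  = g_{k+1}^T y_k / ||g_k||^2 (Polak-Ribiere-Polyak parameter)."""
    sy = s_prev @ y_prev
    theta = (s_prev @ s_prev) / sy if sy > 1e-12 else 1.0
    beta = (g_new @ y_prev) / (g_old @ g_old)
    return -theta * g_new + beta * d_prev
```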
A new value for the parameter in the Dai and Liao conjugate gradient algorithm is presented. It is based on clustering the eigenvalues of the matrix which determines the search direction of this algorithm. This value of the parameter leads to a variant of the Dai and Liao algorithm which is more efficient and more robust than the variants of the same algorithm based on minimizing the condition number of the matrix associated with the search direction. Global convergence of this variant of the algorithm is briefly discussed.
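The eigenvalue structure being clustered can be illustrated numerically. One common way to write a Dai-Liao-type direction is d_{k+1} = -A(t) g_{k+1} with A(t) = I - s y^T/(y^T s) + t s s^T/(y^T s); then n-2 eigenvalues of A(t) equal 1 and the parameter t only moves the remaining two, which is what the clustering argument exploits. The toy (s, y) pair and the sample values of t below are assumptions; the specific value of t derived in the paper is not reproduced.

```python
import numpy as np

n = 8
rng = np.random.default_rng(1)
s = rng.standard_normal(n)
y = s + 0.3 * rng.standard_normal(n)     # toy pair with s^T y > 0
ys = y @ s

def direction_matrix(t):
    # A(t) such that d_{k+1} = -A(t) g_{k+1} for the Dai-Liao direction.
    return np.eye(n) - np.outer(s, y) / ys + t * np.outer(s, s) / ys

for t in (0.1, 1.0, 10.0):
    eig = np.sort_complex(np.linalg.eigvals(direction_matrix(t)))
    print(f"t = {t:5.1f}  eigenvalues ~ {np.round(eig, 3)}")   # n-2 of them stay at 1
```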
Based on the condition for equivalence between linearly constrained minimum-variance (LCMV) filters and their generalized sidelobe canceler (GSC) implementations, we derive a new constrained conjugate gradient (CCG) algorithm. We discuss the use of orthogonal and nonorthogonal blocking matrices for the GSC structure and how the choice of this matrix affects the relationship with the LCMV counterpart. The newly derived algorithm was tested in a computer experiment on adaptive multiuser detection and showed excellent results.
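For context, the snippet below shows the GSC structure the abstract refers to: a quiescent component that meets the LCMV constraints plus a blocking matrix whose columns are orthogonal to the constraint matrix, so that any adaptive weight vector leaves the constraints intact. The dimensions, the random constraint matrix, and the orthonormal (SVD-based) blocking matrix are illustrative assumptions; the CCG update of the adaptive weights derived in the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
m, k = 8, 2                               # filter length, number of constraints

C = rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))  # constraint matrix
f = np.array([1.0, 0.0])                                            # response vector

# Quiescent weights: C^H w_q = f.
w_q = C @ np.linalg.solve(C.conj().T @ C, f)

# Orthogonal blocking matrix: orthonormal basis of the null space of C^H, so C^H B = 0.
_, _, Vh = np.linalg.svd(C.conj().T)
B = Vh[k:].conj().T                       # shape (m, m - k)

# For any adaptive weight vector w_a, the GSC weights still satisfy the constraints.
w_a = rng.standard_normal(m - k) + 1j * rng.standard_normal(m - k)
w = w_q - B @ w_a
print(np.allclose(C.conj().T @ w, f))     # True: constraints are preserved
```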
A neural network approach is proposed for the automated classification of normal and abnormal EGG. Two learning algorithms, the quasi-Newton and the scaled conjugate gradient methods for multilayer feedforward neural networks (MFNN), are introduced and compared with the error back-propagation algorithm. The configurations of the MFNN are determined by experiment. The raw EGG data, its power spectral data, and its autoregressive moving average (ARMA) modelling parameters are used as inputs to the MFNN and compared with each other. Three indexes (percent correct, sum-squared error, and complexity per iteration) are used to evaluate the performance of each learning algorithm. The results show that the scaled conjugate gradient algorithm performs best, in that it is robust and provides a super-linear convergence rate. The power spectral representation and the ARMA modelling parameters of the EGG are found to be better types of input to the network for this specific application, both yielding 95% correctness on the test set. Although the results focus on the classification of the EGG, this paper should provide useful information for the classification of other biomedical signals.