Based on an eigenvalue study, a descent class of Dai-Liao conjugate gradient methods is proposed. An interesting feature of the proposed class is that it includes the efficient nonlinear conjugate gradient methods of Hager and Zhang, and of Dai and Kou, as special cases. It is shown that the methods of the suggested class are globally convergent for uniformly convex objective functions. Numerical results are reported; they demonstrate the efficiency of the proposed methods in the sense of the performance profile introduced by Dolan and Moré.
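As a point of reference for this and several of the abstracts below, here is a minimal NumPy sketch of the standard Dai-Liao direction update that the proposed class builds on. The parameter value t = 0.1, the quadratic test function, and the stopping tolerance are illustrative choices, not taken from the paper.

```python
import numpy as np

def dai_liao_direction(g_new, d, s, y, t=0.1):
    """Dai-Liao direction d_{k+1} = -g_{k+1} + beta * d_k with
    beta = (g_{k+1}.y_k - t * g_{k+1}.s_k) / (d_k.y_k),
    where s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k.
    The value t = 0.1 is an illustrative choice, not one
    recommended by the paper."""
    beta = ((g_new @ y) - t * (g_new @ s)) / (d @ y)
    return -g_new + beta * d

# Illustrative run on a convex quadratic f(x) = 0.5 x^T A x - b^T x
# with exact line searches.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = np.zeros(2)
g = A @ x - b
d = -g
for _ in range(50):
    if np.linalg.norm(g) < 1e-10:
        break
    alpha = -(g @ d) / (d @ A @ d)   # exact line search for a quadratic
    s = alpha * d
    x, g_old = x + s, g
    g = A @ x - b
    d = dai_liao_direction(g, d, s, g - g_old)
print(x)  # approaches the minimizer A^{-1} b
```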
It is pointed out that the so-called momentum method, widely used in the neural network literature to accelerate backpropagation, is a stationary version of the conjugate gradient method. Connections with the continuous optimization method known as the heavy ball with friction are also made. In both cases, adaptive (dynamic) choices of the so-called learning rate and momentum parameters are obtained using a control Lyapunov function analysis of the system.
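The equivalence the abstract points out is easy to verify numerically: with a fixed learning rate and momentum, the momentum update coincides exactly with a conjugate-gradient-style recursion whose step size alpha and parameter beta are frozen. A minimal sketch on an illustrative quadratic (the matrix, vector, and parameter values below are not from the paper):

```python
import numpy as np

# Quadratic test problem f(w) = 0.5 w^T A w - b^T w (illustrative data).
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -1.0])
grad = lambda w: A @ w - b

lr, mom = 0.2, 0.5   # fixed ("stationary") learning rate and momentum

# Momentum / heavy-ball form: w_{k+1} = w_k - lr*g_k + mom*(w_k - w_{k-1}).
w = w_prev = np.zeros(2)
for _ in range(100):
    w, w_prev = w - lr * grad(w) + mom * (w - w_prev), w

# The same iteration in conjugate-gradient form,
# d_k = -g_k + beta*d_{k-1},  x_{k+1} = x_k + alpha*d_k,
# with alpha = lr and beta = mom held fixed.
x, d = np.zeros(2), np.zeros(2)
for _ in range(100):
    d = -grad(x) + mom * d
    x = x + lr * d

print(w, x)  # the two iterates coincide (up to rounding)
```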
In Andrei (2017), a class of efficient conjugate gradient algorithms (ACGSSV) is proposed for solving large-scale unconstrained optimization problems. However, owing to an incorrect inequality and a flawed argument in the analysis of the global convergence property of the proposed algorithm, the proof of Theorem 4.2, the global convergence theorem, is incorrect. In this paper, the necessary corrections are made. Under common assumptions, it is shown that the ACGSSV algorithm converges linearly to the unique minimizer.
Following Andrei's approach, a modified scaled memoryless BFGS preconditioned conjugate gradient method is proposed based on the modified secant equation suggested by Li and Fukushima. It is shown that the method is globally convergent without any convexity assumption on the objective function. Furthermore, for uniformly convex objective functions, the sufficient descent property of the method is established based on an eigenvalue analysis. Numerical experiments demonstrate the efficiency of the method.
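For orientation, the sketch below shows one common statement of the Li-Fukushima modified secant vector as it appears in follow-up work; the constants C and r, their values, and the test data are illustrative assumptions, and the paper's full preconditioned method is not reproduced.

```python
import numpy as np

def li_fukushima_y(s, y, g, C=1.0, r=1.0):
    """A common form of the Li-Fukushima modified secant vector
    (constants C > 0 and r > 0 are illustrative here):
        y_hat = y + h * ||g||^r * s,
        h     = C + max(-(s.y) / ||s||^2, 0) / ||g||^r.
    This enforces s.y_hat >= C * ||g||^r * ||s||^2 > 0, the curvature
    condition that lets convexity assumptions be dropped."""
    gnorm_r = np.linalg.norm(g) ** r
    h = C + max(-(s @ y) / (s @ s), 0.0) / gnorm_r
    return y + h * gnorm_r * s

# Quick check of the positivity guarantee on random data.
rng = np.random.default_rng(0)
s, y, g = rng.standard_normal((3, 4))
assert s @ li_fukushima_y(s, y, g) > 0
```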
In order to take advantage of the attractive features of the Hestenes-Stiefel and Dai-Yuan conjugate gradient (CG) methods, we suggest two hybridizations of these methods based on Andrei's approach of hybridizing the CG parameters convexly and Powell's approach of restricting the CG parameters to be nonnegative. The hybridization parameter in our methods is computed from a modified secant equation obtained from the search direction of the Hager-Zhang nonlinear CG method. We show that if the line search fulfils the Wolfe conditions, then one of our methods is globally convergent for uniformly convex functions and the other is globally convergent for general functions. We report numerical results demonstrating the efficiency of our methods in the sense of the performance profile introduced by Dolan and Moré.
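The convex-hybridization scheme the abstract refers to has the general form sketched below; how the paper actually computes theta (from its modified secant equation) is not reproduced here, so theta is simply passed in as an argument.

```python
import numpy as np

def hybrid_hs_dy_beta(g_new, d, y, theta):
    """Convex combination of the Hestenes-Stiefel and Dai-Yuan CG
    parameters, beta = (1 - theta)*beta_HS + theta*beta_DY with
    theta in [0, 1], clipped at zero in the spirit of Powell's
    nonnegativity restriction. Here y = g_new - g_old and d is the
    previous search direction."""
    denom = d @ y
    beta_hs = (g_new @ y) / denom
    beta_dy = (g_new @ g_new) / denom
    beta = (1.0 - theta) * beta_hs + theta * beta_dy
    return max(beta, 0.0)
```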
Based on an eigenvalue study, the sufficient descent condition of an extended class of the Hager-Zhang nonlinear conjugate gradient methods is established. As an interesting result, it is shown that the search directions of the CG_Descent algorithm satisfy the sufficient descent condition $d_k^T g_k \le -\frac{7}{8}\,\|g_k\|^2$.
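The $-7/8$ bound is a purely algebraic property of the Hager-Zhang direction: it holds for any vectors with $d_k^T y_k \neq 0$, independently of the line search. A small sketch checking it empirically on random data (the test setup is illustrative):

```python
import numpy as np

def hz_direction(g_new, d, y):
    """Hager-Zhang direction d_new = -g_new + beta_HZ * d, with
    beta_HZ = (y - 2*d*(y.y)/(d.y)).g_new / (d.y)."""
    dy = d @ y
    beta = ((y - 2.0 * d * (y @ y) / dy) @ g_new) / dy
    return -g_new + beta * d

# Empirical check of d.g <= -(7/8)||g||^2 on random vectors.
rng = np.random.default_rng(0)
for _ in range(10_000):
    g_new, d, y = rng.standard_normal((3, 5))
    lhs = hz_direction(g_new, d, y) @ g_new
    assert lhs <= -0.875 * (g_new @ g_new) + 1e-9 * (1.0 + abs(lhs))
print("sufficient descent bound held in all trials")
```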
We have previously studied a generalized conjugate gradient method for solving sparse positive definite systems of linear equations arising from the discretization of elliptic partial differential boundary value problems. Here, extensions to the nonlinear case are considered. We split the original discretized operator into the sum of two operators, one of which corresponds to a more easily solvable system of equations, and accelerate the iteration associated with this splitting by (nonlinear) conjugate gradients. The behavior of the method is illustrated for the minimal surface equation with splittings corresponding to nonlinear SSOR, to approximate factorization of the Jacobian matrix, and to elliptic operators suitable for use with fast direct methods. Numerical results are also given for a mildly nonlinear example, for which, in the corresponding linear case, the finite termination property of the conjugate gradient algorithm is crucial.
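In the linear case, accelerating a splitting iteration A = M - N by conjugate gradients is exactly preconditioned CG with M as the preconditioner. The sketch below illustrates this idea with a Jacobi splitting M = diag(A), chosen only for brevity; the paper uses richer splittings (nonlinear SSOR, approximate factorizations, fast-direct-method operators), and its nonlinear extension is not reproduced here.

```python
import numpy as np

def split_pcg(A, b, tol=1e-10, max_iter=200):
    """Preconditioned CG for SPD A with the splitting operator M as
    preconditioner; here M = diag(A) so that applying M^{-1} is one
    'easily solvable' system per iteration."""
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)
    z = Minv * r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = Minv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.linalg.norm(A @ split_pcg(A, b) - b))  # ~0
```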
This paper is concerned with proving theoretical results related to the convergence of the conjugate gradient (CG) method for solving symmetric positive definite linear systems. Considering the inverse of the projection of the inverse of the matrix, new relations for ratios of the A-norm of the error and the norm of the residual are provided, starting from some earlier results of Sadok (Numer. Algorithms 2005;40:201-216). The proofs of our results rely on the well-known correspondence between the CG method and the Lanczos algorithm.
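The two quantities the paper relates can be observed directly by running plain CG and tracking them side by side. The sketch below does this on a small illustrative SPD system; it shows what is being compared, not the paper's specific ratio bounds.

```python
import numpy as np

# Track ||x* - x_k||_A and ||r_k|| along plain CG (illustrative data).
rng = np.random.default_rng(1)
Q = rng.standard_normal((6, 6))
A = Q @ Q.T + 6 * np.eye(6)      # symmetric positive definite
b = rng.standard_normal(6)
x_star = np.linalg.solve(A, b)

x = np.zeros(6)
r = b - A @ x
p = r.copy()
for k in range(6):
    e = x_star - x
    print(f"k={k}  ||e||_A={np.sqrt(e @ A @ e):.3e}  "
          f"||r||={np.linalg.norm(r):.3e}")
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
```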
In this paper we present an implementation of the conjugate gradient algorithm with geometric parallelization, also called domain decomposition. The results of some experiments on the Parsytec GCel are presented, and we discuss further improvements to the implementation of CG on the Parsytec.
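Under geometric parallelization, each subdomain holds a slice of every CG vector; the inner products then become local partial sums followed by a global reduction, which is the main synchronization point of parallel CG. A serial mock-up of that pattern (the four-subdomain split and the helper name `distributed_dot` are illustrative, not from the paper; on the Parsytec the final sum would be a global reduce across processors):

```python
import numpy as np

def distributed_dot(x, y, n_domains=4):
    """Dot product computed as per-subdomain partial sums plus a
    'global reduction', mimicking the communication pattern of a
    domain-decomposed CG."""
    x_parts = np.array_split(x, n_domains)
    y_parts = np.array_split(y, n_domains)
    partials = [xp @ yp for xp, yp in zip(x_parts, y_parts)]  # local work
    return sum(partials)                                      # global reduce

v = np.arange(10.0)
assert np.isclose(distributed_dot(v, v), v @ v)
```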
In order to take advantage of the attractive features of the Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient methods, two hybridizations of these methods are suggested, using a quadratic relaxation of a hybrid conjugate gradient parameter proposed by Gilbert and Nocedal. In the suggested methods, the hybridization parameter is computed based on a conjugacy condition. Under proper conditions, it is shown that the proposed methods are globally convergent for general objective functions. Numerical results are reported; they demonstrate the efficiency of one of the proposed methods in the sense of the performance profile introduced by Dolan and Moré.
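For context, the Gilbert-Nocedal hybrid parameter that the paper relaxes is the PRP parameter clipped to the interval [-beta_FR, beta_FR]. A minimal sketch of that starting point follows; the paper's quadratic relaxation and conjugacy-based choice of the hybridization parameter are not reproduced here.

```python
import numpy as np

def gilbert_nocedal_beta(g_new, g_old):
    """Gilbert-Nocedal hybrid of the Polak-Ribiere-Polyak and
    Fletcher-Reeves CG parameters:
        beta = max(-beta_FR, min(beta_PRP, beta_FR)),
    i.e. beta_PRP clipped to [-beta_FR, beta_FR]."""
    gg_old = g_old @ g_old
    beta_fr = (g_new @ g_new) / gg_old
    beta_prp = (g_new @ (g_new - g_old)) / gg_old
    return max(-beta_fr, min(beta_prp, beta_fr))
```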