In this paper, we propose a new descent method, called the multiobjective memory gradient method, for finding Pareto critical points of a multiobjective optimization problem. The main idea of this method is to select a combination of the current descent direction and past multi-step iterative information as a new search direction and to obtain a stepsize by two types of strategies. It is proved that the developed direction with suitable parameters always satisfies the sufficient descent condition at each iteration. Under mild assumptions, we obtain the global convergence and the rates of convergence of our method. Computational experiments are given to demonstrate the effectiveness of the proposed method. (c) 2022 Elsevier Inc. All rights reserved.
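The abstract does not reproduce the update formula; as a rough orientation, a memory gradient direction in the single-objective prototype has the generic form below, where the memory depth m, the weights \beta_{k,i}, and the constant c are placeholders rather than the paper's actual parameters, and in the multiobjective setting the sufficient descent inequality is required to hold for every objective simultaneously.

\[
d_0 = -\nabla f(x_0), \qquad
d_k = -\nabla f(x_k) + \sum_{i=1}^{m} \beta_{k,i}\, d_{k-i}, \qquad
\nabla f(x_k)^{\mathsf T} d_k \le -c\,\|\nabla f(x_k)\|^{2}, \quad c > 0.
\]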
Based on the Moreau-Yosida regularization and a modified line search technique, this paper presents an implementable memory gradient method for solving a possibly non-differentiable convex minimization problem by converting the original objective function into a once continuously differentiable function. A main feature of the proposed method is that, at each iteration, it makes full use of the previous multi-step iterative information and avoids the storage and computation of matrices. Moreover, the proposed method makes use of approximate function and gradient values of the Moreau-Yosida regularization instead of the corresponding exact values. Under reasonable conditions, the convergence properties of the proposed algorithm are analysed. Preliminary numerical results show that the proposed method is efficient and can be applied to solve large-scale non-smooth optimization problems.
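A minimal sketch of the Moreau-Yosida machinery the abstract refers to, using f(x) = ||x||_1 as a stand-in objective (the paper works with a general convex function and approximate prox evaluations): the regularization F(x) = min_y { f(y) + ||y - x||^2 / (2*lam) } is once continuously differentiable, with gradient (x - prox_{lam*f}(x)) / lam.

import numpy as np

def prox_l1(x, lam):
    # proximal operator of lam * ||.||_1 (soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_yosida_value_grad(x, lam=1.0):
    # value and gradient of the Moreau-Yosida regularization of ||.||_1
    p = prox_l1(x, lam)
    val = np.sum(np.abs(p)) + np.sum((p - x) ** 2) / (2.0 * lam)
    grad = (x - p) / lam          # smooth even though ||.||_1 is not
    return val, grad

print(moreau_yosida_value_grad(np.array([2.0, -0.3, 0.0])))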
In this paper, we present a new memory gradient method such that the direction it generates is a sufficient descent direction for the objective function at every iteration. We then analyze its global convergence under mild conditions and its convergence rate for uniformly convex functions. Finally, we report some numerical results to show the efficiency of the proposed method.
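A minimal runnable sketch of a memory gradient iteration in this spirit, assuming a simple one-step memory d = -g + beta*d_prev with beta clipped so that the sufficient descent condition g^T d <= -c||g||^2 holds, followed by a backtracking Armijo line search; the actual update and parameter rules of the paper differ.

import numpy as np

def memory_gradient_minimize(f, grad, x, c=0.5, rho=0.5, sigma=1e-4,
                             tol=1e-6, max_iter=1000):
    d_prev = np.zeros_like(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        gd = float(g @ d_prev)
        if gd > 0:
            # clip the memory weight so that g^T d <= -c * ||g||^2 still holds
            beta = min(1.0, (1.0 - c) * float(g @ g) / gd)
        else:
            beta = 1.0 if np.any(d_prev) else 0.0
        d = -g + beta * d_prev
        # backtracking Armijo line search
        alpha, fx, slope = 1.0, f(x), float(g @ d)
        while alpha > 1e-12 and f(x + alpha * d) > fx + sigma * alpha * slope:
            alpha *= rho
        x = x + alpha * d
        d_prev = d
    return x

# usage on a small convex quadratic; the minimizer is approximately [0.2, 0.4]
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * float(x @ A @ x) - float(b @ x)
grad = lambda x: A @ x - b
print(memory_gradient_minimize(f, grad, np.zeros(2)))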
In this article, a new descent memory gradient method without restarts is proposed for solving large-scale unconstrained optimization problems. The method has the following attractive properties: 1) the search direction is always a sufficient descent direction at every iteration, independently of the line search used; 2) the search direction always satisfies the angle property, independently of the convexity of the objective function. Under mild conditions, the authors prove that the proposed method is globally convergent, and its convergence rate is also investigated. The numerical results show that the new descent memory gradient method is efficient for the given test problems.
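For reference, properties 1) and 2) can be written, for some constants c > 0 and tau in (0, 1] (placeholders here, not the paper's constants), as g^T d <= -c||g||^2 and -g^T d >= tau ||g|| ||d||; the small check below is only illustrative.

import numpy as np

def satisfies_descent_and_angle(g, d, c=1e-3, tau=1e-3):
    # sufficient descent condition and angle property for a candidate direction d
    gd = float(g @ d)
    ng, nd = np.linalg.norm(g), np.linalg.norm(d)
    return gd <= -c * ng ** 2 and nd > 0 and -gd >= tau * ng * nd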
Based on the nonmonotone Armijo line search, the paper proposes a new nonmonotone line search and investigates a memory gradient method with this line search. The global convergence is also proved under some mild conditions. Compared with the nonmonotone Armijo rule, the new nonmonotone line search can effectively reduce the number of function evaluations by choosing a larger accepted stepsize at each iteration, so as to reduce the computational cost of the algorithm.
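A sketch of a nonmonotone Armijo rule of the Grippo-Lampariello-Lucidi type, for orientation only (the paper's modified rule builds its reference value differently): accept a stepsize alpha once f(x + alpha*d) is no larger than the maximum of the last M function values plus sigma*alpha*g^T d.

import numpy as np

def nonmonotone_armijo(f, x, d, g, recent_f_values, sigma=1e-4, rho=0.5,
                       max_backtracks=50):
    # recent_f_values: objective values at the last M iterates
    f_ref = max(recent_f_values)
    alpha, slope = 1.0, float(g @ d)
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= f_ref + sigma * alpha * slope:
            return alpha
        alpha *= rho
    return alpha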
In this paper, we present a multi-step memory gradient method with Goldstein line search for unconstrained optimization problems and prove its global convergence under some mild conditions. We also prove the linear convergence rate of the new method when the objective function is uniformly convex. Numerical results show that the new algorithm is suitable for solving large-scale optimization problems and is more stable than other similar methods in practical computation. (c) 2007 Elsevier Ltd. All rights reserved.
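The Goldstein test referred to above accepts a stepsize alpha along a descent direction d when f(x) + (1 - rho)*alpha*g^T d <= f(x + alpha*d) <= f(x) + rho*alpha*g^T d with 0 < rho < 1/2; the snippet below is the textbook form of this test, not necessarily the exact variant used in the paper.

import numpy as np

def goldstein_accepts(f, x, d, g, alpha, rho=0.25):
    # slope is negative for a descent direction
    fx, slope = f(x), float(g @ d)
    trial = f(x + alpha * d)
    return fx + (1 - rho) * alpha * slope <= trial <= fx + rho * alpha * slope

# usage on f(x) = ||x||^2 with the steepest descent direction
f = lambda z: float(z @ z)
x = np.array([1.0, 1.0]); g = 2 * x; d = -g
print(goldstein_accepts(f, x, d, g, alpha=0.5))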
In this paper, we develop an adaptive nonmonotone memory gradient method for unconstrained optimization. The novelty of this method is that the stepsize can be adjusted according to the characteristics of the objective function. We show the strong global convergence of the proposed method without requiring Lipschitz continuity of the gradient. Our numerical experiments indicate that the method is very encouraging. (c) 2006 Elsevier Inc. All rights reserved.
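The abstract does not spell out the adaptive stepsize rule; one standard way of adapting a trial stepsize to the local behaviour of the objective is the Barzilai-Borwein choice, shown below purely as an illustration and not as the paper's actual rule.

import numpy as np

def bb_stepsize(x_prev, x, g_prev, g, fallback=1.0):
    # alpha = s^T s / s^T y with s = x - x_prev, y = g - g_prev
    s, y = x - x_prev, g - g_prev
    sy = float(s @ y)
    return float(s @ s) / sy if sy > 0 else fallback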
Memory gradient methods are used for unconstrained optimization, especially large-scale problems. The first memory gradient methods were proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method which generates a descent search direction for the objective function at every iteration. We show that our method converges globally to the solution if the Wolfe conditions are satisfied within the framework of the line search strategy. Our numerical results show that the proposed method is efficient for the given standard test problems if a good value is chosen for the parameter included in the method.
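A minimal sketch of one iteration accepted under the Wolfe conditions, using SciPy's standard Wolfe line search; the one-step memory direction d = -g + beta*d_prev and the value beta = 0.5 are illustrative placeholders, not the update rule of the paper.

import numpy as np
from scipy.optimize import line_search

f = lambda x: 0.5 * float(x @ x)
grad = lambda x: x

x = np.array([2.0, -1.0])
d_prev = np.zeros_like(x)
g = grad(x)
d = -g + 0.5 * d_prev                    # memory gradient direction (illustrative)
alpha = line_search(f, grad, x, d)[0]    # stepsize satisfying the Wolfe conditions
if alpha is not None:
    x = x + alpha * d
print(x)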
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines the line search method and the trust region method to generate new iterative points at each iteration and therefore has the advantages of both approaches. It makes full use of the previous multi-step iterative information at each iteration and avoids the storage and computation of matrices associated with the Hessian of the objective function, so it is suitable for solving large-scale optimization problems. We also design an implementable version of this method and analyze its global convergence under weak conditions. This idea enables us to design fast-convergent, effective, and robust algorithms, since it uses more information from previous iterative steps. Numerical experiments show that the new method is effective, stable, and robust in practical computation, compared with other similar methods.
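A rough sketch of the combined strategy described here, with details that differ from the paper: truncate the memory gradient step to the trust region radius, accept it when the actual reduction is a sufficient fraction of the predicted (linear-model) reduction, and otherwise fall back to a backtracking line search along the same direction instead of re-solving a subproblem.

import numpy as np

def tr_or_linesearch_step(f, g, x, d, radius, eta=0.1, sigma=1e-4, rho=0.5):
    # trial step bounded by the trust region radius
    step = d * min(1.0, radius / (np.linalg.norm(d) + 1e-16))
    predicted = -float(g @ step)                 # reduction predicted by the linear model
    actual = f(x) - f(x + step)
    if predicted > 0 and actual >= eta * predicted:
        return x + step                          # trust region step accepted
    # fallback: backtracking Armijo line search along d
    alpha, fx, slope = 1.0, f(x), float(g @ d)
    while alpha > 1e-12 and f(x + alpha * d) > fx + sigma * alpha * slope:
        alpha *= rho
    return x + alpha * d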