Based on an eigenvalue study, a descent class of Dai-Liao conjugate gradient methods is proposed. An interesting feature of the proposed class is its inclusion of the efficient nonlinear conjugate gradient methods proposed by Hager and Zhang, and Dai and Kou, as special cases. It is shown that the methods of the suggested class are globally convergent for uniformly convex objective functions. Numerical results are reported; they demonstrate the efficiency of the proposed methods in the sense of the performance profile introduced by Dolan and Moré.
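For context, the Dai-Liao family referenced here generates search directions of the form below; this is a standard recap in the usual notation ($g_k = \nabla f(x_k)$, $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$), and the paper's specific eigenvalue-based parameter choice is not reproduced:

\[
d_0 = -g_0, \qquad d_{k+1} = -g_{k+1} + \beta_k^{DL} d_k, \qquad
\beta_k^{DL} = \frac{g_{k+1}^T y_k - t\, g_{k+1}^T s_k}{d_k^T y_k}, \quad t > 0.
\]

The Hager-Zhang method corresponds, up to the truncation used in CG_Descent, to the adaptive choice $t = 2\|y_k\|^2 / (s_k^T y_k)$.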
In Andrei (2017), a class of efficient conjugate gradient algorithms (ACGSSV) is proposed for solving large-scale unconstrained optimization problems. However, due to an erroneous inequality and incorrect reasoning in the analysis of the global convergence property of the proposed algorithm, the proof of Theorem 4.2, the global convergence theorem, is invalid. In this paper, the necessary corrections are made. Under common assumptions, it is shown that algorithm ACGSSV converges linearly to the unique minimizer. (C) 2017 Elsevier B.V. All rights reserved.
In order to take advantage of the attractive features of the Hestenes-Stiefel and Dai-Yuan conjugate gradient (CG) methods, we suggest two hybridizations of these methods based on Andrei's approach of hybridizing the CG parameters convexly and Powell's approach of nonnegative restriction of the CG parameters. The hybridization parameter in our methods is computed from a modified secant equation obtained based on the search direction of the Hager-Zhang nonlinear CG method. We show that if the line search fulfils the Wolfe conditions, then one of our methods is globally convergent for uniformly convex functions and the other is globally convergent for general functions. We report some numerical results demonstrating the efficiency of our methods in the sense of the performance profile introduced by Dolan and Moré.
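A hedged sketch of the convex hybridization scheme described above, in standard notation (the paper's rule for computing the hybridization parameter $\theta_k$ from a modified secant equation is not reproduced):

\[
\beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k}, \qquad
\beta_k^{DY} = \frac{\|g_{k+1}\|^2}{d_k^T y_k}, \qquad
\beta_k = (1 - \theta_k)\,\beta_k^{HS} + \theta_k\,\beta_k^{DY}, \quad \theta_k \in [0, 1],
\]

with Powell's nonnegative restriction enforced as $\beta_k^+ = \max\{\beta_k, 0\}$.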
Based on an eigenvalue study, the sufficient descent condition of an extended class of the Hager-Zhang nonlinear conjugate gradient methods is established. As an interesting result, it is shown that the search directions of the CG_Descent algorithm satisfy the sufficient descent condition $d_k^T g_k < -\frac{7}{8}\|g_k\|^2$.
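For reference, the Hager-Zhang parameter underlying CG_Descent is, in standard notation (the extended class studied in the paper generalizes this formula):

\[
\beta_k^{HZ} = \frac{1}{d_k^T y_k}\left( y_k - 2\,\frac{\|y_k\|^2}{d_k^T y_k}\, d_k \right)^T g_{k+1},
\qquad d_{k+1} = -g_{k+1} + \beta_k^{HZ} d_k.
\]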
This paper is concerned with proving theoretical results related to the convergence of the conjugate gradient (CG) method for solving positive definite symmetric linear systems. Considering the inverse of the projection of the inverse of the matrix, new relations for ratios of the A-norm of the error and the norm of the residual are provided, starting from some earlier results of Sadok (Numer. Algorithms 2005;40:201-216). The proofs of our results rely on the well-known correspondence between the CG method and the Lanczos algorithm. Copyright (C) 2008 John Wiley & Sons, Ltd.
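To make the two quantities in this abstract concrete, here is a minimal Python sketch of plain CG for an SPD system that records, at each step, the A-norm of the error $\|x_* - x_k\|_A$ and the 2-norm of the residual $\|b - A x_k\|$, whose ratios the paper studies. This is generic textbook CG, not the paper's analysis; the function name and the toy Laplacian system are illustrative.

import numpy as np

def cg_with_error_history(A, b, x_star, tol=1e-10, maxit=200):
    """Plain conjugate gradient for SPD A, tracking the A-norm of the
    error and the 2-norm of the residual at every iteration."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    history = []
    for _ in range(maxit):
        e = x_star - x
        history.append((np.sqrt(e @ (A @ e)), np.sqrt(rs)))
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, history

# Toy SPD system: 1D Laplacian
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.random.default_rng(0).standard_normal(n)
x_star = np.linalg.solve(A, b)
x, hist = cg_with_error_history(A, b, x_star)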
In this paper we present an implementation of the conjugate gradient algorithm with geometric parallelization, also called domain decomposition. Results of experiments on the Parsytec GCel are reported, and we discuss further improvements to the implementation of CG on the Parsytec.
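The domain-decomposition structure can be illustrated schematically: each processor owns a contiguous block of unknowns, the matrix-vector product needs only halo values from neighboring blocks, and each dot product needs a global reduction. The Python sketch below mimics this on a single process for a 1D Laplacian; the block loop, halo logic, and function names are illustrative assumptions, not from the paper, and the comments mark where communication would occur on a machine like the Parsytec.

import numpy as np

def dd_cg(b, blocks=4, tol=1e-10, maxit=500):
    """CG for the 1D Laplacian (zero Dirichlet boundaries), organized as
    it would run under geometric parallelization: the vector is split
    into contiguous blocks ("subdomains"), the matvec uses only halo
    values from neighboring blocks, and dot products are assembled from
    per-block partial sums."""
    n = len(b)
    cuts = np.array_split(np.arange(n), blocks)

    def matvec(x):
        y = np.empty_like(x)
        for idx in cuts:
            lo, hi = idx[0], idx[-1]
            # communication point 1: halo exchange of neighbor boundary
            # values (zero at the ends of the physical domain)
            left = x[lo - 1] if lo > 0 else 0.0
            right = x[hi + 1] if hi < n - 1 else 0.0
            xp = np.concatenate(([left], x[lo:hi + 1], [right]))
            y[lo:hi + 1] = 2 * xp[1:-1] - xp[:-2] - xp[2:]
        return y

    def dot(u, v):
        # communication point 2: global reduction of per-subdomain sums
        return sum(float(u[idx] @ v[idx]) for idx in cuts)

    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = dot(r, r)
    for _ in range(maxit):
        if np.sqrt(rs) < tol:
            break
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = dot(r, r)
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x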
In order to take advantage of the attractive features of the Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient methods, two hybridizations of these methods are suggested, using a quadratic relaxation of a hybrid conjugate gradient parameter proposed by Gilbert and Nocedal. In the suggested methods, the hybridization parameter is computed based on a conjugacy condition. Under proper conditions, it is shown that the proposed methods are globally convergent for general objective functions. Numerical results are reported; they demonstrate the efficiency of one of the proposed methods in the sense of the performance profile introduced by Dolan and Moré.
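For reference, the ingredients of the hybridization in standard notation (the paper's quadratic relaxation of the hybrid parameter is not reproduced): the Gilbert-Nocedal hybrid parameter combines

\[
\beta_k^{PRP} = \frac{g_{k+1}^T y_k}{\|g_k\|^2}, \qquad
\beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \qquad
\beta_k = \max\{-\beta_k^{FR},\ \min\{\beta_k^{PRP},\ \beta_k^{FR}\}\}.
\]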
A revision of condition (27) of Lemma 3.2 of Babaie-Kafaki (J. Optim. Theory Appl. 154(3):916-932, 2012) is made. Throughout, we use the same notation and equation numbers as in Babaie-Kafaki (J. Optim. Theory Appl. 154(3):916-932, 2012).
We extend a result presented by Y.F. Hu and C. Storey (1991) [1] on the global convergence of conjugate gradient methods with different choices for the parameter $\beta_k$. In this note, the conditions imposed on $\beta_k$ are milder than those used by Y.F. Hu and C. Storey.
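For reference, the generic nonlinear CG scheme whose convergence is at issue (the specific milder conditions on $\beta_k$ established in the note are not reproduced):

\[
x_{k+1} = x_k + \alpha_k d_k, \qquad d_0 = -g_0, \qquad d_{k+1} = -g_{k+1} + \beta_k d_k,
\]

where $\alpha_k$ is a line-search step size and different formulas for $\beta_k$ (Fletcher-Reeves, Polak-Ribière-Polyak, Hestenes-Stiefel, etc.) yield different methods.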
ISBN (print): 9781479975914
The most efficient signal edge-preserving smoothing filters, e.g., for denoising, are non-linear. Thus, their acceleration is challenging and is often performed in practice by tuning filter parameters, such as by increasing the width of the local smoothing neighborhood, resulting in more aggressive smoothing of a single sweep at the cost of increased edge blurring. We propose an alternative technology, accelerating the original filters without tuning, by running them through a special conjugate gradient method, not affecting their quality. The filter non-linearity is dealt with by careful freezing and restarting. Our initial numerical experiments on toy one-dimensional signals demonstrate 20x acceleration of the classical bilateral filter and 3-5x acceleration of the recently developed guided filter.
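The freezing idea can be illustrated with a toy reformulation. This is an illustrative reading, not the paper's algorithm: the penalized system $(I + \lambda L)x = b$, the weight construction, and the restart schedule below are all assumptions. The point is that frozen bilateral weights make the filter linear, so a few CG steps can be applied to an associated symmetric system before the weights are recomputed from the current iterate (the restart).

import numpy as np

def bilateral_weights(x, sigma_s=2.0, sigma_r=0.1, radius=5):
    """Symmetric 1D bilateral weight matrix with weights frozen at the
    signal x; freezing makes the otherwise non-linear filter linear."""
    n = len(x)
    i = np.arange(n)
    W = np.zeros((n, n))
    for d in range(1, radius + 1):
        w = np.exp(-d**2 / (2 * sigma_s**2)
                   - (x[d:] - x[:-d])**2 / (2 * sigma_r**2))
        W[i[:-d], i[d:]] = w
        W[i[d:], i[:-d]] = w
    return W

def cg_filter(b, lam=5.0, sweeps=3, cg_steps=10):
    """Toy CG-accelerated smoothing (illustrative, not the paper's
    method): solve (I + lam*L) x = b, where L = D - W is the graph
    Laplacian of the frozen bilateral weights; after each batch of CG
    steps, refreeze the weights (restart)."""
    x = b.copy()
    for _ in range(sweeps):
        W = bilateral_weights(x)             # freeze the non-linearity
        L = np.diag(W.sum(axis=1)) - W       # graph Laplacian (PSD)
        A = np.eye(len(b)) + lam * L         # SPD system matrix
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(cg_steps):            # a few CG steps, then restart
            if rs < 1e-20:
                break
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            p = r + (rs_new / rs) * p
            rs = rs_new
    return x

# Noisy step signal as a toy test
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
b = (t > 0.5).astype(float) + 0.05 * rng.standard_normal(200)
x_smooth = cg_filter(b)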