In this article, we use the superiorization methodology to investigate the bounded perturbation resilience of the gradient projection algorithm proposed in Erturk et al. (J Nonlinear Convex Anal 21(4):943-951, 2020) for solving the convex minimization problem in the Hilbert space setting. We show that the perturbed version of this gradient projection algorithm, like the original, converges weakly to a solution of the convex minimization problem. We support this conclusion with an example in an infinite-dimensional Hilbert space. We also show that, with the help of the perturbed gradient projection algorithm, the superiorization methodology can be applied to the split feasibility problem and to inverse linear problems.
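As an illustration of the general pattern behind such a perturbed scheme, the following Python sketch interleaves bounded, summable perturbations with ordinary gradient projection steps; the function names, the fixed step size `lam`, and the summable weights `0.5**k` are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def perturbed_gradient_projection(grad_f, project_C, x0, lam=0.1,
                                  perturbation=None, n_iter=500):
    """Illustrative sketch (not the paper's exact scheme): a gradient
    projection iteration with bounded, summable perturbations in the
    spirit of the superiorization methodology.

    grad_f       : callable returning the gradient of f
    project_C    : callable returning the metric projection onto C
    perturbation : callable giving a bounded direction v_k (e.g. a
                   nonascent direction of a secondary criterion);
                   if None, the unperturbed algorithm is recovered
    """
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        # bounded perturbation scaled by a summable sequence beta_k = 0.5**k
        if perturbation is not None:
            x = x + 0.5 ** k * perturbation(x)
        # basic gradient projection step
        x = project_C(x - lam * grad_f(x))
    return x
```

In the superiorization literature, `perturbation` would typically return a nonascent direction of a secondary criterion (such as a regularizer), so the perturbed run tends to produce iterates with a smaller secondary value while inheriting the convergence behaviour of the basic algorithm.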
In this paper, we carry out a convergence analysis of a new gradient projection algorithm for solving convex minimization problems in Hilbert spaces. We show that the proposed gradient projection algorithm converges weakly to a minimizer of a convex function $f$ defined from a closed and convex subset of a Hilbert space to $\mathbb{R}$. We also give a nontrivial example illustrating our result in an infinite-dimensional Hilbert space, and we apply the result to solve the split feasibility problem.
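For context, the classical gradient projection iteration for $\min_{x \in C} f(x)$ and the standard convex-minimization reformulation of the split feasibility problem (find $x \in C$ with $Ax \in Q$) can be written as follows; the step-size bound involving the Lipschitz constant $L$ of $\nabla f$ is the usual textbook condition, not a statement of the paper's exact hypotheses.

```latex
\begin{aligned}
 &\text{Gradient projection step:} && x_{n+1} = P_C\bigl(x_n - \lambda\,\nabla f(x_n)\bigr),
   \qquad 0 < \lambda < \tfrac{2}{L},\\
 &\text{Split feasibility as minimization:} && f(x) = \tfrac12\,\bigl\|(I - P_Q)Ax\bigr\|^{2},
   \qquad \nabla f(x) = A^{*}(I - P_Q)Ax .
\end{aligned}
```

Substituting this choice of $f$ into the gradient projection step recovers a CQ-type iteration for the split feasibility problem.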
We consider the problem of minimizing a function with Lipschitz continuous gradient on a proximally smooth and smooth manifold in a finite-dimensional Euclidean space. We study the Lezanski-Polyak-Lojasiewicz (LPL) condition for this constrained optimization problem and prove that, when the LPL condition holds, the gradient projection algorithm for the problem converges at a linear rate.
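The abstract does not state the precise form of the LPL condition; for orientation, a common Polyak-Lojasiewicz-type inequality with parameter $\mu > 0$ reads as follows (in the manifold-constrained setting one would typically replace $\nabla f(x)$ by its component tangent to the manifold):

```latex
f(x) - \inf_{S} f \;\le\; \frac{1}{2\mu}\,\bigl\|\nabla f(x)\bigr\|^{2}
\qquad \text{for all } x \in S .
```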
We consider the minimization problem for a nonconvex function with Lipschitz continuous gradient on a proximally smooth (possibly nonconvex) subset of a finite-dimensional Euclidean space. We introduce the error bound condition with exponent $\alpha \in (0, 1]$ for the gradient mapping. Under this condition, it is shown that the standard gradient projection algorithm converges to a solution of the problem linearly or sublinearly, depending on the value of the exponent $\alpha$. This paper is theoretical. Bibliography: 23 titles.
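For reference, the gradient mapping with step size $\lambda > 0$ and an error bound condition with exponent $\alpha \in (0, 1]$ are commonly written as below; this is an illustrative formulation, and the paper's exact definitions may differ.

```latex
G_\lambda(x) \;=\; \frac{1}{\lambda}\Bigl(x - P_S\bigl(x - \lambda\,\nabla f(x)\bigr)\Bigr),
\qquad
\operatorname{dist}(x, X^{*}) \;\le\; c\,\bigl\|G_\lambda(x)\bigr\|^{\alpha},
```

where $X^{*}$ denotes the solution set and $c > 0$ is a constant; typically $\alpha = 1$ yields linear convergence and $\alpha < 1$ sublinear rates.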
In this paper, we consider the varying stepsize gradient projection algorithm (GPA) for solving the split equality problem (SEP) in Hilbert spaces and study its linear convergence. In particular, we introduce a notion of bounded linear regularity for the SEP and use it to establish the linear convergence property of the varying stepsize GPA. We provide some mild sufficient conditions ensuring bounded linear regularity and then deduce the linear convergence rate of the varying stepsize GPA. To the best of our knowledge, this is the first work to study linear convergence for the SEP.
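The split equality problem asks for $x \in C$, $y \in Q$ with $Ax = By$; a standard way to write a gradient projection scheme for it is to apply gradient projection to $f(x, y) = \tfrac12\|Ax - By\|^{2}$ over $C \times Q$. The abstract does not specify the stepsize sequence $\gamma_k$, so the formula below is only a generic template.

```latex
\begin{aligned}
x_{k+1} &= P_C\bigl(x_k - \gamma_k\,A^{*}(Ax_k - By_k)\bigr),\\
y_{k+1} &= P_Q\bigl(y_k + \gamma_k\,B^{*}(Ax_k - By_k)\bigr).
\end{aligned}
```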
To address the difficulties of step-size parameter selection and premature convergence that arise during local search in the design of the gradient projection algorithm, this paper establishes an adaptive variable step-size mechanism and introduces it into the gradient projection algorithm to control the iteration step length. Examples of computing the non-probabilistic reliability index show that the method can quickly and accurately calculate the reliability index when the model has many variables and a complex limit state function. Compared with the simple gradient projection algorithm, the proposed algorithm is not sensitive to the position of the initial point; it balances local refinement with global search ability, and it achieves fast convergence and high precision. It is therefore an efficient and practical optimization algorithm.
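The abstract does not describe the adaptive step-size rule itself; purely as an illustration of how a step-size mechanism can be embedded in a gradient projection iteration, the sketch below uses a simple Armijo-type backtracking rule. All names and constants here are assumptions, not the paper's method.

```python
import numpy as np

def adaptive_gradient_projection(f, grad_f, project_C, x0,
                                 step0=1.0, shrink=0.5, c=1e-4, n_iter=200):
    """Illustrative sketch only: gradient projection with an Armijo-type
    backtracking step size (the paper's adaptive variable step-size
    mechanism is not described in the abstract)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = grad_f(x)
        t = step0
        # shrink the step until a sufficient-decrease test holds
        while True:
            x_trial = project_C(x - t * g)
            if f(x_trial) <= f(x) - c / t * np.linalg.norm(x_trial - x) ** 2 or t < 1e-12:
                break
            t *= shrink
        x = x_trial
    return x
```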
In this paper, we propose an incremental gradient projection algorithm for solving a minimization problem over the intersection of a finite family of closed convex subsets of a Hilbert space, where the objective function is the sum of component functions. The algorithm is parameterized by a single nonnegative constant $\mu$; if $\mu = 0$, it reduces to the classical incremental gradient method. Weak convergence of the sequence generated by the proposed algorithm is established when the step size is chosen appropriately. Furthermore, in the special case of the constrained least squares problem, the sequence generated by the proposed algorithm is shown to converge strongly to a solution under weaker requirements on the step size.
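For orientation, a minimal sketch of the classical incremental gradient method with projection (i.e. the $\mu = 0$ case mentioned in the abstract) is given below; the constant step size and the use of a single projection operator for the feasible set are simplifying assumptions, and the paper's handling of the intersection of sets and of the parameter $\mu$ may differ.

```python
import numpy as np

def incremental_gradient_projection(grad_fs, project_C, x0, step=0.01, n_epochs=100):
    """Illustrative sketch: cycle through the component functions, taking a
    gradient step for one component at a time followed by a projection onto
    the feasible set.  Names and the constant step size are assumptions."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_epochs):
        for grad_fi in grad_fs:           # one component function per inner step
            x = project_C(x - step * grad_fi(x))
    return x
```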
This paper presents a novel flow update policy, namely the successive over-relaxation (SOR) iteration method, which can be implemented in traffic assignment algorithms. Most existing solution algorithms for the user equilibrium traffic assignment problem (UE-TAP) rely on two flow update policies: the Jacobi and Gauss-Seidel iteration methods. The proposed SOR flow update policy can be a more efficient replacement. Building on the path-based gradient projection (GP) algorithm, we develop a new method, GP-SOR, for the UE-TAP. This study first provides the complete procedure for applying the GP-SOR algorithm to solve the UE-TAP, and several properties of the proposed method are then rigorously proven. However, empirical tests of the GP-SOR algorithm reveal serious oscillations and poor convergence. To cope with this problem, the Armijo rule is employed to determine the relaxation factor, which substantially improves the convergence of the GP-SOR algorithm. Preliminary numerical examples show that the GP-SOR algorithm converges faster than the known alternatives, as reflected in a clear reduction in computing time and number of iterations.
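The textbook successive over-relaxation update, shown below, indicates how a relaxation factor enters a flow update; the concrete GP-SOR path-flow update is not given in the abstract, and there the relaxation factor is chosen by the Armijo rule rather than fixed.

```latex
x^{\text{new}} \;=\; (1-\omega)\,x^{\text{old}} \;+\; \omega\,\hat{x},
```

where $\hat{x}$ is the tentative (Gauss-Seidel-type) update, $\omega = 1$ recovers the Gauss-Seidel policy, and $\omega > 1$ gives over-relaxation.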
In this paper, we introduce a self-adaptive inertial gradient projection algorithm for solving monotone or strongly pseudomonotone variational inequalities in real Hilbert spaces. The algorithm is designed so that the stepsizes are chosen dynamically, and its convergence is guaranteed without requiring Lipschitz continuity or paramonotonicity of the underlying operator. We show that the proposed algorithm yields strong convergence without being combined with hybrid/viscosity or linesearch methods. Our results improve and extend the gradient projection-type algorithms discussed previously by Khanh and Vuong (J. Global Optim. 58, 341-350, 2014).
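A minimal sketch of an inertial projection method for the variational inequality $\langle F(x^{*}), x - x^{*}\rangle \ge 0$ for all $x \in C$ is given below. The stepsize update shown (shrinking the stepsize when the operator varies too quickly between iterates) is an assumption used only to indicate how a stepsize can be chosen without a known Lipschitz constant; it is not claimed to be the paper's rule.

```python
import numpy as np

def inertial_gradient_projection_vi(F, project_C, x0, lam0=1.0, theta=0.3,
                                    mu=0.5, n_iter=500):
    """Illustrative sketch of an inertial projection method for a
    variational inequality over a closed convex set C; names, the
    inertial weight theta, and the stepsize rule are assumptions."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    lam = lam0
    for _ in range(n_iter):
        w = x + theta * (x - x_prev)                 # inertial extrapolation
        x_new = project_C(w - lam * F(w))            # projection step
        # keep lam * ||F(w) - F(x_new)|| <= mu * ||w - x_new|| when possible
        denom = np.linalg.norm(F(w) - F(x_new))
        if denom > 0:
            lam = min(lam, mu * np.linalg.norm(w - x_new) / denom)
        x_prev, x = x, x_new
    return x
```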
We analyse the convergence to a stationary point of the gradient projection algorithm, finalized with the Newton method, for the nonconvex constrained optimization problem $\min_{x \in S} f(x)$ with a proximally smooth set $S = \{x \in \mathbb{R}^n : g(x) = 0\}$, $g : \mathbb{R}^n \to \mathbb{R}^m$, and a smooth function $f$. We propose new error bound (EB) conditions for the gradient projection method which lead to the convergence domain of the Newton method. We prove that these EB conditions are typical for a wide class of optimization problems. A high convergence rate of the algorithm can be achieved by switching to the Newton method.
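The two-phase structure (gradient projection first, Newton at the end) can be sketched as follows. The switching test used here, a small projected-gradient residual as a stand-in for "the iterate has entered the convergence domain of the Newton method", and the placeholder `newton_step` are assumptions; the paper's EB-based analysis and its Newton step for the stationarity system are not reproduced.

```python
import numpy as np

def gp_then_newton(grad_f, project_S, newton_step, x0, lam=0.1,
                   switch_tol=1e-3, n_gp=1000, n_newton=20):
    """Illustrative two-phase skeleton: run gradient projection steps,
    then finalize with Newton iterations once the projected-gradient
    residual is small.  newton_step is assumed to perform one Newton
    step for the stationarity system of min_{x in S} f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_gp):
        x_new = project_S(x - lam * grad_f(x))
        if np.linalg.norm(x_new - x) / lam <= switch_tol:
            x = x_new
            break
        x = x_new
    for _ in range(n_newton):       # finalize with Newton iterations
        x = newton_step(x)
    return x
```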