The exponential growth in the scale of power systems has led to a significant increase in the complexity of dispatch problem resolution, particularly within multi-area interconnected power systems. This complexity necessitates the employment of distributed solution methodologies, which are not only essential but also highly advantageous. In the realm of computational modelling, the multi-area economic dispatch problem (MAED) can be formulated as a linearly constrained separable convex optimization problem. The proximal point algorithm (PPA) is particularly adept at addressing such mathematical constructs. This study introduces parallel (PPPA) and serial (SPPA) variants of the PPA as distributed algorithms, specifically designed for the computational modelling of the MAED. The PPA introduces a quadratic term into the objective function, which, while potentially complicating the iterative updates of the algorithm, serves to dampen oscillations near the optimal solution, thereby enhancing the convergence behaviour. However, the convergence efficiency of the PPA is significantly influenced by the parameter c. To address this parameter sensitivity, this research draws on trend theory from stock market analysis to propose trend theory-driven distributed PPPA and SPPA, thereby enhancing the robustness of the computational models. The computational models proposed in this study are anticipated to exhibit superior performance in terms of convergence behaviour, stability, and robustness with respect to parameter selection, potentially outperforming existing methods such as the alternating direction method of multipliers (ADMM) and the Auxiliary Problem Principle (APP) in the computational simulation of power system dispatch problems. The simulation results demonstrate that the trend theory-based PPPA, SPPA, ADMM and APP exhibit significant robustness to the initial value of the parameter c, and show superior convergence characteristics compared to the residual balancing ADMM.
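The proximal point iteration this abstract builds on can be sketched on a generic quadratic cost, a stand-in for a separable dispatch cost. The matrix Q, vector b, and penalty c below are illustrative choices, not the paper's model; c plays the role of the parameter whose choice the abstract says drives convergence speed.

```python
import numpy as np

def prox_step(Q, b, x, c):
    # One proximal point step for f(y) = 0.5*y^T Q y - b^T y:
    # x+ = argmin_y f(y) + (c/2)*||y - x||^2, i.e. (Q + c*I) x+ = b + c*x.
    return np.linalg.solve(Q + c * np.eye(len(b)), b + c * x)

def ppa(Q, b, x0, c=1.0, tol=1e-10, max_iter=1000):
    # Iterate the proximal step until successive iterates stop moving.
    x = x0.copy()
    for _ in range(max_iter):
        x_new = prox_step(Q, b, x, c)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

The quadratic proximal term contracts the error by a factor of c/(c + lambda) per step in each eigendirection, which is the damping effect (and the c-sensitivity) the abstract refers to.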
We introduce a class of specially structured linear programming (LP) problems, which has favorable modeling capability for important application problems in different areas such as optimal transport, discrete tomography, and economics. To solve these generally large-scale LP problems efficiently, we design an implementable inexact entropic proximal point algorithm (iEPPA) combined with an easy-to-implement dual block coordinate descent method as a subsolver. Unlike existing entropy-type proximal point algorithms, our iEPPA employs a more practically checkable stopping condition for solving the associated subproblems while achieving provable convergence. Moreover, when solving the capacity constrained multi-marginal optimal transport (CMOT) problem (a special case of our LP problem), our iEPPA is able to bypass the underlying numerical instability issues that often appear in the popular entropic regularization approach, since our algorithm does not require the proximal parameter to be very small in order to obtain an accurate approximate solution. Numerous numerical experiments show that our iEPPA is efficient and robust for solving some large-scale CMOT problems on synthetic data. The preliminary experiments on the discrete tomography problem also highlight the potential modeling capability of our model.
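The entropic proximal step itself is easy to sketch in the simplest setting: for a linear objective over the probability simplex, replacing the quadratic proximal term with a KL divergence gives a closed-form multiplicative update. This toy instance is ours, not the paper's CMOT model; eta plays the role of the (reciprocal) proximal parameter.

```python
import numpy as np

def entropic_ppa_simplex(cost, eta=1.0, iters=200):
    # x_{k+1} = argmin_{x in simplex} <cost, x> + (1/eta) * KL(x || x_k),
    # which has the closed form x_{k+1, i} proportional to x_{k, i} * exp(-eta * cost_i).
    x = np.full(len(cost), 1.0 / len(cost))
    for _ in range(iters):
        x = x * np.exp(-eta * cost)
        x /= x.sum()
    return x
```

Because the regularization is re-applied around the current iterate at every step, a moderate eta suffices for an accurate solution, illustrating why a proximal scheme can sidestep the instability of one-shot entropic regularization with a very small parameter.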
Finding a zero of a maximal monotone operator is fundamental in convex optimization and monotone operator theory, and the proximal point algorithm (PPA) is a primary method for solving this problem. The PPA converges not only globally under fairly mild conditions but also asymptotically at a fast linear rate provided that the underlying inverse operator is Lipschitz continuous at the origin. These nice convergence properties are preserved by a relaxed variant of the PPA. Recently, a linear convergence bound was established in [M. Tao and X. M. Yuan, J. Sci. Comput., 74 (2018), pp. 826-850] for the relaxed PPA, and it was shown that the bound is tight when the relaxation factor gamma lies in [1, 2). However, for other choices of gamma, the bound obtained by Tao and Yuan is suboptimal. In this paper, we establish tight linear convergence bounds for any choice of gamma in (0, 2) using a unified and much simplified analysis. These results sharpen our understanding of the asymptotic behavior of the relaxed PPA and complete the picture for gamma in (0, 2).
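The relaxed iteration in question is one line: x_{k+1} = (1 - gamma)*x_k + gamma*J(x_k), where J is the resolvent of the scaled operator. A minimal numerical sketch (the operator and parameters are illustrative):

```python
def relaxed_ppa(resolvent, x0, gamma=1.5, iters=100):
    # Relaxed proximal point: x_{k+1} = (1 - gamma)*x_k + gamma*J(x_k),
    # with relaxation factor gamma in (0, 2); gamma = 1 recovers plain PPA.
    x = x0
    for _ in range(iters):
        x = (1 - gamma) * x + gamma * resolvent(x)
    return x

# For A = I with proximal parameter c = 1 the resolvent is J(x) = x / 2,
# and the iteration contracts linearly with factor |1 - gamma / 2|.
```

On this example the zero of A is 0, and one can check the contraction factor |1 - gamma/2| against the tight bounds discussed in the abstract.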
This paper designs and studies a numerically fast proximal point algorithm to solve the monotone inclusion problem in Hilbert spaces. Our first proposed algorithm combines the proximal point algorithm, an inertial term, and two correction terms. The two correction terms are added to further accelerate the convergence of the inertial proximal point algorithm already studied in the literature. We obtain both weak and linear convergence of the first proposed algorithm under some standard assumptions. Furthermore, we modify the first algorithm to obtain a strongly convergent algorithm. Applications of our proposed algorithm to mixed variational inequalities, strongly quasi-convex minimization problems, and the Douglas-Rachford splitting algorithm are given. Numerical results show that our proposed algorithm outperforms other related algorithms in the literature.
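The inertial baseline that this paper accelerates looks like the sketch below; the paper's two extra correction terms are omitted here, and theta and the test operator are illustrative choices:

```python
def inertial_ppa(resolvent, x0, theta=0.3, iters=200):
    # Plain inertial proximal point iteration:
    #   y_k     = x_k + theta * (x_k - x_{k-1})   (inertial extrapolation)
    #   x_{k+1} = J(y_k)                          (proximal / resolvent step)
    x_prev, x = x0, x0
    for _ in range(iters):
        y = x + theta * (x - x_prev)
        x_prev, x = x, resolvent(y)
    return x
```

With the toy resolvent J(x) = x/2 (operator A = I, zero at 0), the extrapolated recurrence still contracts, and the inertial term is what the paper's correction terms are designed to speed up further.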
The proximal point algorithm finds a zero of a maximal monotone mapping by iterations in which the mapping is made strongly monotone by the addition of a proximal term. Here it is articulated with the norm behind the proximal term possibly shifting from one iteration to the next, but under conditions that eventually make the metric settle down. Despite the varying geometry, the sequence generated by the algorithm is shown to converge to a particular solution. Although this is not the first variable-metric extension of the proximal point algorithm, it is the first to retain the flexibility needed for applications to augmented Lagrangian methodology and progressive decoupling. Moreover, in a generic sense, the convergence it generates is Q-linear at a rate that depends in a simple way on the modulus of metric subregularity of the mapping at that solution. This is a tighter rate than previously identified, and it reveals for the first time the definitive role of metric subregularity in how the proximal point algorithm performs, even in fixed-metric mode.
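For a quadratic objective the variable-metric proximal step has a closed form, which makes the shifting-norm idea easy to sketch. The alternating metric schedule below is an illustrative one that "settles down", not the paper's precise conditions:

```python
import numpy as np

def variable_metric_ppa(Q, b, x0, metrics):
    # Each step solves x+ = argmin_y 0.5*y^T Q y - b^T y + 0.5*(y - x)^T M (y - x),
    # i.e. (Q + M) x+ = b + M x, with the SPD metric M allowed to change per iteration.
    x = x0.copy()
    for M in metrics:
        x = np.linalg.solve(Q + M, b + M @ x)
    return x
```

Each step still contracts toward the same solution Q^{-1} b regardless of the metric used, which is the "varying geometry, same limit" point of the abstract.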
Authors: Liu, Yong-Jin; Yu, Jing
Fuzhou Univ, Ctr Appl Math Fujian Prov, Sch Math & Stat, Fuzhou 350108, Fujian, Peoples R China
Fuzhou Univ, Sch Math & Stat, Fuzhou 350108, Fujian, Peoples R China
The maximum eigenvalue problem is to minimize the maximum eigenvalue function over an affine subspace in a symmetric matrix space, and it has many applications in areas such as combinatorial optimization, control theory and structural design. Based on classical analysis of the proximal point algorithm (PPA) and semismooth analysis of the nonseparable spectral operator, we propose an efficient semismooth Newton based dual proximal point (SSNDPPA) algorithm to solve the maximum eigenvalue problem, in which an inexact semismooth Newton (SSN) algorithm is applied to solve the inner subproblem of the dual proximal point (d-PPA) algorithm. Global convergence and locally asymptotically superlinear convergence of the d-PPA algorithm are established under very mild conditions, and fast superlinear or even quadratic convergence of the SSN algorithm is obtained when the primal constraint nondegeneracy condition holds for the inner subproblem. The computational cost of the SSN algorithm for solving the inner subproblem can be reduced by fully exploiting the low-rank or high-rank property of a matrix. Numerical experiments on max-cut problems and randomly generated maximum eigenvalue optimization problems demonstrate that the SSNDPPA algorithm substantially outperforms the SDPNAL+ solver and several state-of-the-art first-order algorithms.
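The computational kernel in any proximal method for this problem is the proximal mapping of t times the maximum eigenvalue function. By Moreau decomposition it reduces to a Euclidean projection of the eigenvalues onto the unit simplex; the sketch below is that standard construction, not the paper's SSN solver:

```python
import numpy as np

def proj_simplex(v):
    # Euclidean projection of v onto {x >= 0, sum x = 1} (standard sort-based method).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def prox_t_lambda_max(X, t):
    # Moreau decomposition: prox_{t * lmax}(X) = X - t * P_spectahedron(X / t),
    # and the spectahedron projection acts on the eigenvalues via proj_simplex.
    lam, U = np.linalg.eigh(X)
    lam_new = lam - t * proj_simplex(lam / t)
    return U @ np.diag(lam_new) @ U.T
```

On X = diag(3, 1) with t = 1 the step pulls the strictly largest eigenvalue down by t while leaving the other untouched, which is the expected behavior of this proximal mapping.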
This paper deals with the proximal point algorithm for finding a singularity of the sum of a single-valued vector field and a set-valued vector field in the setting of Hadamard manifolds. The convergence analysis of the proposed algorithm is discussed. Applications to composite minimization problems and variational inequality problems are also presented.
This paper proposes a two-point inertial proximal point algorithm to find a zero of maximal monotone operators in Hilbert spaces. We obtain weak convergence results and a non-asymptotic O(1/n) convergence rate of our proposed algorithm in the non-ergodic sense. Applications of our results to various well-known convex optimization methods, such as the proximal method of multipliers and the alternating direction method of multipliers, are given. Numerical results demonstrate the accelerating behavior of our method over other related methods in the literature. (C) 2022 IMACS. Published by Elsevier B.V. All rights reserved.
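A two-point inertial step extrapolates with the last two displacements before applying the resolvent. The coefficients and the test operator below are illustrative, not the paper's tuned choices:

```python
def two_point_inertial_ppa(resolvent, x0, theta=0.2, delta=-0.1, iters=200):
    # Two-point inertial proximal point iteration:
    #   y_k     = x_k + theta*(x_k - x_{k-1}) + delta*(x_{k-1} - x_{k-2})
    #   x_{k+1} = J(y_k)
    x2 = x1 = x = x0
    for _ in range(iters):
        y = x + theta * (x - x1) + delta * (x1 - x2)
        x2, x1, x = x1, x, resolvent(y)
    return x
```

Setting delta = 0 recovers the one-point inertial scheme; the second extrapolation term uses one more past iterate, which is the "two-point" ingredient of the abstract.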
In a recent paper, Bauschke et al. study rho-comonotonicity as a generalized notion of monotonicity of set-valued operators A in Hilbert space and characterize this condition on A in terms of the averagedness of its resolvent J_A. In this note we show that this result makes it possible to adapt many proofs of properties of the proximal point algorithm (PPA) and its strongly convergent Halpern-type variant (HPPA) to this more general class of operators. This also applies to quantitative results on rates of convergence or of metastability (in the sense of T. Tao). For example, using this approach we give a simple proof of the convergence of the PPA in the boundedly compact case for rho-comonotone operators and obtain an effective rate of metastability. If A has a modulus of regularity with respect to zer A, we also get a rate of convergence to some zero of A even without any compactness assumption. We also study a Halpern-type variant HPPA of the PPA for rho-comonotone operators, prove its strong convergence (without any compactness or regularity assumption), and give a rate of metastability.
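A Halpern-type variant anchors every step back to the starting point with a vanishing weight, which is what buys strong convergence. A minimal sketch with the common choice alpha_k = 1/(k+2) (the operator is illustrative):

```python
def halpern_ppa(resolvent, x0, iters=2000):
    # Halpern-type proximal point iteration:
    #   x_{k+1} = alpha_k * x0 + (1 - alpha_k) * J(x_k),  alpha_k = 1/(k + 2).
    # The anchor term alpha_k * x0 vanishes but is summed infinitely often,
    # which forces strong (norm) convergence to a zero of the operator.
    x = x0
    for k in range(iters):
        a = 1.0 / (k + 2)
        x = a * x0 + (1 - a) * resolvent(x)
    return x
```

With J(x) = x/2 the iterates decay only at roughly an O(1/k) rate because of the anchor, slower than plain PPA on the same operator but with the stronger convergence mode.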
Several optimization schemes are known for convex optimization problems. Significant progress beyond convexity was made by considering the class of functions representable as a difference of convex functions, which constitutes the backbone of nonconvex programming and global optimization. In this article, we introduce a new algorithm to minimize the difference of a continuously differentiable function and a convex function that accelerates the convergence of the classical proximal point algorithm. We prove that the point computed by the proximal point algorithm can be used to define a descent direction for the objective function evaluated at this point. Our algorithms are based on a combination of the proximal point algorithm with a line search step that uses this descent direction. Convergence of the algorithms is proved and the rate of convergence is analyzed under the strong Kurdyka-Lojasiewicz property of the objective function.
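The descent-direction idea can be sketched on a toy DC function f(x) = x^4/4 - x^2/2 (smooth part g = x^4/4, convex part h = x^2/2): the convexified proximal-type step has the closed form y = cbrt(x) here, and d = y - x is then used in a backtracking line search. Everything below (the function, the Armijo constants) is an illustrative instance, not the paper's algorithm:

```python
import numpy as np

def f(x):  return x**4 / 4 - x**2 / 2   # DC objective f = g - h, minimizers at x = +/-1
def fp(x): return x**3 - x              # its derivative

def dc_prox_linesearch(x0, iters=30, beta=0.5, sigma=1e-4):
    x = x0
    for _ in range(iters):
        # Convexified step: argmin_y g(y) - h'(x)*y with g(y) = y^4/4, h'(x) = x,
        # i.e. y^3 = x, so y = cbrt(x).
        y = np.cbrt(x)
        d = y - x                        # descent direction for f at x
        t = 1.0
        # Armijo backtracking along d (capped to avoid stalling in floating point)
        while t > 1e-12 and f(x + t * d) > f(x) + sigma * t * fp(x) * d:
            t *= beta
        x = x + t * d
    return x
```

Starting from either side, the iterates converge to the nearest minimizer (+1 or -1), and one can verify numerically that fp(x) * d < 0 along the way, i.e. d is indeed a descent direction.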