Small-Signal Stability Constrained Optimal Power Flow (SSSC-OPF) can provide additional stability measures and control strategies to guarantee that the system remains small-signal stable. However, because the spectral abscissa function is nonsmooth, existing algorithms for SSSC-OPF cannot guarantee convergence. To tackle this computational challenge, we propose a Sequential Quadratic Programming (SQP) method combined with gradient sampling for SSSC-OPF. At each iteration of the proposed SQP, the gradient of the spectral abscissa function is sampled at the current iterate and at randomly chosen nearby points, which keeps the search-direction computation effective in nonsmooth regions. The method converges globally and efficiently to stationary points with probability one. The effectiveness of the proposed method is tested and validated on the WSCC 3-machine 9-bus system, the New England 10-machine 39-bus system, and the IEEE 54-machine 118-bus system.
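As a rough illustration of the sampling idea only (not the paper's full SQP), the sketch below evaluates the spectral abscissa, i.e. the largest real part of the eigenvalues of the system state matrix, and gathers its gradients at the current parameter vector and at randomly perturbed nearby points. `A_of_p` is a hypothetical map from controllable parameters to the state matrix, and the finite-difference gradients stand in for the analytic eigenvalue sensitivities a real implementation would use.

```python
import numpy as np

def spectral_abscissa(A):
    """Largest real part of the eigenvalues of A."""
    return np.linalg.eigvals(A).real.max()

def sampled_abscissa_gradients(A_of_p, p, radius=1e-2, m=5, h=1e-6):
    """Gradients of p -> spectral_abscissa(A_of_p(p)) at p and m nearby points."""
    def grad(q):
        base = spectral_abscissa(A_of_p(q))
        g = np.zeros_like(q)
        for i in range(q.size):
            dq = q.copy()
            dq[i] += h
            g[i] = (spectral_abscissa(A_of_p(dq)) - base) / h  # forward difference
        return g
    pts = [p] + [p + radius * np.random.uniform(-1, 1, p.size) for _ in range(m)]
    return np.array([grad(q) for q in pts])
```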
In this paper, we construct a modified gradient sampling method for solving a class of nonsmooth semi-infinite optimization problems. The algorithm is grounded in the modified ideal direction, a subgradient computed within the convex hull of some sampled points. In addition, we discretize the semi-infinite optimization problem into a finitely constrained problem using the modified adaptive discretization method, ensure the convergence of the algorithm with respect to the discretized problem, and reduce the number of constraint-function evaluations. Moreover, we establish the theoretical convergence of the algorithm under suitable assumptions. Finally, we present numerical results showing that the new algorithm has advantages over existing methods.
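To make the discretization step concrete, here is a minimal sketch, under assumed names rather than the paper's modified adaptive scheme, of how a semi-infinite constraint g(x, t) <= 0 for all t in [0, 1] can be replaced by a finite, adaptively refined grid: after each solve, the most violated parameter found on a fine scan is appended to the working grid.

```python
import numpy as np

def refine_grid(g, x, grid, scan=1000, tol=1e-8):
    """Append the most violated t in [0, 1] to the working grid, if any.

    g    : constraint function g(x, t), required <= 0 for all t
    grid : 1-D numpy array of currently enforced parameter values
    """
    ts = np.linspace(0.0, 1.0, scan)
    vals = np.array([g(x, t) for t in ts])
    worst = ts[vals.argmax()]
    if vals.max() > tol and not np.isclose(grid, worst).any():
        grid = np.append(grid, worst)  # enforce the new worst-case parameter
    return grid
```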
We study the gradient sampling algorithm of Burke, Lewis, and Overton for minimizing a locally Lipschitz function f on R^n that is continuously differentiable on an open dense subset. We strengthen the existing convergence results for this algorithm and introduce a slightly revised version for which stronger results are established without requiring compactness of the level sets of f. In particular, we show that with probability 1 the revised algorithm either drives the f-values to -infinity, or each of its cluster points is Clarke stationary for f. We also consider a simplified variant in which the differentiability check is skipped and the user can control the number of f-evaluations per iteration.
In this paper, we combine the positive aspects of the gradient sampling (GS) and bundle methods, two of the most efficient methods in nonsmooth optimization, to develop a robust method for solving unconstrained nonsmooth convex optimization problems. The main aim of the proposed method is to take advantage of both GS and bundle methods while avoiding their drawbacks. At each iteration, to find an efficient descent direction, the GS technique is used to construct a local polyhedral model of the objective function. If necessary, this initial polyhedral model is improved through an iterative process using techniques inspired by the bundle and GS methods. A convergence analysis shows that the global convergence of the method is independent of the number of gradient evaluations required to establish and improve the initial polyhedral models, so the method needs far fewer gradient evaluations than the original GS method. Furthermore, numerical simulations show that the method provides promising results in comparison with GS methods, especially for large-scale problems. Moreover, in contrast with some bundle methods, our method is not very sensitive to the accuracy of the supplied gradients.
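A bare-bones version of the local polyhedral model both method families rely on, with illustrative names: given sampled points y_i, values f(y_i), and gradients g_i, the model is the pointwise maximum of the affine functions f(y_i) + g_i . (x - y_i), which underestimates a convex f everywhere.

```python
import numpy as np

def polyhedral_model(points, f_vals, grads):
    """Max-of-affine (cutting-plane) model of f built from sampled data."""
    def model(x):
        return max(fv + g @ (x - y) for y, fv, g in zip(points, f_vals, grads))
    return model

# Example: model |x| from two sampled pieces; the model is exact here.
m = polyhedral_model([np.array([-1.0]), np.array([1.0])],
                     [1.0, 1.0],
                     [np.array([-1.0]), np.array([1.0])])
assert m(np.array([0.5])) == 0.5
```

Minimizing such a model (possibly with a proximal or trust-region term) yields the candidate descent direction that the method then tests and, if necessary, improves.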
This paper presents a nonsmooth optimization-based method to optimally tune wide-area damping controllers in a large delayed cyber-physical power system (DCPPS). The objective is to maximize the damping ratios of the weakly damped interarea oscillation modes under multiple operating conditions. An eigenvalue-perturbation and receding-tracking method is then proposed to reliably trace the targeted interarea oscillation modes during the optimization process. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method combined with the gradient sampling technique is employed to efficiently solve the resulting nonsmooth, nonconvex, nonlinear eigenvalue optimization problem. The effectiveness of the proposed method is validated by the significantly improved damping of the interarea oscillation modes in the two-area four-machine test system and in a real-life large DCPPS.
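For context, the quantity being maximized is the modal damping ratio: an oscillatory eigenvalue lambda = sigma + j*omega has damping ratio zeta = -sigma / sqrt(sigma^2 + omega^2). The sketch below (illustrative names; the matrix A stands in for the delayed closed-loop linearization) extracts the smallest damping ratio among modes in a typical interarea frequency band.

```python
import numpy as np

def min_damping_ratio(A, freq_band=(0.1, 1.0)):
    """Smallest damping ratio among modes whose frequency lies in freq_band (Hz)."""
    eigs = np.linalg.eigvals(A)
    lo, hi = (2.0 * np.pi * f for f in freq_band)    # band limits in rad/s
    modes = [lam for lam in eigs if lo <= abs(lam.imag) <= hi]
    zetas = [-lam.real / abs(lam) for lam in modes]  # zeta = -sigma/|lambda|
    return min(zetas) if zetas else None
```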
Let f be a continuous function on R^n, and suppose f is continuously differentiable on an open dense subset. Such functions arise in many applications, and very often minimizers are points at which f is not differentiable. Of particular interest is the case where f is not convex, and perhaps not even locally Lipschitz, but is a function whose gradient is easily computed where it is defined. We present a practical, robust algorithm to locally minimize such functions, based on gradient sampling. No subgradient information is required by the algorithm. When f is locally Lipschitz and has bounded level sets, and the sampling radius ε is fixed, we show that, with probability 1, the algorithm generates a sequence with a cluster point that is Clarke ε-stationary. Furthermore, we show that if f has a unique Clarke stationary point x̄, then the set of all cluster points generated by the algorithm converges to x̄ as ε is reduced to zero. Numerical results are presented demonstrating the robustness of the algorithm and its applicability in a wide variety of contexts, including cases where f is not locally Lipschitz at minimizers. We report approximate local minimizers for functions in the applications literature which have not, to our knowledge, been obtained previously. When the termination criteria of the algorithm are satisfied, a precise statement about nearness to Clarke ε-stationarity is available. A MATLAB implementation of the algorithm is posted at http://***/overton/papers/gradsamp/alg.
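A minimal sketch of one iteration in the spirit of this algorithm, assuming f is locally Lipschitz and grad_f is computable wherever f is differentiable. The min-norm element of the convex hull of the sampled gradients is approximated here by Frank-Wolfe on the unit simplex, box sampling replaces the paper's ball sampling for brevity, and all parameter names are illustrative.

```python
import numpy as np

def min_norm_convex_hull(G, iters=500):
    """Approximate argmin ||G @ lam|| over the unit simplex via Frank-Wolfe."""
    m = G.shape[1]
    lam = np.full(m, 1.0 / m)
    for k in range(iters):
        grad = G.T @ (G @ lam)              # gradient of 0.5 * ||G lam||^2
        j = np.argmin(grad)                 # best vertex of the simplex
        step = 2.0 / (k + 2.0)
        lam = (1.0 - step) * lam + step * np.eye(m)[j]
    return G @ lam

def gs_step(f, grad_f, x, eps=1e-1, m=None, beta=1e-4):
    """One gradient sampling iteration: sample, aggregate, line-search."""
    n = x.size
    m = m or n + 1                          # at least n+1 sampled points
    pts = [x] + [x + eps * np.random.uniform(-1, 1, n) for _ in range(m)]
    G = np.column_stack([grad_f(p) for p in pts])
    g = min_norm_convex_hull(G)             # approximate Clarke subgradient
    d = -g
    t = 1.0                                 # Armijo backtracking line search
    while f(x + t * d) > f(x) - beta * t * (g @ g):
        t *= 0.5
        if t < 1e-12:
            break
    return x + t * d
```

In the full algorithm the sampling radius ε is shrunk once the min-norm vector is small, which is what drives the iterates toward Clarke ε-stationary points.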
Approximation of subdifferentials is one of the main tasks when computing descent directions for nonsmooth optimization problems. In this article, we propose a bisection method for weakly lower semismooth functions that computes new subgradients improving a given approximation whenever a direction with insufficient descent has been computed. Combined with a recently proposed deterministic gradient sampling approach, this yields a deterministic and provably convergent way to approximate subdifferentials for computing descent directions.
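A hedged sketch of the bisection idea under assumed names: when a direction d fails the sufficient-descent (Armijo) test at step t, the interval [0, t] is bisected to bracket the point where the inequality flips, and the gradient taken there is returned as a new subgradient with which to improve the current subdifferential approximation.

```python
import numpy as np

def bisect_new_subgradient(f, grad_f, x, d, t=1.0, c=0.5, tol=1e-10):
    """Bracket the failure point of the Armijo test along [0, t] and
    return the gradient there; the caller assumes the test fails at t."""
    fx = f(x)
    dd = d @ d
    armijo = lambda s: f(x + s * d) <= fx - c * s * dd
    a, b = 0.0, t
    while b - a > tol:
        mid = 0.5 * (a + b)
        if armijo(mid):
            a = mid           # descent still holds here; failure lies beyond
        else:
            b = mid           # failure bracketed below mid
    return grad_f(x + b * d)  # gradient near the transition point
```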
We give a nonderivative version of the gradient sampling algorithm of Burke, Lewis, and Overton for minimizing a locally Lipschitz function f on R^n that is continuously differentiable on an open dense subset. Instead of gradients of f, we use estimates of gradients of the Steklov averages of f (obtained by convolution with mollifiers), which require f-values only. We show that the nonderivative version retains the convergence properties of the gradient sampling algorithm. In particular, with probability 1, it either drives the f-values to -infinity or each of its cluster points is Clarke stationary for f.
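A rough, f-values-only sketch in the spirit of this approach (not the paper's exact estimator): the gradient of a mollified, Steklov-type average of f is estimated by averaging central differences taken at randomly sampled offsets within the mollification radius. All names and parameters are illustrative.

```python
import numpy as np

def steklov_grad_estimate(f, x, alpha=1e-1, h=1e-3, samples=50, rng=None):
    """Monte Carlo estimate of the gradient of a smoothed version of f at x."""
    rng = rng or np.random.default_rng()
    n = x.size
    g = np.zeros(n)
    for _ in range(samples):
        u = rng.uniform(-alpha, alpha, n)        # random mollifier offset
        for i in range(n):
            e = np.zeros(n)
            e[i] = h
            g[i] += (f(x + u + e) - f(x + u - e)) / (2.0 * h)
    return g / samples
```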
In this paper, we focus on a descent algorithm for solving nonsmooth nonconvex optimization problems. The proposed method is based on the proximal bundle algorithm and the gradient sampling method and exploits the advantages of both. In addition, the algorithm can handle inexact information, which creates additional challenges. Global convergence is proved with probability one. More precisely, every accumulation point of the sequence of serious iterates is either a stationary point, if exact gradient values are provided, or an approximate stationary point, if only inexact function and gradient information is available. The performance of the proposed algorithm is demonstrated on some academic test problems. We further compare the new method with a general nonlinear solver and two other methods specifically designed for nonconvex nonsmooth optimization problems.
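A minimal sketch of the proximal step at the heart of such bundle methods, with illustrative names: the next candidate minimizes the max-of-affine bundle model plus the proximal term ||x - x_k||^2 / (2 t). scipy's derivative-free Nelder-Mead is used here purely for brevity; practical implementations solve an equivalent quadratic program instead.

```python
import numpy as np
from scipy.optimize import minimize

def prox_bundle_step(bundle, x_k, t=1.0):
    """bundle: list of (y, f_y, g) triples; returns the next candidate point."""
    def obj(x):
        model = max(fy + g @ (x - y) for y, fy, g in bundle)  # bundle model
        return model + (x - x_k) @ (x - x_k) / (2.0 * t)      # proximal term
    return minimize(obj, x_k, method="Nelder-Mead").x
```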
We present an algorithm for minimizing locally Lipschitz functions that are continuously differentiable on an open dense subset of R^n. The function may be nonsmooth and/or nonconvex. The method combines gradient sampling with a conjugate gradient scheme. To find search directions, we use a sequence of positive definite approximate Hessians based on conjugate gradient matrices. The algorithm benefits from a restart procedure to improve upon poor search directions or to ensure that the approximate Hessians remain bounded. The global convergence of the algorithm is established. An implementation of the algorithm is executed on a collection of well-known test problems. Comparative numerical results using the Dolan-Moré performance profiles clearly show that the algorithm outperforms some recent well-known nonsmooth algorithms.
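A tiny sketch of the safeguard described above, under assumed names: the sampled subgradient g is deflected by a positive definite approximate Hessian H, and the procedure restarts to steepest descent (resetting H to the identity) whenever the scaled direction fails to be a descent direction or H threatens to become unbounded.

```python
import numpy as np

def safeguarded_direction(H, g, cond_cap=1e8):
    """Return a descent direction -H^{-1} g, restarting H when unreliable."""
    if np.linalg.cond(H) > cond_cap:   # keep the approximate Hessian bounded
        H = np.eye(g.size)
    d = -np.linalg.solve(H, g)
    if g @ d >= 0.0:                   # not a descent direction: restart
        H = np.eye(g.size)
        d = -g
    return d, H
```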