The study and application of contemporary optimization techniques considerably enhance the efficiency of chemical research and manufacturing. With the rapid progression of modern manufacturing technologies, the emergence of numerous black-box models characterized by inaccessible mathematical formulations and high evaluation costs poses new challenges to traditional optimization methods, which become difficult to formulate and solve. Hence, in this study we define a new framework based on the trust region filter (TRF) method to improve optimization efficiency for chemical systems involving computationally expensive black-box functions. The sample size required per iteration is reduced, and sampling efficiency is improved, by incorporating known data from outside the trust region when constructing the reduced models; this is achieved through Gaussian process regression. Benchmark tests and case studies show that using a Gaussian process as the reduced model can lower the number of black-box function calls by more than half compared with common linear and quadratic models, while convergence to first-order critical points is still guaranteed.
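For a concrete picture of the surrogate step, the following is a minimal, illustrative sketch of one trust-region iteration in which a Gaussian process fitted to previously evaluated points (including points outside the current region) serves as the reduced model. The black-box function, region parameters, and design points are assumptions, and the filter acceptance logic of the TRF framework is not reproduced.

```python
# Minimal sketch: a Gaussian-process reduced model inside one trust-region step.
# Illustrative only -- `expensive_black_box`, the trust-region parameters, and the
# sampled history are assumptions, not the algorithm from the paper.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_black_box(x):             # stand-in for a costly simulation
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(5 * x).sum()

rng = np.random.default_rng(0)
center, radius = np.zeros(2), 0.5       # current iterate and trust-region radius

# Reuse all previously evaluated points, including those outside the region,
# instead of resampling a fresh design at every iteration.
X_hist = rng.uniform(-1.5, 1.5, size=(12, 2))
y_hist = np.array([expensive_black_box(x) for x in X_hist])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_hist, y_hist)                  # reduced (surrogate) model

# Minimize the surrogate only within the trust-region box.
bounds = [(c - radius, c + radius) for c in center]
res = minimize(lambda x: gp.predict(x.reshape(1, -1))[0], x0=center, bounds=bounds)
print("candidate step:", res.x, "surrogate prediction:", res.fun)
```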
Model-based optimal experimental design (OED) is a well-known tool for efficient model development. However, it is not used very often. Among the reasons are a lack of understanding of how to work with complex OED methods and the small number of ready-to-use tools available to apply OED methods directly. In the presented contribution, OED and sampling strategies are used to categorize OED formulations as nonlinear programs. Different strategies and their combinations are analyzed in terms of performance and robustness. Depending on the availability of measurements, the control flexibility of the experimental setup, and the model accuracy, some strategies are more efficient than others. Based on the proposed guidelines, engineers will have a better understanding of which NLP formulation to use for their specific task. The methods described are available to the community as part of open-source code developed in Python.
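As a rough illustration of what an OED problem posed as a nonlinear program can look like, the sketch below maximizes the D-optimality criterion (log-determinant of the Fisher information matrix) for a simple one-parameter exponential decay model. The model, nominal parameter value, and bounds are assumptions, and this is not the open-source Python tool referenced above.

```python
# Minimal sketch of OED as an NLP: choose sampling times that maximize the
# D-optimality criterion for an assumed model y = exp(-k*t).
import numpy as np
from scipy.optimize import minimize

k_nominal = 0.5          # nominal parameter around which the design is optimized
n_samples = 4

def neg_log_det_fim(times):
    # Sensitivity of y = exp(-k t) with respect to k, evaluated at the nominal k.
    s = -times * np.exp(-k_nominal * times)          # shape (n_samples,)
    fim = np.atleast_2d(s @ s)                       # scalar FIM for one parameter
    return -np.log(np.linalg.det(fim) + 1e-12)

t0 = np.linspace(0.5, 8.0, n_samples)                # initial guess for sampling times
res = minimize(neg_log_det_fim, t0, bounds=[(0.1, 10.0)] * n_samples)
print("optimized sampling times:", np.sort(res.x))
```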
Conjugate gradient minimization methods (CGM) and their accelerated variants are widely used. We focus on the use of cubic regularization to improve the CGM direction independently of the step-length computation. In this paper, we propose the Hybrid Cubic Regularization of CGM, in which regularized steps are used selectively. Using Shanno's reformulation of CGM as a memoryless BFGS method, we derive new formulas for the regularized step direction. We show that the regularized step direction incurs the same order of computational cost per iteration as its non-regularized version. Moreover, the Hybrid Cubic Regularization of CGM exhibits global convergence under fewer assumptions. In numerical experiments, the new step directions are shown to require fewer iterations, improve runtime, and reduce the need to reset the step direction. Overall, the Hybrid Cubic Regularization of CGM retains the same memoryless and matrix-free properties while outperforming CGM as a memoryless BFGS method in iterations and runtime.
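For context, the sketch below shows the matrix-free memoryless BFGS direction from Shanno's reformulation that the hybrid method builds on; the cubic-regularized formulas derived in the paper are not reproduced here.

```python
# Minimal sketch of Shanno's memoryless BFGS direction, the matrix-free baseline
# behind CGM reformulated as a memoryless quasi-Newton method. Not the authors'
# regularized variant.
import numpy as np

def memoryless_bfgs_direction(g, s, y):
    """Return d = -H g, where H is the BFGS update of the identity using the most
    recent step s = x_k - x_{k-1} and gradient change y = g_k - g_{k-1}."""
    sy = s @ y
    if sy <= 1e-12:                      # curvature condition failed: fall back
        return -g                        # to steepest descent (a common reset)
    rho = 1.0 / sy
    sg, yg = s @ g, y @ g
    # H g = (I - rho*s*y^T)(I - rho*y*s^T) g + rho*s*s^T g, expanded vector-wise.
    Hg = g - rho * (sg * y + yg * s) + (rho ** 2) * (y @ y) * sg * s + rho * sg * s
    return -Hg

# Tiny usage example on a quadratic f(x) = 0.5 * x^T A x.
A = np.diag([1.0, 10.0])
x_prev, x = np.array([1.0, 1.0]), np.array([0.8, 0.2])
g_prev, g = A @ x_prev, A @ x
d = memoryless_bfgs_direction(g, x - x_prev, g - g_prev)
print("search direction:", d)
```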
This study presents an advanced methodology for maximizing the performance of Thyristor-Controlled Phase Shifters (TCPS) in order to enhance power transfer capacity within transmission networks. As a flexible AC transmission system (FACTS) device, a TCPS plays a crucial role in providing dynamic control over bus voltage and angle, thereby improving the power system's overall operational efficiency. The proposed approach combines a nonlinear programming (NLP) framework with a Genetic Algorithm (GA) to achieve maximum active forward power flow in transmission lines. A voltage source converter (VSC)-based TCPS is strategically placed between the swing and load buses of a three-bus power system. The operating range under the forward power flow condition is determined by analyzing voltage and line current data obtained for various phase-shifting configurations of the TCPS. The optimal phase shift is then determined through GA-based convergence tracking and NLP optimization, maximizing the forward power flow between the two buses. The simulation results demonstrate the effectiveness of the method, which tracks and converges to the maximum power flow (MPF) point within 40 iterations. Moreover, the convergence rate remains consistently at 100% over 10 consecutive runs, while the computational time of the GA component is less than 20 seconds. The outcomes of this study highlight the novelty and advantages of the proposed methodology compared with existing approaches in the literature. The findings have significant implications for enhancing the reliability and stability of power systems in the face of increasing demand and complex grid conditions.
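To make the search concrete, the following is a minimal GA-style sketch that maximizes the classical two-bus transfer expression P = (V1*V2/X)*sin(delta + phi) over the TCPS phase shift phi. The per-unit data, bounds, and GA settings are assumptions; the paper's three-bus VSC-based model and its NLP stage are not reproduced.

```python
# Minimal GA-style search for the phase shift maximizing active power flow on a
# single line; all system data and GA settings below are illustrative assumptions.
import numpy as np

V1, V2, X, delta = 1.0, 0.98, 0.25, np.deg2rad(10.0)   # assumed per-unit data

def forward_power(phi):
    return (V1 * V2 / X) * np.sin(delta + phi)

rng = np.random.default_rng(1)
pop = rng.uniform(np.deg2rad(-30), np.deg2rad(30), size=40)   # phase shifts (rad)

for generation in range(40):
    fitness = forward_power(pop)
    parents = pop[np.argsort(fitness)[-20:]]                  # keep the best half
    children = rng.choice(parents, size=20) + rng.normal(0, np.deg2rad(1), 20)
    pop = np.concatenate([parents,
                          np.clip(children, np.deg2rad(-30), np.deg2rad(30))])

best = pop[np.argmax(forward_power(pop))]
print(f"best phase shift: {np.rad2deg(best):.2f} deg,"
      f" P = {forward_power(best):.3f} p.u.")
```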
The need to find solutions in function form that optimize a given nonlinear cost functional arises routinely in many important areas of operations research and applied mathematics. In most practical cases, problems of this kind require a numerical solution based on some suitable class of approximating architectures. This paper introduces the use of binary Voronoi linear trees (BVLTs) for the approximate solution of a general class of functional optimization problems. The main features of the considered trees are (i) a splitting scheme based on a Voronoi bisection criterion and (ii) linear outputs in the leaves, which make the resulting models more flexible than classic trees with axis-parallel cuts and constant outputs. At the same time, due to the binary recursive structure, BVLTs retain the well-known efficiency of decision tree architectures. Consistent with the typical tree-construction framework, we provide a greedy algorithm for the approximate solution of the addressed functional optimization problem. Universal approximation capabilities of the proposed class of models are derived in the theoretical analysis, and the consistency of the solution is discussed as well. To improve accuracy and robustness, we also consider the use of BVLTs in ensemble fashion, through an aggregation scheme well suited to optimization purposes. Simulation tests involving various optimization problems are presented, showing that the proposed algorithm copes well with complex multivariate problems, especially in ensemble form.
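The prediction side of such a tree is easy to picture. The sketch below routes an input through internal nodes by comparing distances to two anchor points and evaluates a linear model at the leaf; it is a hand-built toy with assumed numbers, not the greedy construction algorithm proposed in the paper.

```python
# Minimal sketch of the routing/prediction side of a binary Voronoi linear tree:
# each internal node holds two anchor points and sends the input to the child with
# the nearer anchor; each leaf evaluates a linear model w^T x + b.
import numpy as np

class BVLTNode:
    def __init__(self, anchors=None, children=None, w=None, b=0.0):
        self.anchors = anchors      # (2, d) array for internal nodes, None for leaves
        self.children = children    # [left, right] for internal nodes
        self.w, self.b = w, b       # linear leaf output

    def predict(self, x):
        if self.anchors is None:                       # leaf: linear output
            return float(self.w @ x + self.b)
        dists = np.linalg.norm(self.anchors - x, axis=1)
        return self.children[int(np.argmin(dists))].predict(x)

# A hand-built two-leaf tree splitting the plane by a Voronoi bisection.
leaf_left = BVLTNode(w=np.array([1.0, 0.0]), b=0.0)
leaf_right = BVLTNode(w=np.array([0.0, 2.0]), b=1.0)
root = BVLTNode(anchors=np.array([[-1.0, 0.0], [1.0, 0.0]]),
                children=[leaf_left, leaf_right])
print(root.predict(np.array([-0.5, 0.3])), root.predict(np.array([0.7, 0.3])))
```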
The main goal of this article is to determine the optimal weighting coefficients ($\omega_1$ and $\omega_2$) of the balanced loss function of the form $L_{\kappa,\omega,\xi_0}(\psi(\sigma),\xi) = \omega_1\,\gamma(\sigma)\,\kappa(\xi_0,\xi) + \omega_2\,\gamma(\sigma)\,\kappa(\psi(\sigma),\xi)$, with $\omega_1 + \omega_2 = 1$, based on Type II censored data, by applying nonlinear programming to estimate the shape parameter and some survival-time characteristics, such as the reliability and hazard functions, of the Pareto distribution. Two balanced loss functions (BLF) are considered, the balanced squared error loss function (BSELF) and the balanced linear exponential loss function (BLLF); the balanced loss function contains the corresponding symmetric and asymmetric loss functions as special cases. Bayesian and maximum likelihood estimators are compared through Monte Carlo simulation. The simulation results showed that the proposed BLLF model has the best performance. Moreover, the simulation verified that the balanced loss functions always outperform the corresponding unbalanced loss functions.
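For orientation, recall the standard balanced squared error case from the balanced-loss literature (taking $\kappa(a,b) = (a-b)^2$ and $\gamma(\sigma) = 1$, with $\delta$ denoting the estimator and $\xi_0$ the target estimator; these conventions are assumptions made here for illustration, not taken from the article). Minimizing the posterior expected loss yields a convex combination of the target estimator and the posterior mean:

\[
% illustration only: assumes \kappa(a,b)=(a-b)^2, \gamma(\sigma)=1, estimator denoted \delta
\hat{\delta}_{\mathrm{BS}}
  = \arg\min_{\delta}\, \mathbb{E}\!\left[\,\omega_1(\delta-\xi_0)^2 + \omega_2(\delta-\xi)^2 \mid \text{data}\,\right]
  = \omega_1\,\xi_0 + \omega_2\,\mathbb{E}[\xi \mid \text{data}],
\]

which shows explicitly how the weights $\omega_1$ and $\omega_2$ trade off the target estimator against the fully Bayesian answer.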
Recently, the installed capacity of wind energy has continued to expand, necessitating that power companies develop specified low-voltage ride-through (LVRT) curves to address unexpected power outages caused by wind farms. To date, the literature lacks reports on the specification of LVRT curves, and the state-operated Taiwan Power Company (Taipower) has no established guidelines for revising the currently used LVRT curves. This study specifies LVRT curves based on the power load projected for 2025 by Taipower. Simulations were conducted using the Power System Simulator for Engineering (PSSE) equipped with a GEWT 4.0 MW wind turbine module. An objective function was defined to minimize the manufacturing costs of wind turbines while ensuring stability, with the critical clearing time (CCT) and other conditions incorporated as constraints. For each of the five scenarios, covering 69, 69-161, and 161 kV cases, a three-phase short-circuit fault at the point of common coupling (PCC) was simulated as a worst-case condition to determine an appropriate LVRT curve. The CCT emerged as the pivotal parameter in the LVRT specification process, which also considers additional factors such as transmission line voltage, voltage sag, and the duration and amplitude of fault recovery oscillations following the sag.
In this brief, we propose a sequential convex programming (SCP) framework for minimizing the terminal state dispersion of a stochastic dynamical system about a prescribed destination, an important property in high-risk contexts such as spacecraft landing. The proposed approach minimizes the conditional value-at-risk (CVaR) of the dispersion, thereby shifting the probability distribution away from the tails. This yields an optimization framework that is not overly conservative and captures more information about the true distribution than methods that consider only the expected value or robust optimization methods. The main contribution of this brief is an approach that 1) formulates an optimization problem with a CVaR dispersion cost, 2) approximates it with one of two novel surrogates, and 3) solves it using an efficient SCP algorithm. For step 2), two approximation methods, a sampling approximation (SA) and a symmetric polytopic approximation (SPA), are introduced to transform the stochastic objective function into a deterministic form. The accuracy of the SA increases with sample size at the cost of problem size and computation time. To overcome this, we introduce the SPA, which avoids sampling by using an alternative approximation and thus offers significant computational benefits. Monte Carlo simulations indicate that the proposed approaches successfully minimize the CVaR of the dispersion.
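One standard way to write a sampling approximation of a CVaR objective is the Rockafellar-Uryasev form; the short sketch below evaluates it on synthetic dispersion samples and cross-checks it against the empirical tail mean. It is not claimed to match the exact SA surrogate of the brief, and the sample distribution and confidence level are assumptions.

```python
# Minimal sketch of a sampling approximation of CVaR via the standard
# Rockafellar-Uryasev form; the dispersion samples below are synthetic.
import numpy as np
from scipy.optimize import minimize_scalar

alpha = 0.95
rng = np.random.default_rng(2)
dispersion = np.linalg.norm(rng.normal(0.0, 1.0, size=(5000, 2)), axis=1)  # ||x_f - x_dest||

def rockafellar_uryasev(t):
    return t + np.mean(np.maximum(dispersion - t, 0.0)) / (1.0 - alpha)

cvar_opt = minimize_scalar(rockafellar_uryasev).fun

# Cross-check: empirical CVaR is the mean of the worst (1 - alpha) fraction of samples.
var_level = np.quantile(dispersion, alpha)
cvar_emp = dispersion[dispersion >= var_level].mean()
print(f"CVaR_{alpha}: R-U form = {cvar_opt:.3f}, empirical tail mean = {cvar_emp:.3f}")
```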
The paper presents the development of algorithms for mass- and energy-constrained neural network models that exactly conserve the overall mass and energy of distributed chemical process systems, even though the noisy transient data used for model training violate those balances. In contrast to approaches that only approximately satisfy a system's mass and energy balance constraints through soft penalization of the objective function, algorithms have been developed for solving equality-constrained nonlinear optimization problems, thus guaranteeing that the system mass and energy conservation laws are satisfied exactly. For developing dynamic mass- and energy-constrained network models of distributed systems, hybrid series and parallel dynamic-static neural networks are leveraged. The developed algorithms for solving both the training and forward problems are validated using steady-state and dynamic data in the presence of various noise characteristics. The developed data-driven algorithms can exactly satisfy mass and energy balance constraints for dynamic chemical processes provided that the system holdup information is available. The proposed network structures and algorithms are applied to the development of data-driven lumped and distributed models of an adiabatic superheater/reheater system, a nonisothermal continuous stirred tank reactor, and an electrically heated plug-flow reactor system in which one form of energy is transformed into another. The mass- and energy-constrained neural networks yield a root mean squared error of less than 1% with respect to the system truth for the case studies evaluated in this work.
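As a small illustration of what exact conservation can mean in practice, the sketch below enforces an overall mass balance on raw model predictions by a least-squares projection onto the balance hyperplane. This is only one simple way to satisfy an equality constraint exactly, not the constrained training algorithms developed in the paper, and the numbers are assumptions.

```python
# Minimal sketch: project raw outlet-flow predictions onto the hyperplane
# sum(m_out) = m_in so the overall mass balance holds exactly. Illustrative only.
import numpy as np

def project_to_mass_balance(m_out_pred, m_in_total):
    """Smallest (least-squares) correction so the outlet flows sum to the inlet."""
    correction = (m_in_total - m_out_pred.sum()) / m_out_pred.size
    return m_out_pred + correction

m_in_total = 10.0                       # kg/s entering the unit (assumed)
m_out_pred = np.array([6.1, 2.2, 1.9])  # raw (noisy) network predictions, kg/s
m_out = project_to_mass_balance(m_out_pred, m_in_total)
print(m_out, "sum =", m_out.sum())      # sums to 10.0 exactly (up to round-off)
```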
Transmission network expansion planning is a critical and complex problem in the operation and development of electrical power systems. It is typically formulated as a mixed-integer nonlinear programming (MINLP) problem with combinatorial characteristics. Various mathematical models have been proposed to better approximate real-world system behavior, but even the most relaxed formulations remain computationally challenging. This paper introduces a search space reduction strategy that narrows the gap between the optimal solution of the MINLP model and that of its relaxed counterpart by strategically incorporating surrogate constraints. The approach enhances computational efficiency, significantly reducing processing time when an optimization solver is used. By applying this method, we determined the previously unknown optimal solution of the Brazilian north-northeast system.