This paper presents a study on quadratic programming problems (QPP) and introduces a new neural network model with feasibility analysis. The stability analysis of QPP using neural network modeling is also discussed. The proposed neural network model has a simple form, and its optimal feasibility for both primal and dual QP problems is established. The model demonstrates a good convergence rate with a minimal number of iterations, achieving a very fast convergence to the exact solutions of both the primal and dual QP problems. The optimal solutions for the original QP problem and its dual QPP are obtained. Finally, two simple numerical examples are simulated to illustrate the findings.
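As a rough illustration of the kind of neural-network dynamics such abstracts describe, the following is a minimal sketch of a projection neural network for a box-constrained QP, Euler-discretized in time. This is a generic textbook model, not the paper's specific network; the matrices, step size, and iteration count are illustrative choices.

```python
import numpy as np

# Projection neural network for  min 0.5*x'Qx + c'x  s.t. x >= 0,
# simulated by Euler discretization of  dx/dt = P_+(x - (Qx + c)) - x,
# where P_+ is projection onto the nonnegative orthant.
def projection_nn(Q, c, h=0.1, iters=200):
    x = np.zeros(len(c))
    for _ in range(iters):
        x = x + h * (np.maximum(x - (Q @ x + c), 0.0) - x)
    return x

Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
x = projection_nn(Q, c)   # converges to the minimizer [1, 2]
```

At the minimizer, x equals the projection of x minus the gradient, so the dynamics have a fixed point exactly at the KKT point of the QP.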
The integration of intuitionistic fuzzy theory in optimization problems has significantly enhanced the ability to handle complex, uncertain and imprecise scenarios. Despite these advancements, existing models have not adequately addressed the specific challenges of intuitionistic fuzzy quadratic programming problems (IFQPPs). This study fills this gap by proposing a novel recurrent neural network for IFQPPs. First, the IFQPP is transformed into a multi-objective optimization problem using (α, β)-cuts. This technique allows a wide range of possible solutions to be explored using various combinations of (α, β)-cuts. Next, the multi-objective problem is converted into a weighted problem and coupled with duality theory to remodel it as a single-layer recurrent neural network. To validate the proposed approach, theorems and lemmas are constructed and proved at appropriate places. The proposed neural network model is illustrated with numerical examples to explain the methodology, and it is then applied to a small-scale electrical power grid to demonstrate its practical utility and impact.
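To make the cut-based transformation concrete, here is a small sketch of (α, β)-cuts for a triangular intuitionistic fuzzy number, using the standard textbook cut formulas. The parameterization is an assumption for illustration only; the paper's exact fuzzy-number representation may differ.

```python
# (alpha, beta)-cuts of a triangular intuitionistic fuzzy number (TIFN).
# Membership is a triangle (a1, a2, a3); non-membership is a wider
# triangle (b1, a2, b3) vanishing at the peak a2.

def alpha_cut(a1, a2, a3, alpha):
    # {x : mu(x) >= alpha} shrinks toward the peak as alpha -> 1
    return (a1 + alpha * (a2 - a1), a3 - alpha * (a3 - a2))

def beta_cut(b1, a2, b3, beta):
    # {x : nu(x) <= beta} widens toward the support as beta -> 1
    return (a2 - beta * (a2 - b1), a2 + beta * (b3 - a2))

lo, hi = alpha_cut(1.0, 2.0, 3.0, 0.5)    # -> (1.5, 2.5)
nlo, nhi = beta_cut(0.5, 2.0, 3.5, 1.0)   # -> (0.5, 3.5)
```

Each (α, β) pair yields an interval pair like this for every fuzzy coefficient, which is how a single IFQPP spawns the family of crisp problems the multi-objective reformulation explores.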
In this paper, we present a parallel dual-type (PDT) algorithm for solving a strictly convex quadratic programming problem with equality and box constraints. The PDT algorithm is suitable for distributed implementation and can be used as a basic optimization module for handling optimization problems of large distributed systems. Moreover, by combining the proposed algorithm with a successive quadratic programming (SQP) method, we can solve constrained nonlinear programming problems such as power-system state estimation with power-flow balance constraints on non-generation and no-load buses. We demonstrate the computational efficiency of our method by comparing it with the benchmark commercial NCONF and QPROG routines and a state-of-the-art parallel algorithm, using a sequential implementation on a Sparc workstation and a parallel implementation on a PC network to solve constrained state estimation problems for the IEEE 30-bus and IEEE 118-bus systems. (C) 2008 Elsevier Ltd. All rights reserved.
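The appeal of dual-type methods for this problem class can be sketched with a generic dual gradient iteration for a separable QP with equality and box constraints: for fixed multipliers, the inner minimization splits into independent per-variable clips, which is what makes distributed implementation natural. This is a minimal generic sketch, not the paper's PDT algorithm; the diagonal Hessian, step size, and problem data are assumptions.

```python
import numpy as np

# Dual gradient method for  min 0.5*x'diag(q)x  s.t. Ax = b, l <= x <= u.
def dual_qp(q, A, b, l, u, step=0.5, iters=100):
    lam = np.zeros(len(b))
    for _ in range(iters):
        x = np.clip(-(A.T @ lam) / q, l, u)   # per-component, parallelizable
        lam = lam + step * (A @ x - b)        # ascent on the dual gradient
    return x

q = np.array([2.0, 2.0])                      # Q = diag(q)
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
l = np.zeros(2); u = np.ones(2)
x = dual_qp(q, A, b, l, u)                    # -> approximately [0.5, 0.5]
```

Each component of the inner minimization touches only its own entry of q and of A'lam, so the x-update can be computed on separate processors with only the multiplier vector exchanged.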
In this paper the numerical stability of the orthogonal factorization method [5] for linear equality-constrained quadratic programming problems is studied using a backward error analysis. A perturbation formula for the problem is analyzed; the condition numbers of this formula are examined in order to compare them with the condition numbers of the two matrices of the problem. A class of test problems is also considered in order to show experimentally the behaviour of the method.
In this paper, a neural network for quadratic programming problems is simplified. The simplicity is necessary for high accuracy of solutions and a low cost of implementation. The proposed network is shown to be an extension of Newton's optimal descent flow for constrained problems and to be globally convergent. The network's dynamic behaviors are also discussed; they make it easier to obtain a feasible solution. Simulations demonstrate the soundness of the theory and the advantages of the network. (C) 2001 Elsevier Science Inc. All rights reserved.
ISBN:
(Print) 9783642015120
By selecting an appropriate transformation of the variables in quadratic programming problems with equality constraints, a lower-order recurrent neural network for solving higher-order quadratic programming problems is presented. The proposed recurrent neural network is globally exponentially stable and converges to the optimal solutions of the higher-order quadratic programming problem. An op-amp-based analogue circuit realization of the recurrent neural network is described. The recurrent neural network proposed in the paper is simple in structure, and is more stable and more accurate for solving higher-order quadratic programming problems than some existing results, especially when the number of decision variables is close to the number of constraints. An illustrative example shows how to design the analogue neural network using the steps proposed in this paper.
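A standard way to realize the order reduction described above is the null-space (variable-elimination) transformation: writing x = x_p + Z y with A Z = 0 turns an n-variable equality-constrained QP into an unconstrained problem in n - m variables. The sketch below uses this generic transformation, which may differ from the paper's specific one; the problem data are illustrative.

```python
import numpy as np

# Null-space reduction for  min 0.5*x'Qx + c'x  s.t. Ax = b  (A full row rank).
def nullspace_qp(Q, c, A, b):
    x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # a particular solution
    _, _, Vh = np.linalg.svd(A)
    Z = Vh[A.shape[0]:].T                        # columns span null(A)
    # reduced, unconstrained problem in n - m variables
    y = np.linalg.solve(Z.T @ Q @ Z, -Z.T @ (Q @ x_p + c))
    return x_p + Z @ y

Q = np.eye(2); c = np.array([-1.0, -3.0])
A = np.array([[1.0, 1.0]]); b = np.array([2.0])
x = nullspace_qp(Q, c, A, b)                     # -> approximately [0, 2]
```

Every x of the form x_p + Z y satisfies Ax = b automatically, so the constraints disappear from the reduced problem, whose dimension drops by one for each independent equality constraint.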
ISBN:
(Print) 0780385470
Quadratic programming problems are a widespread class of nonlinear programming problems with many practical applications. The case of inequality constraints was considered in a previous paper by the author. In this contribution an extension of those results to the case of both inequality and equality constraints is presented. Based on an equivalent formulation of the Kuhn-Tucker conditions, a new neural network for solving general quadratic programming problems with both inequality and equality constraints is proposed. Two theorems on the global stability and convergence of this network are given as well. The presented network has lower implementation complexity, and the examples confirm its effectiveness. Simulation results based on SIMULINK(R) models are given and compared.
In recent years, multi-task learning (MTL) has become a popular field in machine learning and has a key role in various domains. Sharing knowledge across tasks in MTL can improve the performance of learning algorithms and enhance their generalization capability. A new approach called the multi-task least squares twin support vector machine (MTLS-TSVM) was recently proposed as a least squares variant of the direct multi-task twin support vector machine (DMTSVM). Unlike DMTSVM, which solves two quadratic programming problems, MTLS-TSVM solves two linear systems of equations, resulting in a reduced computational time. In this paper, we propose an enhanced version of MTLS-TSVM called the improved multi-task least squares twin support vector machine (IMTLS-TSVM). IMTLS-TSVM offers a significant advantage over MTLS-TSVM by operating based on the empirical risk minimization principle, which allows for better generalization performance. The model achieves this by including regularization terms in its objective function, which helps control the model's complexity and prevent overfitting. We demonstrate the effectiveness of IMTLS-TSVM by comparing it to several single-task and multi-task learning algorithms on various real-world data sets. Our results highlight the superior performance of IMTLS-TSVM in addressing multi-task learning problems.
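The computational point in this abstract, replacing a QP by a linear system, is easiest to see in the classic least-squares SVM, where training reduces to one symmetric linear solve. The sketch below is that classic single-task LS-SVM, not the MTLS-TSVM or IMTLS-TSVM models themselves; the toy data and the regularization parameter gamma are illustrative.

```python
import numpy as np

# Least-squares SVM with a linear kernel: solve one (n+1)x(n+1) linear
# system  [[0, y'], [y, Omega + I/gamma]] [b; alpha] = [0; 1]  instead
# of a QP with inequality constraints.
def lssvm_train(X, y, gamma=1.0):
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * (X @ X.T)   # y_i y_j x_i'x_j
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = y
    M[1:, 0] = y
    M[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(M, rhs)
    return sol[0], sol[1:]                          # bias b, multipliers alpha

X = np.array([[0.0], [1.0]])
y = np.array([-1.0, 1.0])
b, alpha = lssvm_train(X, y)
pred = np.sign((alpha * y) @ (X @ X.T) + b)         # recovers the labels
```

The equality-constrained least-squares loss is what turns the KKT conditions into a square linear system, which is the same mechanism the abstract credits for MTLS-TSVM's reduced training time.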
Interior point methods (IPMs) rely on the Newton method for solving systems of nonlinear equations. Solving the linear systems which arise from this approach is the most computationally expensive task of an interior point iteration. If, due to the problem's inner structure, there are special techniques for efficiently solving these linear systems, IPMs achieve reduced computing times and are able to solve large-scale optimization problems. It is tempting to try to replace the Newton method by quasi-Newton methods. Quasi-Newton approaches to IPMs either are built to approximate the Lagrangian function for nonlinear programming problems or provide an inexpensive preconditioner. In this work we study the impact of using quasi-Newton methods applied directly to the nonlinear system of equations for general quadratic programming problems. The cost of each iteration is comparable to the cost of computing correctors in a usual interior point iteration. Numerical experiments show that the new approach is able to reduce the overall number of matrix factorizations.
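The generic idea being exploited, reusing an approximate Jacobian instead of refactorizing at every step, can be sketched with Broyden's method on a small nonlinear system. This is a textbook quasi-Newton iteration on an illustrative system, not the paper's IPM variant.

```python
import numpy as np

def F(x):
    # toy nonlinear system: circle of radius 2 intersected with the line x0 = x1
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def broyden(x, B, tol=1e-12, iters=50):
    for _ in range(iters):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        d = np.linalg.solve(B, -f)               # quasi-Newton step
        x_new = x + d
        df = F(x_new) - f
        # rank-one secant update: the new B satisfies B_new @ d = df
        B = B + np.outer(df - B @ d, d) / (d @ d)
        x = x_new
    return x

x0 = np.array([1.0, 1.0])
B0 = np.array([[2.0, 2.0], [1.0, -1.0]])         # exact Jacobian at x0
x = broyden(x0, B0)                              # -> approx [sqrt(2), sqrt(2)]
```

Because B changes only by a rank-one term per iteration, a factorization of B can be updated cheaply rather than recomputed, which is the cost saving the abstract's experiments measure in terms of matrix factorizations.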
The problem of solving a strictly convex quadratic programming problem is studied, using the idea of conjugate directions. First we assume that we know a set of directions conjugate with respect to the Hessian of the goal function. We apply n simultaneous directional minimizations along these conjugate directions, starting from the same point, followed by the addition of the directional corrections. A theorem establishing that the algorithm finds the global minimum of the quadratic goal function is proved. An effective construction of the required set of conjugate directions is presented. We start with a vector whose entries are all zero except the first one. At each step a new vector, conjugate to those previously generated, is constructed, with one more nonzero entry than its predecessor. Conjugate directions obtained by this construction procedure, with appropriately selected parameters, form an upper triangular matrix which in exact computations is the Cholesky factor of the inverse of the Hessian matrix. The computational cost of calculating this inverse factorization is comparable with the cost of the Cholesky factorization of the original second-derivative matrix. Calculation of those vectors involves exclusively matrix/vector multiplication and the inversion of a diagonal matrix. Some preliminary computational results on test problems are reported; the test problems used all symmetric, positive definite matrices within a range of dimensions from the University of Florida repository as the Hessians.
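The scheme described above can be sketched directly: build Q-conjugate directions by Q-orthogonalizing the unit vectors (a Gram-Schmidt process, used here as a simple stand-in for the paper's construction), then sum the independent one-dimensional corrections computed from a common start point. The matrices are illustrative.

```python
import numpy as np

# Build directions with d_i' Q d_j = 0 (i != j) from the unit vectors.
def conjugate_directions(Q):
    n = Q.shape[0]
    D = []
    for i in range(n):
        d = np.eye(n)[i]
        for dj in D:
            d = d - (dj @ Q @ d) / (dj @ Q @ dj) * dj   # remove Q-components
        D.append(d)
    return D

# Minimize 0.5*x'Qx + c'x by n simultaneous directional minimizations
# from the same point x0, then add all corrections at once.
def minimize_qp(Q, c, x0):
    g0 = Q @ x0 + c                                     # gradient at x0
    steps = [-(d @ g0) / (d @ Q @ d) * d for d in conjugate_directions(Q)]
    return x0 + np.sum(steps, axis=0)

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
x = minimize_qp(Q, c, np.zeros(2))                      # equals -Q^{-1} c
```

Conjugacy decouples the quadratic along the directions, which is why the corrections can be computed independently from one point and simply added, and stacking the directions as columns indeed yields an upper triangular matrix, matching the abstract's Cholesky observation.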