The discrete time-varying linear system (LS) is a fundamental topic in science and engineering. However, conventional methods essentially designed for time-invariant LS generally assume that the LS is time-invariant during a small time interval (i.e., the sampling gap) when solving a time-varying LS. This assumption considerably limits their precision because of the existence of lagging errors. Discarding this assumption, the Zhang neural dynamics (ZND) method improves the precision of LS solving and is a strong alternative for solving discrete time-varying problems. Note that precise solutions to discrete time-varying problems depend on the discretization formulas. In this paper, we propose a new ZND model to solve the discrete time-varying LS. Discrete time-varying division is a special case of the discrete time-varying LS whose solution is a scalar, yet it is usually studied on its own. Considering this inner connection, we further propose a special model for solving discrete time-varying division. Moreover, as an application of the discrete time-varying LS, discrete time-varying quadratic programming (QP) subject to LS is also studied. The convergence and precision of the proposed models are guaranteed by theoretical analyses and substantiated by numerous numerical experiments. (C) 2018 Elsevier B.V. All rights reserved.
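As a minimal sketch of the idea behind ZND for the discrete time-varying LS A(t)x(t) = b(t): define the error e(t) = A(t)x(t) - b(t), impose the exponential decay de/dt = -gamma*e, and discretize. The paper's contribution is a new discretization formula; the sketch below uses the plain Euler formula, forward-difference derivatives, and a hypothetical A(t), b(t), so it only illustrates the general mechanism.

```python
# Euler-discretized ZND iteration for A(t) x(t) = b(t); the example system and the
# forward-difference derivatives are assumptions made purely for illustration.
import numpy as np

def A(t):
    return np.array([[2.0 + np.sin(t), 0.3],
                     [0.3, 2.0 + np.cos(t)]])

def b(t):
    return np.array([np.cos(t), np.sin(t)])

def znd_step(x, t, tau, gamma):
    """One step of x_dot = A^{-1}(b_dot - A_dot x - gamma (A x - b))."""
    A_dot = (A(t + tau) - A(t)) / tau        # forward-difference derivative (assumption)
    b_dot = (b(t + tau) - b(t)) / tau
    e = A(t) @ x - b(t)                      # ZND error function
    x_dot = np.linalg.solve(A(t), b_dot - A_dot @ x - gamma * e)
    return x + tau * x_dot

tau, gamma = 0.01, 10.0
x = np.zeros(2)
for k in range(1000):
    x = znd_step(x, k * tau, tau, gamma)
print("residual:", np.linalg.norm(A(1000 * tau) @ x - b(1000 * tau)))
```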
When we use linear programming in possibilistic regression analysis, some coefficients tend to become crisp because of the characteristics of linear programming. On the other hand, a quadratic programming approach gives more diverse spread coefficients than a linear programming one. Therefore, to overcome the crisp character of linear programming, we propose interval regression analysis based on a quadratic programming approach. Another advantage of adopting a quadratic programming approach in interval regression analysis is that it can integrate both the central-tendency property of least squares and the possibilistic property of fuzzy regression. By changing the weights of the quadratic function, we can analyze the given data from different viewpoints. For data with crisp inputs and interval outputs, the possibility and necessity models can be considered. Therefore, a unified quadratic programming approach that obtains the possibility and necessity regression models simultaneously is proposed. Even though a possibility estimation model always exists, the existence of a necessity estimation model is not guaranteed if we fail to assume a proper function fitting the given data as the regression model. Thus, we consider polynomials as regression models, since any curve can be represented by a polynomial approximation. Using polynomials, we discuss how to obtain approximation models that fit the given data well, where a new measure of fitness is defined to gauge the similarity between the possibility and the necessity models. Furthermore, from the obtained possibility and necessity regression models, a trapezoidal fuzzy output can be constructed.
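A minimal sketch of the possibility model in this quadratic programming setting, assuming crisp inputs and interval outputs: the estimated interval (center plus or minus spread) must cover each observed interval, while the quadratic objective mixes a least-squares term on the centers with a spread term. The weights, the toy data, and the use of cvxpy are illustrative assumptions, not the paper's exact unified formulation.

```python
# Possibility-model sketch for interval regression via quadratic programming.
import numpy as np
import cvxpy as cp

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])  # column of ones = intercept
yL = np.array([1.5, 2.0, 3.2, 3.9])                              # observed interval outputs [yL, yU]
yU = np.array([2.5, 3.4, 4.3, 5.5])
yc = (yL + yU) / 2

c = cp.Variable(2)                    # center coefficients
s = cp.Variable(2, nonneg=True)       # spread coefficients
w1, w2 = 1.0, 0.1                     # hypothetical weights trading off the two properties

center = X @ c
spread = np.abs(X) @ s
constraints = [center - spread <= yL, center + spread >= yU]   # estimated interval covers the data
objective = cp.Minimize(w1 * cp.sum_squares(center - yc) + w2 * cp.sum_squares(spread))
cp.Problem(objective, constraints).solve()
print("centers:", c.value, "spreads:", s.value)
```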
In this article, to solve a time-varying quadratic programming problem with an equality constraint, a new time-specified zeroing neural network (TSZNN) is proposed and analyzed. Unlike existing methods such as the Zhang neural network with different activation functions and finite-time neural networks, the TSZNN model incorporates a terminal attractor, and the convergence error can be guaranteed to reduce to zero by a prespecified time (rather than merely in finite time). The main advantage of the TSZNN model is that its convergence is independent of the initial state of the system dynamics, in contrast to finite-time convergence, which depends on the initial conditions; this comprehensively improves the convergence performance. Mathematical analyses substantiate the prespecified convergence of the TSZNN model and its high convergence precision under various convergence-time settings. The prespecified convergence of the TSZNN model for a quadratic programming problem has been mathematically proved under different settings of the convergence constant. In addition, simulation applications to repeatable trajectory planning of a redundant manipulator are studied to demonstrate the validity of the proposed TSZNN model.
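For orientation, the sketch below shows a conventional zeroing-neural-network iteration for the equality-constrained time-varying QP via its KKT system W(t)y(t) = u(t); it uses a constant gain and Euler discretization, whereas the TSZNN described above replaces the constant gain with a terminal-attractor term that enforces convergence by a prespecified time. The problem data are hypothetical.

```python
# ZNN iteration on the KKT system of  min 0.5 x'Q(t)x + q(t)'x  s.t.  A(t)x = b(t).
import numpy as np

def Q(t): return np.array([[4.0 + np.sin(t), 1.0], [1.0, 4.0 + np.cos(t)]])
def q(t): return np.array([np.sin(t), np.cos(t)])
def A(t): return np.array([[1.0, 2.0 + 0.5 * np.sin(t)]])
def b(t): return np.array([1.0 + 0.2 * np.cos(t)])

def W(t):  # KKT matrix [[Q, A'], [A, 0]]
    return np.block([[Q(t), A(t).T], [A(t), np.zeros((1, 1))]])

def u(t):  # KKT right-hand side [-q; b]
    return np.concatenate([-q(t), b(t)])

tau, gamma = 1e-3, 50.0     # constant gain; the TSZNN uses a terminal attractor instead
y = np.zeros(3)             # y = [x; lambda]
for k in range(5000):
    t = k * tau
    W_dot = (W(t + tau) - W(t)) / tau   # forward-difference derivatives (assumption)
    u_dot = (u(t + tau) - u(t)) / tau
    y_dot = np.linalg.solve(W(t), u_dot - W_dot @ y - gamma * (W(t) @ y - u(t)))
    y = y + tau * y_dot
print("KKT residual:", np.linalg.norm(W(5000 * tau) @ y - u(5000 * tau)))
```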
In this paper a special piecewise linear system is studied. It is shown that, under a mild assumption, the semi-smooth Newton method applied to this system is well defined and generates a sequence that converges linearly to a solution. Besides, we also show that the generated sequence is bounded for any starting point, and a formula for any accumulation point of this sequence is presented. As an application, we study the convex quadratic programming problem under positive constraints. The numerical results suggest that the semi-smooth Newton method achieves accurate solutions to large-scale problems in a few iterations. (C) 2016 Elsevier B.V. All rights reserved.
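One standard instance of such a piecewise linear system is the complementarity reformulation F(x) = min(x, Qx + q) = 0 of the convex QP min 0.5 x'Qx + q'x subject to x >= 0. The sketch below applies semi-smooth Newton steps using one element of the generalized Jacobian; it illustrates the technique and is not necessarily the paper's exact system or assumptions.

```python
# Semi-smooth Newton on F(x) = min(x, Qx + q) = 0 for a QP with nonnegativity constraints.
import numpy as np

def semismooth_newton(Q, q, x0, tol=1e-10, max_iter=50):
    x = x0.copy()
    for _ in range(max_iter):
        g = Q @ x + q
        F = np.minimum(x, g)
        if np.linalg.norm(F) < tol:
            break
        # Generalized Jacobian element: row i is e_i where x_i <= g_i, the i-th row of Q otherwise.
        J = np.where((x <= g)[:, None], np.eye(len(x)), Q)
        x = x - np.linalg.solve(J, F)
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
Q = M @ M.T + np.eye(5)          # symmetric positive definite
q = rng.standard_normal(5)
x = semismooth_newton(Q, q, np.ones(5))
print("x:", np.round(x, 4), "residual:", np.linalg.norm(np.minimum(x, Q @ x + q)))
```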
In this paper we consider two recurrent neural network models for solving linear and quadratic programming problems. The first model is derived from an unconstrained minimization reformulation of the program. The second model is obtained directly from the optimality conditions of an optimization problem. By applying the energy function and the duality gap, we compare the convergence of these models. We also explore the existence and convergence of the trajectories and the stability properties of the neural network models. Finally, the effectiveness of the methods is shown in some numerical examples. (c) 2005 Elsevier Inc. All rights reserved.
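A sketch in the spirit of the first model: a gradient-flow recurrent network obtained from an unconstrained penalty reformulation of a small QP with nonnegativity constraints, integrated by the Euler method. The energy function, penalty weight, and step size are illustrative assumptions rather than the paper's exact dynamics.

```python
# Gradient-flow network for  min 0.5 x'Qx + q'x  s.t.  x >= 0  via a quadratic penalty.
import numpy as np

Q = np.array([[3.0, 0.5], [0.5, 2.0]])
q = np.array([-1.0, 1.0])
rho, dt = 50.0, 1e-3          # penalty weight and Euler step size (assumptions)

def energy_grad(x):
    # Gradient of E(x) = 0.5 x'Qx + q'x + 0.5 * rho * ||min(x, 0)||^2
    return Q @ x + q + rho * np.minimum(x, 0.0)

x = np.array([1.0, 1.0])
for _ in range(20000):
    x = x - dt * energy_grad(x)   # Euler discretization of dx/dt = -grad E(x)
print("approximate solution:", np.round(x, 4))
```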
This paper considers solving convex quadratic programs in a real-time setting using a regularized and smoothed Fischer-Burmeister method (FBRS). The Fischer-Burmeister function is used to map the optimality conditions of a quadratic program to a nonlinear system of equations, which is solved using Newton's method. Regularization and smoothing are applied to improve the practical performance of the algorithm, and a merit function is used to globalize convergence. FBRS is simple to code, easy to warmstart, robust to early termination, and has attractive theoretical properties, making it appealing for real-time and embedded applications. Numerical experiments using several predictive control examples show that the proposed method is competitive with other state-of-the-art solvers.
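A sketch of the core mechanism, under simplifying assumptions: the smoothed Fischer-Burmeister function maps the KKT conditions of min 0.5 x'Qx + q'x s.t. Gx <= h to a smooth nonlinear system, which is solved by Newton's method. The plain (undamped, unregularized) Newton step and the toy data are assumptions; FBRS itself adds regularization and a merit-function globalization.

```python
# Smoothed Fischer-Burmeister Newton iteration for an inequality-constrained QP.
import numpy as np

def fb(a, b, eps):                 # smoothed Fischer-Burmeister function
    return a + b - np.sqrt(a**2 + b**2 + eps)

def solve_qp_fb(Q, q, G, h, eps=1e-8, iters=30):
    n, m = Q.shape[0], G.shape[0]
    x, lam = np.zeros(n), np.ones(m)
    for _ in range(iters):
        s = h - G @ x
        r = np.concatenate([Q @ x + q + G.T @ lam, fb(lam, s, eps)])   # KKT residual
        if np.linalg.norm(r) < 1e-10:
            break
        root = np.sqrt(lam**2 + s**2 + eps)
        da, db = 1.0 - lam / root, 1.0 - s / root       # partial derivatives of fb
        J = np.block([[Q, G.T],
                      [-db[:, None] * G, np.diag(da)]])  # Jacobian of the residual
        step = np.linalg.solve(J, -r)
        x, lam = x + step[:n], lam + step[n:]
    return x, lam

Q = np.array([[2.0, 0.0], [0.0, 2.0]])
q = np.array([-2.0, -5.0])
G = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
h = np.array([2.0, 0.0, 0.0])
x, lam = solve_qp_fb(Q, q, G, h)
print("x:", np.round(x, 4))
```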
In this paper we present a new placement method for cell-based layout styles. It is composed of alternating and interacting global optimization and partitioning steps that are followed by an optimization of the area utilization. Methods using the divide-and-conquer paradigm usually lose the global view by generating smaller and smaller subproblems. In contrast, GORDIAN maintains the simultaneous treatment of all cells over all global optimization steps, thereby considering constraints that reflect the current dissection of the circuit. The global optimizations are performed by solving quadratic programming problems that possess unique global minima. Improved partitioning schemes for the stepwise refinement of the placement are introduced. The area utilization is optimized by an exhaustive slicing procedure. The placement method has been applied to real-world problems, and excellent results in terms of both placement quality and computation time have been obtained.
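A sketch of the kind of unconstrained quadratic objective solved in each global optimization step: squared-wirelength minimization over movable-cell coordinates with fixed pads, which reduces to a positive definite linear system with a unique minimum. The tiny netlist is hypothetical, and the center-of-gravity constraints that GORDIAN derives from the current partition are omitted.

```python
# Quadratic (squared-wirelength) placement of movable cells with fixed pads.
import numpy as np

movable = 3                        # cells 0, 1, 2 are movable
pads_x = {-1: 0.0, -2: 10.0}       # negative indices denote fixed pads with known x-coordinates
nets = [(-1, 0), (0, 1), (1, 2), (2, -2), (0, 2)]   # two-pin nets (hypothetical netlist)

# Build the positive definite system C x = d for the x-coordinates of the movable cells.
C = np.zeros((movable, movable))
d = np.zeros(movable)
for i, j in nets:
    for a, b in ((i, j), (j, i)):
        if a >= 0:
            C[a, a] += 1.0
            if b >= 0:
                C[a, b] -= 1.0
            else:
                d[a] += pads_x[b]   # fixed pad contributes to the right-hand side
x = np.linalg.solve(C, d)
print("optimal x-coordinates of movable cells:", np.round(x, 3))
```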
The optimization of normal weight concrete mixture proportions to obtain the desired concrete quality is an important issue for concrete manufacturers and users. In this study, a constrained quadratic programming methodology based on meta-models developed with response surface methodology is used to determine optimal normal weight concrete mixture proportions. To take the burden of numerous experiments and complex mathematical calculations away from laboratory experts and manufacturers, we developed a graphical user interface based on the MATLAB(R) toolbox that allows the optimization of normal weight concrete mixture proportions to be performed interactively. The GUI was tested on real case studies and satisfactory results were obtained. The results showed that the developed graphical user interface is functional, effective and flexible in solving the optimization problems of normal weight concrete mixture proportions. (C) 2014 Elsevier B.V. All rights reserved.
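A sketch of the underlying optimization step, under stated assumptions: a quadratic response-surface meta-model y(x) = b0 + b'x + x'Bx (standing in for a fitted property such as compressive strength) is optimized over mixture proportions subject to bounds and a linear constraint. All coefficients, bounds, and the solver choice below are hypothetical placeholders for the meta-models and constraints described above.

```python
# Constrained optimization of a quadratic response-surface meta-model.
import numpy as np
from scipy.optimize import minimize

b0 = 20.0
b = np.array([15.0, -4.0])                       # hypothetical linear terms
B = np.array([[-6.0, 1.0], [1.0, -3.0]])         # hypothetical (negative definite) quadratic terms

def neg_response(x):                             # minimize the negative of the meta-model
    return -(b0 + b @ x + x @ B @ x)

bounds = [(0.0, 1.0), (0.3, 0.7)]                # scaled mixture-proportion bounds (assumed)
constraints = [{"type": "ineq", "fun": lambda x: 1.2 - x.sum()}]   # x1 + x2 <= 1.2
res = minimize(neg_response, x0=np.array([0.5, 0.5]),
               bounds=bounds, constraints=constraints, method="SLSQP")
print("optimal proportions:", np.round(res.x, 3), "predicted response:", -res.fun)
```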
Quadratic programming has been widely applied to solve real-world problems. Quadratic functions often arise in inventory management, portfolio selection, engineering design, molecular studies, and economics. Fuzzy relation inequalities (FRI) are important elements of fuzzy mathematics, and they have recently been widely applied in fuzzy comprehensive evaluation and cybernetics. In view of the importance of quadratic functions and FRI, we present a fuzzy relation quadratic programming model with a quadratic objective function subject to max-product fuzzy relation inequality constraints. Some sufficient conditions are presented to determine its optimal solution in terms of the maximum solution or the minimal solutions of its feasible domain. Also, some simplification operations are given to accelerate the resolution of the problem by removing components that have no effect on the solution process. The simplified problem can be converted into a traditional quadratic programming problem, and an algorithm is proposed to solve it. Finally, some numerical examples are given to illustrate the steps of the algorithm. (C) 2011 Elsevier Ltd. All rights reserved.
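A sketch of one ingredient used in FRI resolution: the component-wise maximum solution of the max-product fuzzy relation inequalities max_j a_ij * x_j <= b_i with x in [0, 1], computed as x_j = min over i of b_i/a_ij whenever a_ij > b_i, else 1. This is a standard construction shown only for illustration; the paper's own system, its minimal solutions, the simplification rules, and the quadratic objective are handled by the full algorithm and may differ from this simplified setting.

```python
# Maximum solution of max-product fuzzy relation inequalities A o x <= b on [0, 1]^n.
import numpy as np

A = np.array([[0.8, 0.4, 0.6],
              [0.5, 0.9, 0.3]])
b = np.array([0.48, 0.45])

ratios = np.where(A > b[:, None], b[:, None] / A, 1.0)   # b_i / a_ij where a_ij > b_i, else 1
x_max = ratios.min(axis=0)                                # component-wise minimum over rows
print("maximum solution:", np.round(x_max, 4))
print("feasible:", np.all((A * x_max).max(axis=1) <= b + 1e-12))
```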
Recent advances in neural-network architecture allow for seamless integration of convex optimization problems as differentiable layers in an end-to-end trainable neural network. Integrating medium- and large-scale quadratic programs into a deep neural network architecture, however, is challenging, as solving quadratic programs exactly by interior-point methods has worst-case cubic complexity in the number of variables. In this paper, we present an alternative network layer architecture based on the alternating direction method of multipliers (ADMM) that is capable of scaling to moderately sized problems with 100-1000 decision variables and thousands of training examples. Backward differentiation is performed by implicit differentiation of a customized fixed-point iteration. Simulated results demonstrate the computational advantage of the ADMM layer, which for medium-scale problems is approximately an order of magnitude faster than state-of-the-art layers. Furthermore, our novel backward-pass routine is computationally efficient compared with the standard approach based on unrolled differentiation or implicit differentiation of the KKT optimality conditions. We conclude with examples from portfolio optimization in the integrated prediction and optimization paradigm.
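A sketch of a plain ADMM forward pass for a box-constrained QP min 0.5 x'Qx + q'x s.t. l <= x <= u, the kind of fixed-point iteration whose implicit differentiation underlies such a layer. The splitting, the fixed penalty parameter, and the toy data are illustrative assumptions, and the backward pass (implicit differentiation of the fixed point) is omitted.

```python
# Forward ADMM iteration for a box-constrained QP.
import numpy as np

def admm_box_qp(Q, q, l, u, rho=1.0, iters=200):
    n = Q.shape[0]
    x, z, w = np.zeros(n), np.zeros(n), np.zeros(n)
    K = np.linalg.cholesky(Q + rho * np.eye(n))              # factor once, reuse every iteration
    for _ in range(iters):
        rhs = rho * (z - w) - q
        x = np.linalg.solve(K.T, np.linalg.solve(K, rhs))    # x-update: (Q + rho I) x = rhs
        z = np.clip(x + w, l, u)                              # z-update: projection onto the box
        w = w + x - z                                          # scaled dual update
    return z

Q = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
print("solution:", np.round(admm_box_qp(Q, q, l=np.zeros(2), u=np.ones(2)), 4))
```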