Compared to standard learning algorithms such as backpropagation, the optimal bounded ellipsoid (OBE) algorithm has some better properties, such as faster convergence, since its structure is similar to that of the Kalman filter. OBE also has an advantage over Kalman filter training: the noise is not required to be Gaussian. In this paper the OBE algorithm is applied to train the weights of a feedforward neural network for nonlinear system identification. Both the hidden-layer and output-layer weights can be updated. From a dynamic-system point of view, such training can be useful for all neural network applications that require real-time updating of the weights. Two simulations demonstrate the effectiveness of the suggested algorithm.
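To illustrate the Kalman-like recursive structure the abstract refers to, the sketch below trains only the output-layer weights of a small feedforward network with a recursive least-squares (RLS) style recursion in Python. It is a simplified stand-in under assumed names (rls_update, the covariance-like matrix P, the activation vector phi), not the OBE recursion of the paper, which additionally bounds the noise by an ellipsoid.

```python
import numpy as np

def rls_update(w, P, phi, y, lam=1.0):
    """One recursive least-squares step for the output-layer weights.

    w   : current weight vector, shape (n_hidden,)
    P   : covariance-like matrix, shape (n_hidden, n_hidden)
    phi : hidden-layer activations for the current sample, shape (n_hidden,)
    y   : scalar target
    lam : forgetting factor (1.0 = no forgetting)
    """
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)          # Kalman-like gain vector
    e = y - w @ phi                        # prediction error
    w_new = w + k * e                      # correct the weights
    P_new = (P - np.outer(k, Pphi)) / lam  # shrink the covariance
    return w_new, P_new

# toy usage: identify y = sin(u) from streaming samples
rng = np.random.default_rng(0)
n_hidden = 20
V = rng.normal(size=n_hidden)              # fixed random hidden-layer weights
b = rng.normal(size=n_hidden)
w = np.zeros(n_hidden)
P = 100.0 * np.eye(n_hidden)
for _ in range(500):
    u = rng.uniform(-np.pi, np.pi)
    phi = np.tanh(V * u + b)               # hidden-layer output
    y = np.sin(u) + 0.01 * rng.normal()    # noisy target
    w, P = rls_update(w, P, phi, y)
```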
We give some modifications of the ellipsoid algorithm for linear programming and describe a numerically stable implementation. We are concerned with practical problems where user-supplied bounds can usually be provided. Our implementation allows constraint dropping and updates bounds on the optimal value, and should be able to terminate with an indication of infeasibility or with a provably good feasible solution in a moderate number of iterations.
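For reference, a minimal central-cut ellipsoid iteration for the feasibility version of the problem, finding x with Ax <= b, is sketched below in Python. The constraint dropping and bound updating described in the abstract are not reproduced; the starting ellipsoid and iteration budget are assumed parameters.

```python
import numpy as np

def ellipsoid_feasibility(A, b, c, P, max_iter=1000):
    """Central-cut ellipsoid method for finding x with A @ x <= b.

    The ellipsoid is {x : (x - c)^T P^{-1} (x - c) <= 1}; c is its center
    and P its shape matrix (the initial ellipsoid must contain the
    feasible region).
    """
    n = len(c)
    for _ in range(max_iter):
        viol = A @ c - b
        i = np.argmax(viol)
        if viol[i] <= 0:
            return c                       # center satisfies all constraints
        a = A[i]                           # most violated constraint a^T x <= b_i
        g = a / np.sqrt(a @ P @ a)         # normalized cut direction
        Pg = P @ g
        c = c - Pg / (n + 1)               # shift center away from the cut
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
        # standard central-cut update: volume shrinks by a fixed factor
    return None                            # no feasible point found in budget

# toy usage: a triangle known to lie inside the ball of radius 10 around (5, 5)
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
x = ellipsoid_feasibility(A, b, c=np.array([5.0, 5.0]), P=100.0 * np.eye(2))
```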
The ellipsoid algorithm is a fundamental algorithm for computing a solution to the system of m linear inequalities in n variables (P): Aᵀx ≤ u when its set of solutions has positive volume. However, when (P) is infeasible, the ellipsoid algorithm has no mechanism for proving that (P) is infeasible. This is in contrast to the other two fundamental algorithms for tackling (P), namely, the simplex and interior-point methods, each of which can be easily implemented in a way that either produces a solution of (P) or proves that (P) is infeasible by producing a solution to the alternative system (Alt): Aλ = 0, uᵀλ < 0, λ ≥ 0. This paper develops an oblivious ellipsoid algorithm (OEA) that either produces a solution of (P) or produces a solution of (Alt). Depending on the dimensions and other condition measures, the computational complexity of the basic OEA may be worse than, the same as, or better than that of the standard ellipsoid algorithm. We also present two modified versions of OEA, whose computational complexity is superior to that of OEA when n ≪ m. This is achieved in the first modified version by proving infeasibility without producing a solution of (Alt), and in the second version by using more memory.
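The infeasibility certificate the abstract describes can be checked mechanically: any λ ≥ 0 with Aλ = 0 and uᵀλ < 0 proves that Aᵀx ≤ u has no solution. A small sketch of such a check (the function name and tolerances are assumptions; the OEA itself is not reproduced):

```python
import numpy as np

def is_farkas_certificate(A, u, lam, tol=1e-9):
    """Return True if lam certifies that A^T x <= u is infeasible.

    The alternative system is:  A @ lam == 0,  u @ lam < 0,  lam >= 0.
    If such lam exists, then for any x:  x^T (A lam) = 0  while
    x^T A lam = (A^T x)^T lam <= u^T lam < 0, a contradiction.
    """
    lam = np.asarray(lam, dtype=float)
    return (np.all(lam >= -tol)
            and np.linalg.norm(A @ lam) <= tol * max(1.0, np.linalg.norm(A))
            and u @ lam < -tol)

# usage: x <= -1 together with -x <= -1 is infeasible; lam = (1, 1) certifies it
A = np.array([[1.0, -1.0]])      # columns are the two constraint normals (n=1, m=2)
u = np.array([-1.0, -1.0])
assert is_farkas_certificate(A, u, [1.0, 1.0])
```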
An ellipsoid algorithm for convex quadratic programming is given, based on the use of the objective function itself for the ellipsoidal iterations. An implemented version is presented and numerical results are discussed.
Newly proposed ellipsoid algorithms provide considerable hope for advancing the relevance of mathematical programming applications. They are extremely suitable for multiobjective programming, goal programming and interactive programming methodologies. Closer cooperation between the decision maker and a model of his problem is likely to be fostered. In this short article the basic ideas of ellipsoid algorithms and views of their potential applicability are communicated.
A variant of the ellipsoid method for nonlinear programming is introduced to enhance the speed of convergence. This variant is based on a new, simple scheme that reduces the ellipsoid volume by using two center cuts generated in two consecutive iterations of the ellipsoid method. Computational tests show a significant improvement in efficiency, and the improvement is more pronounced for larger problems.
This paper presents a bundle method of descent for minimizing a convex (possibly nonsmooth) function f of several variables. At each iteration the algorithm finds a trial point by minimizing a polyhedral model of f subject to an ellipsoid trust region constraint. The quadratic matrix of the constraint, which is updated as in the ellipsoid method, is intended to serve as a generalized “Hessian” to account for “second-order” effects, thus enabling faster convergence. The interpretation of generalized Hessians is largely heuristic, since so far this notion has been made precise by J. L. Goffin only in the solution of linear inequalities. Global convergence of the method is established and numerical results are given.
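The trial-point subproblem described here, minimizing a polyhedral cutting-plane model of f over an ellipsoid trust region, can be posed directly as a small convex program. A hedged sketch using cvxpy (the helper name trial_point and the data layout are assumptions, and the method's Hessian-update and descent-test logic are not reproduced):

```python
import cvxpy as cp
import numpy as np

def trial_point(centers, f_vals, subgrads, x_k, H, radius=1.0):
    """Minimize a polyhedral model of f over an ellipsoid trust region.

    Model:   m(x) = max_i [ f(y_i) + g_i^T (x - y_i) ]   (bundle of cuts)
    Region:  (x - x_k)^T H (x - x_k) <= radius^2          (H positive definite)
    """
    x = cp.Variable(x_k.size)
    cuts = [f_vals[i] + subgrads[i] @ (x - centers[i]) for i in range(len(f_vals))]
    objective = cp.Minimize(cp.maximum(*cuts))
    trust = [cp.quad_form(x - x_k, H) <= radius ** 2]
    cp.Problem(objective, trust).solve()
    return x.value

# toy usage: two cuts of f(x) = |x1| + |x2| gathered near the unit vectors
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
f_vals = [1.0, 1.0]
subgrads = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_new = trial_point(centers, f_vals, subgrads, x_k=np.zeros(2), H=np.eye(2))
```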
Consider the supposedly simple problem of computing a point in a convex set that is conveyed by a separation oracle with no further information (e.g., no domain ball containing or intersecting the set, etc.). The authors' interest in this problem stems from fundamental issues involving the interplay of (i) the computational complexity of computing a point in the set, (ii) the geometry of the set, and (iii) the stability or conditioning of the set under perturbation. Under suitable definitions of these terms, the authors show herein that problem instances with favorable geometry have favorable computational complexity, validating conventional wisdom. The authors also show a converse of this implication by showing that there exist problem instances in certain families characterized by unfavorable geometry that require more computational effort to solve. This in turn leads, under certain assumptions, to a form of equivalence among computational complexity, geometry, and the conditioning of the set. The authors' measures of the geometry, relative to a given reference point, are based on the radius of a certain domain ball whose intersection with the set contains a certain inscribed ball. The geometry of the set is then measured by the radius of the domain ball, the radius of the inscribed ball, and the ratio between these two radii, the latter of which is called the aspect ratio. The aspect ratio arises in the analysis of many algorithms for convex problems, and its importance in convex algorithm analysis has been well known for several decades. However, the presence in our bound of terms involving only the radius of the domain ball and only the radius of the inscribed ball is a bit counterintuitive; nevertheless, we show that the computational complexity must involve these terms in addition to the aspect ratio, even when the aspect ratio itself is small. This lower-bound complexity analysis relies on simple features of the separation oracle model; if we instead assume
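To make the oracle model concrete, the sketch below encodes the contract assumed above: queried at a point, the oracle either reports membership or returns a separating hyperplane, and no domain or inscribed ball information is supplied. The ball example and function names are illustrative only.

```python
import numpy as np

def ball_oracle(center, radius):
    """Separation oracle for the Euclidean ball B(center, radius).

    Given a query point x it returns (True, None) if x lies in the ball,
    otherwise (False, g) where g defines a separating hyperplane:
    g^T y <= g^T x for every y in the ball, with strict inequality
    for interior points of the ball.
    """
    def oracle(x):
        d = x - center
        if np.linalg.norm(d) <= radius:
            return True, None
        g = d / np.linalg.norm(d)      # outward normal of the separating hyperplane
        return False, g
    return oracle

# usage: the cut g can be fed to any cutting-plane scheme (e.g. the ellipsoid
# update sketched earlier) without revealing the ball's center or radius.
oracle = ball_oracle(center=np.array([3.0, 0.0]), radius=1.0)
inside, g = oracle(np.zeros(2))
```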
In this paper, a class of min-max continuous location problems is discussed. After giving a complete characterization of the stationary points, we propose a simple central and deep-cut ellipsoid algorithm to solve these problems for the quasiconvex case. Moreover, an elementary convergence proof of this algorithm and some computational results are presented.
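A deep cut exploits the amount by which the current center violates a constraint to shrink the ellipsoid more than a central cut would. Below is a sketch of the standard deep-cut update in Python (textbook form with assumed names; not necessarily the exact update used by the authors).

```python
import numpy as np

def deep_cut_update(c, P, a, b):
    """Deep-cut ellipsoid update for the constraint a^T x <= b.

    c, P describe the ellipsoid {x : (x - c)^T P^{-1} (x - c) <= 1}.
    alpha measures how deeply the cut slices into the ellipsoid:
    alpha = 0 reproduces the central cut, and alpha >= 1 means the
    ellipsoid and the half-space do not intersect at all.
    """
    n = len(c)
    sqrt_aPa = np.sqrt(a @ P @ a)
    alpha = (a @ c - b) / sqrt_aPa
    if alpha >= 1.0:
        return None, None             # empty intersection: the cut proves infeasibility
    if alpha <= -1.0 / n:
        return c, P                   # the cut does not reduce the ellipsoid; keep it
    g = P @ a / sqrt_aPa
    c_new = c - ((1 + n * alpha) / (n + 1)) * g
    scale = (n * n / (n * n - 1.0)) * (1 - alpha * alpha)
    P_new = scale * (P - (2 * (1 + n * alpha) / ((n + 1) * (1 + alpha))) * np.outer(g, g))
    return c_new, P_new

# usage: the center (5, 5) violates x1 + x2 <= 1, so the cut is deep (alpha > 0)
c, P = deep_cut_update(np.array([5.0, 5.0]), 100.0 * np.eye(2),
                       a=np.array([1.0, 1.0]), b=1.0)
```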
We describe a simplified and strengthened version of Vaidya's volumetric cutting plane method for finding a point in a convex set C ⊂ Rⁿ. At each step the algorithm has a system of linear inequality constraints which defines a polyhedron P ⊃ C, and an interior point x ∈ P. The algorithm then either drops one constraint, or calls an oracle to check if x ∈ C and, if not, obtain a new constraint that separates x from C. Following the addition or deletion of a constraint, the algorithm takes a small number of Newton steps for the volumetric barrier V(.). Progress of the algorithm is measured in terms of changes in V(.). The algorithm is terminated when either it is discovered that x ∈ C, or V(.) becomes large enough to demonstrate that the volume of C must be below some prescribed amount. The complexity of the algorithm compares favorably with that of the ellipsoid method, especially in terms of the number of calls to the separation oracle. Compared to Vaidya's original analysis, we decrease the total number of Newton steps required for termination by a factor of about 1.3 million, while at the same time decreasing the maximum number of constraints used to define P from 10⁷n to 200n.
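The quantity V(.) tracked here is Vaidya's volumetric barrier of the polyhedron P = {x : Ax <= b}. A short sketch of its evaluation is given below (numerical safeguards and the Newton-step machinery are omitted; the example polyhedron is an assumption).

```python
import numpy as np

def volumetric_barrier(A, b, x):
    """Vaidya's volumetric barrier V(x) = (1/2) log det H(x) for {x : A x <= b},

    where H(x) = sum_i a_i a_i^T / s_i(x)^2 is the Hessian of the ordinary
    logarithmic barrier and s_i(x) = b_i - a_i^T x are the slacks at x.
    """
    s = b - A @ x
    if np.any(s <= 0):
        raise ValueError("x must be strictly interior to the polyhedron")
    scaled = A / s[:, None]                  # row i is a_i / s_i
    H = scaled.T @ scaled                    # sum_i a_i a_i^T / s_i^2
    sign, logdet = np.linalg.slogdet(H)
    return 0.5 * logdet

# usage: V grows as the polyhedron containing C is refined, which is how the
# algorithm certifies that the volume of C has become small.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.ones(4)
val = volumetric_barrier(A, b, np.zeros(2))
```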