This paper proposes a novel hybrid immune algorithm (HIA) that overcomes the typical drawbacks of the artificial immune algorithm (AIA), namely its slow running speed and slow convergence. The HIA combines an adaptive AIA with the steepest descent algorithm: it retains the global search ability and global convergence of the immune algorithm while inserting a quasi-descent operator to strengthen local search. Convergence of the HIA with the quasi-descent operator is also established. Numerical experiments show that the HIA improves both running speed and convergence performance. (C) 2011 Elsevier Inc. All rights reserved.
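As a rough illustration of how a descent operator can be embedded in an immune-style search, the following Python sketch refines the best clones of each generation with a few steepest descent steps on a toy objective. It is not the authors' HIA: the function names (hia_minimize, descent_refine), the sphere objective, and all population and step-size parameters are assumptions.

```python
# Minimal sketch (not the authors' code): a population-based immune-style search
# whose best clones are refined by a few steepest descent steps, illustrating the
# general idea of inserting a descent operator into an immune algorithm.
import numpy as np

def sphere(x):                          # toy objective for illustration only
    return float(np.sum(x ** 2))

def num_grad(f, x, h=1e-6):             # central-difference gradient
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def descent_refine(f, x, steps=5, lr=0.1):
    """Quasi-descent operator: a few fixed-step steepest descent moves."""
    for _ in range(steps):
        x = x - lr * num_grad(f, x)
    return x

def hia_minimize(f, dim=5, pop=20, gens=50, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(-5, 5, size=(pop, dim))           # initial antibody population
    for _ in range(gens):
        fit = np.array([f(x) for x in P])
        elites = P[np.argsort(fit)[: pop // 4]]        # clonal selection of best antibodies
        clones = elites[rng.integers(0, len(elites), size=pop)] \
                 + rng.normal(scale=0.5, size=(pop, dim))   # hypermutation
        for k in range(3):                             # local search on a few clones
            clones[k] = descent_refine(f, clones[k])
        P = clones
    best = min(P, key=f)
    return best, f(best)

if __name__ == "__main__":
    x_best, f_best = hia_minimize(sphere)
    print(x_best, f_best)
```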
Considering orthogonal Stiefel manifolds as constraint manifolds, we give an explicit description of a set of local coordinates that also generate a basis for the tangent space at any point of the orthogonal Stiefel manifold. We show how this construction depends on the choice of a submatrix of full rank. Embedding a gradient vector field on an orthogonal Stiefel manifold in the ambient space, we give explicit necessary and sufficient conditions for a critical point of a cost function defined on such manifolds. We explicitly describe the steepest descent algorithm on the orthogonal Stiefel manifold using the ambient coordinates and not the local coordinates of the manifold. We point out the dependence of the recurrence sequence that defines the algorithm on the choice of a full rank submatrix. We illustrate the algorithm in the case of Brockett cost functions.
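The sketch below illustrates steepest descent for a Brockett cost f(X) = tr(X^T A X N) on the Stiefel manifold using the standard tangent-space projection and a QR retraction; it does not reproduce the paper's ambient-coordinate construction or its dependence on a full-rank submatrix, and the step size and problem data are assumptions.

```python
# A generic sketch of Riemannian steepest descent on St(n, p) for a Brockett cost,
# using tangent-space projection and a QR retraction (not the ambient-coordinate
# scheme of the paper). Step size and data are assumptions.
import numpy as np

def brockett(X, A, N):
    return np.trace(X.T @ A @ X @ N)

def riem_grad(X, A, N):
    G = 2 * A @ X @ N                       # Euclidean gradient of tr(X^T A X N)
    sym = (X.T @ G + G.T @ X) / 2
    return G - X @ sym                      # projection onto the tangent space at X

def qr_retract(Y):
    Q, R = np.linalg.qr(Y)
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)   # fix column signs for uniqueness

def steepest_descent_stiefel(A, N, p, steps=500, t=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    X = np.linalg.qr(rng.standard_normal((A.shape[0], p)))[0]   # random start on St(n, p)
    for _ in range(steps):
        X = qr_retract(X - t * riem_grad(X, A, N))
    return X

if __name__ == "__main__":
    n, p = 6, 2
    rng = np.random.default_rng(1)
    M = rng.standard_normal((n, n)); A = (M + M.T) / 2
    N = np.diag([2.0, 1.0])
    X = steepest_descent_stiefel(A, N, p)
    print(brockett(X, A, N))                # (locally) minimized Brockett cost value
```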
In a complex indoor propagation environment, the non-line-of-sight error caused by various obstacles introduces large errors into node positioning. Choosing an appropriate signal transmission method is therefore important for improving indoor node positioning accuracy. In this research, ultra-wideband technology, a baseband with high theoretical positioning accuracy and good real-time performance, is used to transmit indoor signals. The proposed fusion algorithm on the ultra-wideband baseband takes advantage of both the time-difference-of-arrival and angle-of-arrival algorithms, combined through the steepest descent algorithm. The non-line-of-sight signal estimation error is iteratively eliminated to achieve effective positioning accuracy. The experimental results indicate that the novel time-difference-of-arrival/angle-of-arrival fusion algorithm with the steepest descent algorithm can greatly improve node positioning accuracy and stability.
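The following sketch conveys only the general idea of such a fusion: TDOA range differences and AOA bearings are combined into one least-squares cost, which is then minimized by fixed-step steepest descent with a numerical gradient. The anchor layout, noise levels, weights, and step size are assumptions, not the authors' settings.

```python
# A hedged sketch (not the authors' implementation): fuse TDOA range differences
# and AOA bearings into one least-squares cost and refine the position estimate
# with fixed-step steepest descent.
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
rng = np.random.default_rng(0)

dists = np.linalg.norm(anchors - true_pos, axis=1)
tdoa = (dists[1:] - dists[0]) + rng.normal(scale=0.05, size=3)     # range differences to anchor 0
aoa = np.arctan2(true_pos[1] - anchors[:, 1],
                 true_pos[0] - anchors[:, 0]) + rng.normal(scale=0.02, size=4)  # bearings

def cost(p, w_tdoa=1.0, w_aoa=1.0):
    d = np.linalg.norm(anchors - p, axis=1)
    r_tdoa = (d[1:] - d[0]) - tdoa
    r_aoa = np.arctan2(p[1] - anchors[:, 1], p[0] - anchors[:, 0]) - aoa
    r_aoa = np.arctan2(np.sin(r_aoa), np.cos(r_aoa))               # wrap angle residuals
    return w_tdoa * np.sum(r_tdoa ** 2) + w_aoa * np.sum(r_aoa ** 2)

def steepest_descent(p0, lr=0.05, iters=300, h=1e-6):
    p = p0.astype(float)
    for _ in range(iters):
        g = np.array([(cost(p + h * e) - cost(p - h * e)) / (2 * h)
                      for e in np.eye(2)])                         # numerical gradient
        p -= lr * g
    return p

print(steepest_descent(np.array([5.0, 5.0])), "vs true position", true_pos)
```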
Some properties of a class of quasi-differentiable functions (the difference of two finite convex functions) are considered in this paper, and the convergence of the steepest descent algorithm for unconstrained and constrained quasi-differentiable programming is proved.
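As a crude, low-dimensional illustration of steepest descent on a difference of convex functions, the sketch below samples unit directions, estimates one-sided directional derivatives numerically, and takes a backtracking step along the most negative one. The paper's analysis is carried out in terms of quasi-differentials, which this sketch does not compute; the example function and tolerances are assumptions.

```python
# A crude illustrative sketch only: for a DC function f = g - h in 2-D, pick an
# approximately steepest direction by sampling unit directions and estimating
# one-sided directional derivatives, then take a backtracking step.
import numpy as np

def g(x):   # convex piece
    return max(abs(x[0]) + abs(x[1]), x[0] ** 2 + x[1] ** 2)

def h(x):   # convex piece
    return 0.5 * (x[0] ** 2 + x[1] ** 2)

def f(x):   # difference of two convex functions
    return g(x) - h(x)

def approx_steepest_direction(x, n_dirs=64, h_step=1e-5):
    best_d, best_slope = None, 0.0
    for theta in np.linspace(0, 2 * np.pi, n_dirs, endpoint=False):
        d = np.array([np.cos(theta), np.sin(theta)])
        slope = (f(x + h_step * d) - f(x)) / h_step    # one-sided directional derivative
        if slope < best_slope:
            best_d, best_slope = d, slope
    return best_d                                      # None: no sampled descent direction

x = np.array([2.0, -1.5])
for _ in range(200):
    d = approx_steepest_direction(x)
    if d is None:
        break
    t = 1.0
    while f(x + t * d) > f(x) - 1e-4 * t and t > 1e-10:  # backtracking line search
        t *= 0.5
    x = x + t * d
print(x, f(x))
```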
ISBN:
(Print) 9783319959320; 9783319959337
The bat algorithm (BA) is a heuristic algorithm that imitates the echolocation behavior of bats. In view of BA's shortcomings, such as easily falling into local optima, low accuracy, and premature convergence, a new algorithm is proposed that combines the steepest descent (SD) algorithm and the bat algorithm, exploiting their respective advantages, in order to solve systems of non-linear equations effectively. Simulation results show that the proposed algorithm (SD-BA) improves solution accuracy and yields more accurate optimization results, making it an efficient and reliable algorithm for solving systems of non-linear equations.
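A hedged sketch of the combination is given below: a simplified bat-style global search minimizes ||F(x)||^2 for a small nonlinear system, and the current best bat is polished by a few steepest descent steps each generation. It is not the SD-BA of the paper; the test system, frequency range, and step sizes are assumptions.

```python
# A simplified sketch (not the paper's SD-BA): bat-style global search on the
# residual sum of squares of a small nonlinear system, with steepest descent
# polishing of the best bat each generation.
import numpy as np

def F(x):                                  # toy system: x0^2 + x1 - 3 = 0, x0 + x1^2 - 5 = 0
    return np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])

def obj(x):                                # residual sum of squares to minimize
    return float(np.sum(F(x) ** 2))

def grad(x, h=1e-6):                       # central-difference gradient of obj
    return np.array([(obj(x + h * e) - obj(x - h * e)) / (2 * h) for e in np.eye(x.size)])

def sd_ba(dim=2, n_bats=20, gens=100, fmin=0.0, fmax=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3, 3, (n_bats, dim))            # bat positions
    V = np.zeros_like(X)                             # bat velocities
    best = min(X, key=obj).copy()
    for _ in range(gens):
        freq = rng.uniform(fmin, fmax, (n_bats, 1))  # random pulse frequencies
        V = V + (X - best) * freq                    # standard BA velocity update
        X = X + V
        for _ in range(10):                          # steepest descent polish of the best bat
            best = best - 0.02 * grad(best)
        cand = min(X, key=obj)
        if obj(cand) < obj(best):
            best = cand.copy()
    return best

root = sd_ba()
print(root, F(root))                                 # F(root) should be close to zero
```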
We analyze minimization algorithms for L♮-convex functions in discrete convex analysis and establish exact bounds for the number of iterations required by the steepest descent algorithm and its variants. (C) 2014 Elsevier B.V. All rights reserved.
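For tiny dimensions, the steepest descent scheme analyzed here can be written out by brute force: at each iteration, move by +1 or -1 on the subset of coordinates giving the largest decrease, and stop when no such move helps, which for an L♮-convex function certifies global optimality. The example function and starting point below are assumptions.

```python
# Brute-force illustration (fine only for tiny n) of steepest descent for an
# L♮-convex function: move by +/-1 on the best coordinate subset until no move decreases f.
from itertools import combinations
import numpy as np

a = np.array([0, 3, 5, 2])                         # targets for the separable part

def f(x):                                          # L♮-convex: separable convex + convex in differences
    x = np.asarray(x)
    pairs = [(x[i] - x[j]) ** 2 for i in range(len(x)) for j in range(i + 1, len(x))]
    return float(np.sum((x - a) ** 2) + sum(pairs))

def steepest_descent_Lnat(x0):
    x = np.array(x0, dtype=int)
    n = len(x)
    while True:
        best_val, best_step = f(x), None
        for sign in (+1, -1):
            for k in range(1, n + 1):
                for S in combinations(range(n), k):
                    step = np.zeros(n, dtype=int)
                    step[list(S)] = sign
                    if f(x + step) < best_val:
                        best_val, best_step = f(x + step), step
        if best_step is None:                      # no decreasing move: global minimizer reached
            return x
        x = x + best_step

print(steepest_descent_Lnat([0, 0, 0, 0]))
```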
Author:
Murota, K.
Univ Tokyo, Grad Sch Informat Sci & Technol, Tokyo 1138656, Japan
JST, PRESTO, Tokyo 1138656, Japan
This paper investigates the complexity of steepest descent algorithms for two classes of discrete convex functions: M-convex functions and L-convex functions. Simple tie-breaking rules yield complexity bounds that are polynomials in the dimension of the variables and the size of the effective domain. Combining the present results with a standard scaling approach leads to an efficient algorithm for L-convex function minimization.
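A minimal sketch of the M-convex case follows: starting from a feasible point with fixed coordinate sum, repeatedly apply the exchange x - e_i + e_j that decreases f the most, and stop when no exchange helps, which for an M-convex function certifies global optimality. The separable-convex example is an assumption.

```python
# Tiny sketch of steepest descent for an M-convex function: best exchange moves
# x - e_i + e_j until no exchange decreases f; local optimality then certifies
# global optimality.
cost = [lambda t: (t - 4) ** 2, lambda t: 2 * (t - 1) ** 2, lambda t: (t - 3) ** 2]

def f(x):                                 # separable convex on {x >= 0, sum x = const}: M-convex
    if min(x) < 0:
        return float("inf")
    return sum(c(t) for c, t in zip(cost, x))

def steepest_descent_M(x0):
    x = list(x0)
    n = len(x)
    while True:
        best_val, best_move = f(x), None
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                y = x[:]; y[i] -= 1; y[j] += 1        # exchange move x - e_i + e_j
                if f(y) < best_val:
                    best_val, best_move = f(y), (i, j)
        if best_move is None:                         # no improving exchange: global minimizer
            return x
        i, j = best_move
        x[i] -= 1; x[j] += 1

print(steepest_descent_M([8, 0, 0]))       # 8 units redistributed across 3 coordinates
```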
In this paper we consider the classical unconstrained nonlinear multiobjective optimization problem. For such a problem, it is particularly interesting to compute as many points as possible in an effort to approximate the so-called Pareto front. Consequently, to solve the problem we define an "a posteriori" algorithm whose generic iterate is represented by a set of points rather than by a single one. The proposed algorithm takes advantage of a line search with extrapolation along steepest descent directions with respect to (possibly not all of) the objective functions. The sequence of sets of points produced by the algorithm defines a set of "linked" sequences of points. We show that each linked sequence admits at least one limit point (not necessarily distinct from those obtained by other sequences) and that every limit point is Pareto-stationary. We also report numerical results on a collection of multiobjective problems that show the efficiency of the proposed approach compared with more classical ones.
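The sketch below illustrates only the single-point building block of such methods, not the paper's set-based "a posteriori" algorithm: for two objectives, a common steepest descent direction is the negative of the minimum-norm convex combination of the gradients, and a point is Pareto stationary when that combination vanishes. The objectives and step rule are assumptions.

```python
# Simplified multiobjective steepest descent for two objectives: the common
# descent direction is minus the minimum-norm convex combination of the gradients.
import numpy as np

def f1(x): return (x[0] - 1) ** 2 + x[1] ** 2
def f2(x): return (x[0] + 1) ** 2 + x[1] ** 2
def g1(x): return np.array([2 * (x[0] - 1), 2 * x[1]])
def g2(x): return np.array([2 * (x[0] + 1), 2 * x[1]])

def common_descent_direction(x):
    a, b = g1(x), g2(x)
    denom = np.dot(a - b, a - b)
    lam = 0.5 if denom < 1e-12 else np.clip(np.dot(b - a, b) / denom, 0.0, 1.0)
    return -(lam * a + (1 - lam) * b)            # zero iff x is Pareto stationary

x = np.array([2.0, 2.0])
for _ in range(500):
    d = common_descent_direction(x)
    if np.linalg.norm(d) < 1e-8:                 # Pareto-stationary point reached
        break
    x = x + 0.1 * d
print(x, f1(x), f2(x))                           # lands on the Pareto set [-1, 1] x {0}
```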
ISBN:
(Print) 9798350332865
Different optimization approaches for numerically solving unconstrained minimization problems are explored in this paper. A classic optimization problem, the Rosenbrock function, is used as a benchmark to compare the performance of several algorithms, including steepest descent, Newton-Raphson, and the Fletcher-Reeves conjugate gradient method. The performance of four step-size strategies for the steepest descent algorithm, namely fixed step size, variable step size, quadratic fit, and golden section search region elimination, is also investigated. The number of iterations required to converge to the minimum is used as the evaluation metric. The convergence trajectories of the algorithms are presented using level curves.
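In the spirit of this comparison, the sketch below runs steepest descent on the Rosenbrock function with a golden-section line search along the negative gradient; the bracket width, tolerances, and iteration caps are assumptions rather than the paper's settings.

```python
# Steepest descent on the Rosenbrock function with a golden-section line search.
import numpy as np

def rosenbrock(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                     200 * (x[1] - x[0] ** 2)])

def golden_section(phi, a=0.0, b=1.0, tol=1e-6):
    """Minimize the 1-D function phi on [a, b] by golden-section search."""
    r = (np.sqrt(5) - 1) / 2                      # inverse golden ratio
    c, d = b - r * (b - a), a + r * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - r * (b - a)
        else:
            a, c = c, d
            d = a + r * (b - a)
    return (a + b) / 2

def steepest_descent(x0, iters=2000, gtol=1e-6):
    x = np.array(x0, dtype=float)
    for k in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < gtol:
            break
        d = -g / np.linalg.norm(g)                # normalized steepest descent direction
        t = golden_section(lambda s: rosenbrock(x + s * d))
        x = x + t * d
    return x, k

x_star, n_iters = steepest_descent([-1.2, 1.0])
print(x_star, rosenbrock(x_star), "iterations:", n_iters)
```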