A very brief introduction to tropical and idempotent mathematics is presented. Tropical mathematics can be treated as the result of a dequantization of traditional mathematics as the Planck constant tends to zero while taking imaginary values. In the framework of idempotent mathematics, constructions and algorithms are usually simpler than their traditional analogs. We examine in particular algorithms of tropical/idempotent mathematics generated by a collection of basic semiring (or semifield) operations and other "good" operations. Every algorithm of this type has an interval version whose complexity coincides with that of the initial algorithm, and this interval version gives exact interval estimates for the corresponding output data. Algorithms of linear algebra over idempotent semirings are examined. In this case, basic algorithms, like their interval versions, are polynomial. This situation is very different from the traditional linear algebra case, where basic algorithms are polynomial but the corresponding interval versions are NP-hard and interval estimates are not exact. (C) 2012 Elsevier Ltd. All rights reserved.
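The key point of the abstract above can be made concrete with a small sketch (ours, not the paper's): in the max-plus ("tropical") semiring the basic operations max and + are monotone in each argument, so running the same algorithm once on lower endpoints and once on upper endpoints yields exact interval bounds at the same asymptotic cost.

```python
def maxplus_matmul(A, B):
    """Max-plus matrix product: C[i][j] = max_k (A[i][k] + B[k][j])."""
    n, m, p = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def interval_maxplus_matmul(A_lo, A_hi, B_lo, B_hi):
    """Interval version: run the same algorithm on the lower endpoints and
    on the upper endpoints. Monotonicity of max and + makes the resulting
    bounds exact, and the complexity matches the point algorithm."""
    return maxplus_matmul(A_lo, B_lo), maxplus_matmul(A_hi, B_hi)
```

This contrasts with ordinary linear algebra, where subtraction destroys monotonicity and exact interval results become NP-hard, as the abstract notes.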
Algorithms implementing intersection, union, and difference in table algebras are investigated. Modifications of the most widespread algorithms that reduce the amount of computation are proposed. Based on the worst-case and average-case complexities evaluated for the modified algorithms, the fastest algorithm is found for each operation. A program that experimentally confirms the theoretical estimates is developed.
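For orientation, a minimal sketch of the three table operations (our illustration; the paper's modified algorithms are not reproduced here), treating a table as a list of rows: hashing one operand gives expected O(n + m) time versus the O(n * m) naive nested-loop comparison.

```python
def intersection(t1, t2):
    # rows of t1 that also occur in t2 (hash lookup per row)
    s = set(t2)
    return [row for row in t1 if row in s]

def union(t1, t2):
    # all rows of t1 plus the rows of t2 not already present
    seen = set(t1)
    return list(t1) + [row for row in t2 if row not in seen]

def difference(t1, t2):
    # rows of t1 that do not occur in t2
    s = set(t2)
    return [row for row in t1 if row not in s]
```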
In this paper, we consider the heterogeneous rooted tree cover (HRTC) problem, which further generalizes the rooted tree cover problem. Specifically, given a complete graph G = (V, E; w, f; r) and k construction teams with nonuniform construction speeds λ1, λ2, ..., λk, where r ∈ V is a fixed common root, w : E → R+ is an edge-weight function satisfying the triangle inequality, and f : V → R0+ (i.e., R+ ∪ {0}) is a vertex-weight function with f(r) = 0, we are asked to find k trees for these k construction teams, each tree having the same root r and the trees collectively covering all vertices in V. The objective is to minimize the maximum completion time of the k construction teams, where the completion time of each team is the total construction weight of its related tree divided by its construction speed. In addition, substituting k paths for k trees in the HRTC problem, we also consider the heterogeneous rooted path cover (HRPC) problem. Our main contributions are as follows. (1) Given any small constant δ > 0, we first design a 58.3286(1 + δ)-approximation algorithm for the HRTC problem; this algorithm runs in time O(n²(n + log n/δ) + log(w(E) + f(V))). Meanwhile, we present a simple 116.6572(1 + δ)-approximation algorithm for the HRPC problem with the same time complexity. (2) We provide a max{2ρ, 2 + ρ − 2/k}-approximation algorithm for the HRTC problem that runs in time O(n²), where ρ is the ratio of the largest team speed to the smallest one. At the same time, we prove that this max{2ρ, 2 + ρ − 2/k}-approximation algorithm also resolves the HRPC problem.
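The objective function described above can be sketched as follows (our illustration, assuming the construction weight of a tree is the sum of its edge weights plus the vertex weights of the vertices it covers; the trees themselves are inputs, not computed here):

```python
def hrtc_objective(trees, speeds, w, f, root):
    """Makespan of a candidate HRTC solution: each tree is an edge list,
    each team's completion time is its tree's construction weight divided
    by its speed, and the objective is the maximum over teams."""
    times = []
    for edges, lam in zip(trees, speeds):
        vertices = {root} | {v for e in edges for v in e}
        total = sum(w[e] for e in edges) + sum(f[v] for v in vertices)
        times.append(total / lam)
    return max(times)
```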
Full Reversal and Partial Reversal are two well-known routing algorithms that were introduced by Gafni and Bertsekas [IEEE Trans. Commun., 29 (1981), pp. 11-18]. By reversing the directions of some links of the graph, these algorithms transform a connected input DAG (directed acyclic graph) into an output DAG in which each node has at least one path to a distinguished destination node. We present a generalization of these algorithms, called the link reversal (LR) algorithm, based on a novel formalization that assigns binary labels to the links of the input DAG. We characterize the legal link labelings for which LR is guaranteed to establish routes. Moreover, we give an exact expression for the number of steps, called the work complexity, taken by each node in every execution of LR from any legal input graph. Exact expressions for the per-node work complexity of Full Reversal and Partial Reversal follow from our general formula; this is the first exact expression known for Partial Reversal. Our binary link-label formalism facilitates comparison of the work complexity of certain link labelings, including those corresponding to Full Reversal and Partial Reversal, using game theory. We consider labelings in which all incoming links of a given node i are labeled with the same binary value μ(i). Finding initial labelings that induce good work complexity can be viewed as a game in which a player is associated with each node i whose strategy is μ(i), and one tries to minimize the cost, i.e., the number of steps. Modeling the initial labelings as this game allows us to compare the work complexity of Full Reversal and Partial Reversal in a way that provides a rigorous basis for the intuition that Partial Reversal is better than Full Reversal with respect to work complexity.
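As background, a minimal sketch of the classic Full Reversal step (the Gafni-Bertsekas special case, not the generalized labeled LR algorithm of the paper): any node other than the destination with no outgoing link (a "sink") reverses all of its incident links, and repeating this until no sink remains leaves every node with a route to the destination.

```python
def full_reversal(edges, dest):
    """edges: set of directed links (u, v). Repeatedly pick a non-destination
    sink and reverse all links incident to it; return the resulting DAG."""
    edges = set(edges)
    while True:
        nodes = {x for e in edges for x in e}
        outdeg = {x: 0 for x in nodes}
        for u, v in edges:
            outdeg[u] += 1
        sinks = [x for x in nodes if x != dest and outdeg[x] == 0]
        if not sinks:
            return edges
        s = sinks[0]
        # reverse every link incident to the sink (all point into it)
        edges = {(v, u) if v == s else (u, v) for (u, v) in edges}
```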
We examine in this paper a variant of the bin packing problem where it is permissible to fragment the objects while packing them into bins of fixed capacity. We call this the Fragmentable Object Bin Packing problem (FOBP). Fragmentation is associated with a cost, leading to the consumption of additional bin capacity. We show that the problem and its absolute approximation are both NP-complete. This is an interesting problem because if the cost of fragmentation is nullified, the problem can easily be solved optimally, while if fragmentation is not permitted, we get the usual bin packing problem. The application comes from a problem in data path synthesis where it is sometimes necessary to schedule data transfers subject to restrictions arising from the underlying hardware. We show that FOBP reduces to a simplified version of this problem, thereby proving that it, too, is NP-hard. (C) 1998 Elsevier Science Ltd. All rights reserved.
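To illustrate the setting (a hedged sketch of ours, not the paper's algorithm: the heuristic and the parameter name frag_cost are invented for illustration), here is a next-fit-style packer that splits an object when it does not fit, charging a fixed amount of extra bin capacity per fragmentation:

```python
def next_fit_fragment(objects, capacity, frag_cost):
    """Pack fragmentable objects into bins of the given capacity; each
    fragmentation consumes frag_cost units of additional bin capacity."""
    bins, free = [], 0.0
    for size in objects:
        remaining = size
        while remaining > 1e-12:
            if free <= frag_cost:           # no useful room left: open a new bin
                bins.append([])
                free = capacity
            if remaining <= free:           # fits whole, no fragmentation cost
                bins[-1].append(remaining)
                free -= remaining
                remaining = 0.0
            else:                           # fragment: pay frag_cost in capacity
                placed = free - frag_cost
                bins[-1].append(placed)
                free = 0.0
                remaining -= placed
    return bins
```

With frag_cost = 0 fragmentation is free and packing is trivially optimal; forbidding fragmentation recovers ordinary bin packing, which is exactly the dichotomy the abstract points out.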
We study the complexity of a noninterior path-following method for the linear complementarity problem. The method is based on the Chen-Harker-Kanzow-Smale smoothing function. It is assumed that the matrix M is either a P-matrix or symmetric and positive definite. When M is a P-matrix, it is shown that the algorithm finds a solution satisfying the conditions Mx − y + q = 0 and ‖min{x, y}‖∞ ≤ ε in at most O((2 − β)(1 + 1/l(M))² log((1 + (1/2)β)μ₀/ε)) Newton iterations; here, β and μ₀ depend on the initial point, l(M) depends on M, and ε > 0. When M is symmetric and positive definite, the complexity bound is O((2 + β)C² log((1 + (1/2)β)μ₀/ε)), where C = 1 + √n/√(min{λmin(M), 1/λmax(M)}), and λmin(M), λmax(M) are the smallest and largest eigenvalues of M.
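For reference, the Chen-Harker-Kanzow-Smale smoothing function in one common normalization is φμ(a, b) = a + b − √((a − b)² + 4μ²); as μ → 0 it recovers 2·min(a, b), which is why driving the smoothing parameter to zero along the path traces solutions of the complementarity condition min{x, y} = 0. A small sketch:

```python
import math

def chks(a, b, mu):
    """Chen-Harker-Kanzow-Smale smoothing of 2*min(a, b):
    smooth for mu > 0, and equal to 2*min(a, b) at mu = 0."""
    return a + b - math.sqrt((a - b) ** 2 + 4.0 * mu * mu)
```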
In this paper, we establish the polynomial complexity of a primal-dual path-following interior-point algorithm for solving semidefinite optimization (SDO) problems. The proposed algorithm is based on a new kernel function which differs from the existing kernel functions in that it has a double barrier term. With this function we define a new search direction and also a new proximity function for analyzing its complexity. We show that if q1 > q2 > 1, the algorithm has O((q1 + 1) n^((q1+1)/(2(q1−q2))) log(n/ε)) and O((q1 + 1)² (q1 − q2)^(3q1−2q2+1) √n log(n/ε)) complexity results for large- and small-update methods, respectively.
In this paper, we present a primal-dual interior-point algorithm for linearly constrained convex optimization (LCCO). The algorithm uses only full-Newton steps to update the iterates, with an appropriate proximity measure for keeping feasible iterates near the central path during the solution process. A favorable polynomial complexity bound for the algorithm with the short-step method is obtained, namely O(√n log(n/ε)), which is as good as the bound for the linear and convex quadratic optimization analogues. Numerical results are reported to show the efficiency of the algorithm.
Network flow problems with quadratic separable costs appear in a number of important applications, such as approximating input-output matrices in economics, projecting and forecasting traffic matrices in telecommunication networks, and solving nondifferentiable cost flow problems by subgradient algorithms. It is shown that the scaling technique introduced by Edmonds and Karp (1972) in the case of linear cost flows, for deriving a polynomial complexity bound for the out-of-kilter method, may be extended to quadratic cost flows and leads to a polynomial algorithm for this class of problems. The method may be applied to the solution of singly constrained quadratic programs and thus provides an alternative approach to the polynomial algorithm suggested by Helgason, Kennington and Lall (1980).
The purpose of the paper is twofold. First, we want to search for a more efficient sample sort. Secondly, by analyzing a variant of Samplesort, we want to settle an open problem: the average-case analysis of Proportion Extend Sort (PEsort for short). An efficient variant of Samplesort given in the paper is called full sample sort. This algorithm is simple. It has a shorter object code and is almost as fast as PEsort. Theoretically, we show that full sample sort with a linear sampling size performs at most n log n + O(n) comparisons and O(n log n) exchanges on average, but O(n log² n) comparisons in the worst case. This is an improvement on the original Samplesort by Frazer and McKellar, which requires n log n + O(n log log n) comparisons on average and O(n²) comparisons in the worst case. On the other hand, we use the same analytic approach to show that PEsort, with any p > 0, also performs at most n log n + O(n) comparisons on average. Note that Cole and Kandathil analyzed only the case p = 1 of PEsort: their approach applies only to that special case, whereas ours handles the general case of any p > 0. (c) 2006 Elsevier B.V. All rights reserved.
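A generic samplesort skeleton (our illustration, not the paper's tuned full sample sort) shows the structure the analysis refers to: draw a sample, sort it, use the sample elements as pivots to partition the input into buckets, and sort each bucket; with a linear sample size the partitioning step does most of the comparison work.

```python
import bisect
import random

def sample_sort(a, sample_frac=0.25):
    """Samplesort sketch: partition around a sorted random sample,
    then sort each bucket and concatenate."""
    if len(a) < 16:
        return sorted(a)
    sample = sorted(random.sample(a, max(1, int(len(a) * sample_frac))))
    buckets = [[] for _ in range(len(sample) + 1)]
    for x in a:
        # bucket i receives elements between sample[i-1] and sample[i]
        buckets[bisect.bisect_left(sample, x)].append(x)
    out = []
    for b in buckets:
        out.extend(sorted(b))
    return out
```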