Automatic engraving machines have several industrial applications, such as engraving nameplates, identification codes, and bar codes. They can also be used for engraving messages or pictures on decorative items in homes. Additionally, the proposed algorithm can be extended to obtain paths for 3D printers. Engraving carves designs, messages, or pictures into an item. The cost of engraving can be optimized by minimizing the movements of the machine, which consist of engraving motions and air motions. This work proposes a fast algorithm for obtaining an engraving path whose air motions are guaranteed to be at most 150% of the optimum. Optimal engraving paths are commonly obtained by exponential-time algorithms; the proposed algorithm is a 3/2-approximation that obtains the engraving path in polynomial time. The experimental results indicate that the proposed algorithm computes an efficient engraving path in a short time.
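To make the objective concrete: the head engraves each segment end-to-end and travels without engraving ("air motion") between segments, so the quantity being approximated is the total air-motion length. The sketch below is a simple nearest-neighbor baseline for this objective, not the paper's 3/2-approximation; the function name and segment representation are illustrative assumptions.

```python
import math

def air_motion_order(segments, start=(0.0, 0.0)):
    """Greedy nearest-neighbor ordering of engraving segments.

    Each segment is ((x1, y1), (x2, y2)). The head engraves a segment
    end-to-end, then travels (air motion) to the nearest endpoint of a
    not-yet-engraved segment. Illustrative heuristic only; the paper's
    algorithm guarantees air motions within 3/2 of optimal.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    remaining = list(segments)
    pos, order, air = start, [], 0.0
    while remaining:
        # pick the segment whose nearer endpoint is closest to the head
        best = min(remaining,
                   key=lambda s: min(dist(pos, s[0]), dist(pos, s[1])))
        remaining.remove(best)
        a, b = best
        if dist(pos, b) < dist(pos, a):   # enter from the closer end
            a, b = b, a
        air += dist(pos, a)               # travel without engraving
        order.append((a, b))
        pos = b                           # head ends at the far endpoint
    return order, air
```

A greedy baseline like this carries no worst-case guarantee, which is precisely the gap the approximation algorithm closes.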
In two-tiered wireless sensor networks (WSNs), relay node placement is one of the key factors impacting the network energy consumption and the system overhead. In this paper, a novel connectivity-aware approximation algorithm for relay node placement in WSNs is proposed to offer a major step forward in saving system overhead. In particular, a unique local search approximation algorithm (LSAA) is introduced to solve the relay node single cover (RNSC) problem. In this LSAA approach, the sensor nodes are allocated into groups, and a local set cover (SC) for each group is obtained by a local search algorithm; the union of all the local SCs constitutes an SC for the RNSC problem. The approximation ratio and the time complexity of the LSAA are analyzed by rigorous proof. In addition, the LSAA approach has been extended to solve the relay node double cover problem. Then, a relay location selection algorithm (RLSA) is proposed that combines the SC resulting from the LSAA with the minimum spanning tree heuristic to build the high-tier connectivity. Because the RLSA searches for a location nearest to the sink node for each relay node, the high-tier network it builds is denser than those built by existing approaches. As a result, the number of relay nodes added to build the connectivity of the high-tier WSN can be significantly reduced. Simulation results clearly demonstrate that the proposed LSAA outperforms the approaches reported in the literature, and that the RLSA-based algorithm noticeably reduces the number of relay nodes newly deployed for the high-tier connectivity.
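The outline of the LSAA above (group the sensors, cover each group locally, take the union) can be sketched with the classic greedy set-cover rule standing in for the local search step. The function names and data shapes here are illustrative assumptions, not the paper's definitions.

```python
def greedy_set_cover(universe, subsets):
    """Classic greedy set cover: repeatedly pick the subset covering the
    most still-uncovered elements. An illustrative stand-in for the
    local search step of the LSAA."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("universe not coverable by the given subsets")
        cover.append(best)
        uncovered -= set(best)
    return cover

def lsaa_sketch(groups, candidate_sets):
    """Union of local covers, one per group of sensor nodes, mirroring
    the LSAA outline: local SCs whose union covers all sensors."""
    cover = []
    for group in groups:
        cover.extend(greedy_set_cover(group, candidate_sets))
    return cover
```

The union of per-group covers is always a valid global cover; the paper's analysis is about how far that union can be from optimal.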
We consider a network in which a special piece of data called a certificate is issued between pairs of users, and all certificates issued by the users in the network can be represented by a directed graph. For any two users u and v, when u needs to send a message to v securely, v's public-key is needed. The user u can obtain v's public-key using the certificates stored at u and v. We need to disperse the certificates among the users so that whenever a user wants to send a message to another user securely, the certificates stored at the two of them suffice to obtain a reliable public-key. In this paper, given a certificate graph and a set of communication requests, we consider the problem of dispersing the certificates among the nodes of the network such that all communication requests are satisfied and the total number of certificates stored at the nodes is minimized. We formulate this problem as MINIMUM CERTIFICATE DISPERSAL (MCD for short). We show that MCD is NP-complete, even if its input graph is restricted to a strongly connected graph. We also present a polynomial-time 2-approximation algorithm, MinPivot, for strongly connected graphs, when the communication requests satisfy some restrictions. We introduce some graph classes for which MinPivot can compute optimal dispersals, such as trees, rings, and some Cartesian products of graphs.
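The feasibility condition described above (u can obtain v's key iff the certificates stored at u and v together contain a directed u-to-v path in the certificate graph) can be checked directly. The sketch below is only this feasibility check for one request; it is not the MinPivot algorithm, and the data layout is an assumption.

```python
from collections import defaultdict

def request_satisfied(stored, u, v):
    """Check one communication request (u, v): the certificates stored
    at u and v together must contain a directed path from u to v in the
    certificate graph. `stored[w]` is the set of certificate edges
    (a, b) kept at node w. Illustrative check only."""
    edges = stored[u] | stored[v]
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
    # plain DFS from u looking for v over the pooled certificates
    stack, seen = [u], {u}
    while stack:
        x = stack.pop()
        if x == v:
            return True
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return False
```

MCD then asks for an assignment `stored` passing this check for every request while minimizing the total number of stored certificates.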
The boxicity (resp. cubicity) of a graph G(V, E) is the minimum integer k such that G can be represented as the intersection graph of axis-parallel boxes (resp. cubes) in R^k. Equivalently, it is the minimum number of interval graphs (resp. unit interval graphs) on the vertex set V such that the intersection of their edge sets is E. The problem of computing boxicity (resp. cubicity) is known to be inapproximable within an O(n^{1-ε})-factor for any ε > 0 in polynomial time, unless NP = ZPP, even for restricted graph classes like bipartite, co-bipartite, and split graphs. For no well-known graph class of unbounded boxicity is there a known polynomial-time algorithm achieving an n^{1-ε}-factor approximation of boxicity, for any ε > 0. In this paper, we consider the problem of approximating the boxicity (cubicity) of circular arc graphs, i.e., intersection graphs of arcs of a circle. Circular arc graphs are known to have unbounded boxicity, which can be as large as Ω(n). We give a (2 + 1/k)-factor (resp. (2 + ⌈log n⌉/k)-factor) polynomial-time approximation algorithm for computing the boxicity (resp. cubicity) of any circular arc graph, where k >= 1 is the value of the optimum solution. For normal circular arc (NCA) graphs, with an NCA model given, this can be improved to an additive two-approximation algorithm. The time complexity of the algorithms to approximately compute the boxicity (resp. cubicity) is O(mn + n^2) in both these cases, and in O(mn + kn^2) = O(n^3) time we also get the corresponding box (resp. cube) representations, where n is the number of vertices of the graph and m is its number of edges. Our additive two-approximation algorithm works directly for any proper circular arc graph, since their NCA models can be computed in polynomial time. (C) 2014 Elsevier B.V. All rights reserved.
In this paper, we consider the connected k-Center (CkC) problem, which can be seen as the classic k-Center problem with the constraint of internal connectedness, i.e., two nodes in a cluster are required to be connected by an internal path within the same cluster. CkC was first introduced by Ge et al. (ACM Trans Knowl Discov Data 2:7, 2008), who showed NP-completeness for this problem and claimed a polynomial-time approximation algorithm for it. However, the running time of their algorithm might not be polynomial, as one key step of their algorithm involves the computation of an NP-hard problem. We first present a simple polynomial-time greedy-based approximation algorithm for a relaxation of CkC, and we further give an approximation algorithm for CkC itself.
The Multiple Subset Sum Problem (MSSP) is the variant of bin packing in which the number of bins is given and one would like to maximize the overall weight of the items packed in the bins. The problem is also a special case of the multiple knapsack problem in which all knapsacks have the same capacity and the item profits and weights coincide. Recently, polynomial-time approximation schemes have been proposed for MSSP and its generalizations; see A. Caprara, H. Kellerer, and U. Pferschy (SIAM J. on Optimization, Vol. 11, pp. 308-319, 2000; Information Processing Letters, Vol. 73, pp. 111-118, 2000), C. Chekuri and S. Khanna (Proceedings of SODA 2000, pp. 213-222), and H. Kellerer (Proceedings of APPROX, 1999, pp. 51-62). However, these schemes are only of theoretical interest, since they require either the solution of huge integer linear programs or the enumeration of a huge number of possible solutions, for any reasonable value of the required accuracy. In this paper, we present a polynomial-time 3/4-approximation algorithm which also runs fast in practice. Its running time is linear in the number of items and quadratic in the number of bins. The "core" of the algorithm is a procedure to pack triples of "large" items into the bins. As a by-product of our analysis, we get the approximation guarantee for a natural greedy heuristic for the 3-Partitioning Problem.
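For a feel of the problem, a natural first-fit-decreasing greedy for MSSP can be sketched as below. This baseline is not the paper's 3/4-approximation (which packs triples of large items first); the function name and interface are illustrative assumptions.

```python
def greedy_mssp(weights, num_bins, capacity):
    """First-fit-decreasing greedy for the Multiple Subset Sum Problem:
    consider items in decreasing weight and put each into the first bin
    it fits in, skipping items that fit nowhere. Returns the packing
    and the total packed weight (the quantity MSSP maximizes)."""
    bins = [[] for _ in range(num_bins)]
    loads = [0] * num_bins
    for w in sorted(weights, reverse=True):
        for i in range(num_bins):
            if loads[i] + w <= capacity:
                bins[i].append(w)
                loads[i] += w
                break                      # item packed; next item
    return bins, sum(loads)
```

Note the objective is total packed weight, not the number of bins, which is what distinguishes MSSP from ordinary bin packing.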
The two-machine flow-shop sequencing problem with arbitrary release dates of jobs and the minimum makespan criterion is considered. The problem is known to be NP-hard, and the best-known approximation algorithms are that of Potts (Math. Oper. Res. 10 (1985) 576), with a worst-case performance ratio of 5/3 and running time O(n^3 log n), and a polynomial-time approximation scheme of Hall (Proceedings of the 36th Annual Symposium on Foundations of Computer Science, IEEE Comput. Soc. Press, Los Alamitos, 1995, pp. 82-91) that can generate solutions arbitrarily close to the optimum but with a high time requirement. In this paper, we modify Potts' algorithm so that its worst-case performance ratio is reduced to 3/2, while its running time remains O(n^3 log n). (C) 2001 Elsevier Science B.V. All rights reserved.
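As background for the abstract above: without release dates, the two-machine flow shop (F2||Cmax) is solved optimally by Johnson's rule, and Potts-style algorithms for the release-date variant build on sequences of this kind. A minimal sketch (job representation is an assumption):

```python
def johnson_order(jobs):
    """Johnson's rule for the two-machine flow shop without release
    dates, which it solves optimally: jobs with a1 <= a2 go first in
    increasing a1; the rest go last in decreasing a2.
    `jobs` is a list of (a1, a2) processing-time pairs."""
    front = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    back = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: -j[1])
    return front + back

def makespan(seq):
    """Makespan of a job sequence on two machines (zero release dates):
    machine 2 starts each job once machine 1 has finished it and
    machine 2 is free."""
    t1 = t2 = 0
    for a1, a2 in seq:
        t1 += a1                  # machine 1 finishes this job at t1
        t2 = max(t2, t1) + a2     # machine 2 starts when both are ready
    return t2
```

With arbitrary release dates the problem becomes NP-hard, which is why the paper's contribution is an approximation ratio rather than an exact rule.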
In this paper, we consider the P-prize-collecting set cover (P-PCSC) problem, which is a generalization of the set cover problem. In this problem, we are given a set system (U, S), where U is a ground set and S ⊆ 2^U is a collection of subsets satisfying ∪_{S∈S} S = U. Every subset in S has a nonnegative cost, and every element in U has a nonnegative penalty cost and a nonnegative profit. The goal is to find a subcollection C ⊆ S such that the total cost, consisting of the cost of the subsets in C and the penalty cost of the elements not covered by C, is minimized, while at the same time the combined profit of the elements covered by C is at least P, a specified profit bound. Our main work is to obtain a (2f+ε)-approximation algorithm for the P-PCSC problem by using the primal-dual and Lagrangian relaxation methods, where f is the maximum frequency of an element in the given set system (U, S) and ε is a fixed positive number.
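The primal-dual method mentioned above is easiest to see on plain set cover, where it gives the classic f-approximation: raise the dual variable of an uncovered element until some set containing it goes tight, then buy that set. This sketch is background for, not the paper's, (2f+ε) algorithm, which adds penalties and the profit constraint on top; names and interfaces are assumptions.

```python
def primal_dual_set_cover(universe, sets, costs):
    """Classic primal-dual f-approximation for plain set cover.
    `sets` maps a set name to a frozenset of elements; `costs` maps a
    set name to its nonnegative cost."""
    slack = dict(costs)           # remaining slack c(S) - sum of y_e
    y = {e: 0 for e in universe}  # dual variables, one per element
    cover = []
    uncovered = set(universe)
    while uncovered:
        e = next(iter(uncovered))
        containing = [s for s in sets if e in sets[s] and s not in cover]
        tight = min(containing, key=lambda s: slack[s])
        delta = slack[tight]      # raise y_e until `tight` goes tight
        y[e] += delta
        for s in containing:      # raising y_e consumes slack of every
            slack[s] -= delta     # set containing e
        cover.append(tight)
        uncovered -= sets[tight]
    return cover
```

Each element's dual value pays for the sets it appears in at most f times, which is where the factor f in the guarantee comes from.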
The vertex k-center selection problem is a well-known NP-hard minimization problem that cannot be solved in polynomial time within an approximation factor ρ < 2, unless P = NP. Even though there are algorithms that achieve this 2-approximation bound, they perform poorly on most benchmarks compared to some heuristic algorithms. This seems to happen because the 2-approximation algorithms take, at every step, very conservative decisions in order to keep the approximation guarantee. In this paper we propose an algorithm that exploits the same structural properties of the problem that the 2-approximation algorithms use, but in a more relaxed manner: instead of taking the decision that guarantees a 2-approximation, our algorithm takes the best decision near the one that guarantees the 2-approximation. This results in an algorithm with a worse approximation factor (a 3-approximation) that nevertheless outperforms all previously known approximation algorithms on the best-known benchmarks for the problem, namely the pmed instances from OR-Lib (Beasley in J Oper Res Soc 41(11):1069-1072, 1990) and some instances from TSP-Lib (Reinelt in ORSA J Comput 3:376-384, 1991). However, the O(n^4) running time of this algorithm becomes impractical as the input grows. To improve its running time, we modified this algorithm, obtaining a heuristic that outperforms not only all previously known approximation algorithms but also all polynomial heuristics proposed to date.
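One of the conservative 2-approximations the abstract contrasts with is Gonzalez's farthest-first traversal: repeatedly add the vertex farthest from the current centers. A minimal sketch (for the metric case, with an explicit distance matrix; not the paper's 3-approximation):

```python
def farthest_first(dist, k):
    """Gonzalez's farthest-first traversal, a classic 2-approximation
    for metric k-center. `dist` is a symmetric n x n distance matrix.
    Returns the chosen centers and the covering radius they achieve."""
    n = len(dist)
    centers = [0]                 # start from an arbitrary vertex
    d = list(dist[0])             # distance of each vertex to nearest center
    for _ in range(k - 1):
        c = max(range(n), key=lambda v: d[v])   # farthest vertex so far
        centers.append(c)
        d = [min(d[v], dist[c][v]) for v in range(n)]
    return centers, max(d)
```

Each step here is forced by the guarantee; the paper's algorithm instead takes the best decision near the guaranteed one, trading the factor 2 for a 3 with better empirical radii.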
We present a primal-dual ⌈log(n)⌉-approximation algorithm for the version of the asymmetric prize-collecting traveling salesman problem where the objective is to find a directed tour that visits a subset of vertices such that the length of the tour plus the sum of the penalties associated with vertices not in the tour is as small as possible. The previous algorithm for the problem (V.H. Nguyen and T.T. Nguyen in Int. J. Math. Oper. Res. 4(3):294-301, 2012), which is not combinatorial, is based on the Held-Karp relaxation and heuristic methods such as the heuristic of Frieze et al. (Networks 12:23-39, 1982) or the recent heuristic of Asadpour et al. for the ATSP (21st ACM-SIAM Symposium on Discrete Algorithms, 2010). Depending on which of the two heuristics is used, it gives an approximation ratio of 1+⌈log(n)⌉ or a ratio inherited from the ATSP heuristic, respectively. Our algorithm achieves an approximation ratio of ⌈log(n)⌉, which may be weaker, but it represents the first combinatorial approximation algorithm for the Asymmetric Prize-Collecting TSP.
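The objective being approximated above (tour length plus penalties of skipped vertices) can be evaluated directly for any candidate tour. The sketch below only evaluates that objective; it does not implement the ⌈log(n)⌉-approximation, and the interface is an illustrative assumption.

```python
def pc_objective(dist, penalty, tour):
    """Prize-collecting objective of a candidate solution: directed
    tour length plus penalties of the vertices left out.
    `dist[i][j]` is the (possibly asymmetric) arc length, `penalty[v]`
    the penalty of skipping v, `tour` the visited vertices in order."""
    length = sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                 for i in range(len(tour)))
    skipped = set(range(len(dist))) - set(tour)
    return length + sum(penalty[v] for v in skipped)
```

Because `dist` need not be symmetric, the order of the tour matters, which is what makes the asymmetric variant harder than the symmetric prize-collecting TSP.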