The boxicity (resp. cubicity) of a graph G(V, E) is the minimum integer k such that G can be represented as the intersection graph of axis-parallel boxes (resp. cubes) in R^k. Equivalently, it is the minimum number of interval graphs (resp. unit interval graphs) on the vertex set V such that the intersection of their edge sets is E. The problem of computing boxicity (resp. cubicity) is known to be inapproximable, even for restricted graph classes like bipartite, co-bipartite and split graphs, within an O(n^(1-ε))-factor for any ε > 0 in polynomial time, unless NP = ZPP. For any well-known graph class of unbounded boxicity, there is no known polynomial-time algorithm that computes boxicity within an n^(1-ε)-factor, for any ε > 0. In this paper, we consider the problem of approximating the boxicity (cubicity) of circular arc graphs, the intersection graphs of arcs of a circle. Circular arc graphs are known to have unbounded boxicity, which can be as large as Ω(n). We give a (2 + 1/k)-factor (resp. (2 + ⌈log n⌉/k)-factor) polynomial-time approximation algorithm for computing the boxicity (resp. cubicity) of any circular arc graph, where k ≥ 1 is the value of the optimum solution. For normal circular arc (NCA) graphs, with an NCA model given, this can be improved to an additive-two approximation algorithm. The time complexity of the algorithms to approximately compute the boxicity (resp. cubicity) is O(mn + n^2) in both cases, and in O(mn + kn^2) = O(n^3) time we also obtain the corresponding box (resp. cube) representations, where n is the number of vertices of the graph and m is its number of edges. Our additive-two approximation algorithm works directly for any proper circular arc graph, since their NCA models can be computed in polynomial time.
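The definition above can be made concrete with a minimal sketch: given axis-parallel boxes in R^k, one per vertex, two vertices are adjacent exactly when their boxes overlap in every dimension. The Python snippet below illustrates only this definition (the box data is hypothetical); it is not the approximation algorithm of the paper.

```python
from itertools import combinations

def boxes_intersect(a, b):
    """Axis-parallel boxes intersect iff their intervals overlap in every dimension.
    Each box is a list of (lo, hi) intervals, one per dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(a, b))

def intersection_graph(boxes):
    """Return the edge set of the intersection graph of the given boxes."""
    return {(u, v) for u, v in combinations(sorted(boxes), 2)
            if boxes_intersect(boxes[u], boxes[v])}

# Hypothetical 2-dimensional box representation (so this graph has boxicity at most 2).
boxes = {
    'a': [(0, 2), (0, 2)],
    'b': [(1, 3), (1, 3)],
    'c': [(2, 4), (5, 6)],
}
print(intersection_graph(boxes))   # {('a', 'b')}: only a and b overlap in both dimensions
```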
In this paper, we consider the connected k-Center (CkC) problem, which can be seen as the classic k-Center problem with the constraint of internal connectedness, i.e., two nodes in a cluster are required to be connected by an internal path in the same cluster. CkC was first introduced by Ge et al. (ACM Trans Knowl Discov Data 2:7, 2008), who showed the NP-completeness of the problem and claimed a polynomial time approximation algorithm for it. However, the running time of their algorithm might not be polynomial, as one key step of their algorithm involves the computation of an NP-hard problem. We first present a simple polynomial time greedy-based approximation algorithm for a relaxation of CkC. Further, we give an approximation algorithm for CkC.
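As a small illustration of the internal-connectedness constraint only (the graph and clustering are hypothetical, and "internal path" is read here as a path inside the induced subgraph of the cluster), one can check that every cluster induces a connected subgraph; this is a sketch of the definition, not of the algorithms in the paper.

```python
from collections import deque

def cluster_is_connected(adj, cluster):
    """BFS restricted to `cluster`: an internal path may not leave the cluster."""
    cluster = set(cluster)
    if not cluster:
        return True
    start = next(iter(cluster))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in cluster and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == cluster

# Hypothetical graph (adjacency lists) and a tentative 2-clustering.
adj = {1: [2, 3], 2: [1], 3: [1, 4], 4: [3, 5], 5: [4]}
clusters = [{1, 2, 3}, {4, 5}]
print(all(cluster_is_connected(adj, c) for c in clusters))  # True: both clusters are internally connected
```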
We consider a network where a special kind of data called a certificate is issued between pairs of users, and all certificates issued by the users in the network can be represented by a directed graph. For any two users u and v, when u needs to send a message to v securely, v's public key is needed. The user u can obtain v's public key using the certificates stored at u and v. We need to disperse the certificates to the users so that, whenever a user wants to send a message to another user securely, there are enough certificates between them to obtain a reliable public key. In this paper, given a certificate graph and a set of communication requests, we consider the problem of dispersing the certificates among the nodes of the network so that the communication requests are satisfied and the total number of certificates stored at the nodes is minimized. We formulate this problem as MINIMUM CERTIFICATE DISPERSAL (MCD for short). We show that MCD is NP-complete, even if its input graph is restricted to a strongly connected graph. We also present a polynomial-time 2-approximation algorithm, MinPivot, for strongly connected graphs, when the communication requests satisfy some restrictions. We introduce some graph classes for which MinPivot can compute optimal dispersals, such as trees, rings, and some Cartesian products of graphs.
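Reading the abstract, a request (u, v) is satisfied when the certificates stored at u and v together contain a directed path from u to v. The sketch below (hypothetical data; a feasibility check for a given dispersal, not the MinPivot algorithm) verifies a set of requests and reports the dispersal cost.

```python
from collections import defaultdict, deque

def satisfies(dispersal, u, v):
    """Request (u, v) is satisfied if the union of certificates stored at u and v
    contains a directed path from u to v."""
    edges = dispersal[u] | dispersal[v]
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return True
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return v in seen

# Hypothetical certificate graph edges dispersed to nodes.
dispersal = {
    'u': {('u', 'w')},
    'v': {('w', 'v')},
    'w': set(),
}
requests = [('u', 'v')]
print(all(satisfies(dispersal, a, b) for a, b in requests))   # True
print(sum(len(s) for s in dispersal.values()))                # dispersal cost: 2 certificates
```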
In this paper, we consider the P-prize-collecting set cover (P-PCSC) problem, which is a generalization of the set cover problem. In this problem, we are given a set system (U, S), where U is a ground set and S ⊆ 2^U is a collection of subsets satisfying ∪_{S∈S} S = U. Every subset in S has a nonnegative cost, and every element in U has a nonnegative penalty cost and a nonnegative profit. Our goal is to find a subcollection C ⊆ S such that the total cost, consisting of the cost of subsets in C and the penalty cost of the elements not covered by C, is minimized and, at the same time, the combined profit of the elements covered by C is at least P, a specified profit bound. Our main work is to obtain a (2f + ε)-approximation algorithm for the P-PCSC problem by using the primal-dual and Lagrangian relaxation methods, where f is the maximum frequency of an element in the given set system (U, S) and ε is a fixed positive number.
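To make the objective concrete, the sketch below (hypothetical instance) evaluates a candidate subcollection C: its total cost is the cost of the chosen subsets plus the penalties of the uncovered elements, and it is feasible when the covered profit reaches the bound P. This only evaluates a solution; it is not the primal-dual/Lagrangian algorithm of the paper.

```python
def evaluate(universe, sets, costs, penalties, profits, C, P):
    """Return (total_cost, feasible) for a subcollection C of set names."""
    covered = set().union(*(sets[name] for name in C)) if C else set()
    cost = sum(costs[name] for name in C)
    penalty = sum(penalties[e] for e in universe - covered)
    profit = sum(profits[e] for e in covered)
    return cost + penalty, profit >= P

# Hypothetical P-PCSC instance.
universe = {1, 2, 3, 4}
sets = {'S1': {1, 2}, 'S2': {2, 3}, 'S3': {4}}
costs = {'S1': 3, 'S2': 2, 'S3': 5}
penalties = {1: 1, 2: 1, 3: 2, 4: 4}
profits = {1: 2, 2: 2, 3: 1, 4: 3}
print(evaluate(universe, sets, costs, penalties, profits, {'S1', 'S2'}, P=5))
# (9, True): subset costs 3 + 2, penalty 4 for uncovered element 4, covered profit 5 >= 5
```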
The vertex k-center selection problem is a well-known NP-hard minimization problem that cannot be solved in polynomial time within an approximation factor ρ < 2, unless P = NP. Even though there are algorithms that achieve this 2-approximation bound, they perform poorly on most benchmarks compared to some heuristic algorithms. This seems to happen because the 2-approximation algorithms take, at every step, very conservative decisions in order to keep the approximation guarantee. In this paper we propose an algorithm that exploits the same structural properties of the problem that the 2-approximation algorithms use, but in a more relaxed manner. Instead of taking the decision that guarantees a 2-approximation, our algorithm takes the best decision near the one that guarantees the 2-approximation. This results in an algorithm with a worse approximation factor (a 3-approximation), but one that outperforms all previously known approximation algorithms on the most well-known benchmarks for the problem, namely the pmed instances from OR-Lib (Beasley in J Oper Res Soc 41(11):1069-1072, 1990) and some instances from TSP-Lib (Reinelt in ORSA J Comput 3:376-384, 1991). However, the O(n^4) running time of this algorithm becomes impractical as the input grows. In order to improve its running time, we modified this algorithm, obtaining a heuristic that outperforms not only all previously known approximation algorithms, but also all polynomial heuristics proposed to date.
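For context, the flavor of greedy reasoning behind 2-approximation algorithms for k-center can be illustrated with the classic farthest-first (Gonzalez-style) traversal, which is a 2-approximation on metric instances. The sketch below (hypothetical distance matrix) shows that generic routine; it is not the 3-approximation algorithm or the heuristic proposed in the paper.

```python
def farthest_first_centers(dist, k):
    """Gonzalez-style greedy: repeatedly open the vertex farthest from the current centers.
    `dist` is a symmetric metric distance matrix (list of lists); returns centers and radius."""
    n = len(dist)
    centers = [0]                      # arbitrary first center
    d_to_centers = dist[0][:]          # distance of each vertex to its nearest center
    while len(centers) < k:
        far = max(range(n), key=lambda v: d_to_centers[v])
        centers.append(far)
        d_to_centers = [min(d_to_centers[v], dist[far][v]) for v in range(n)]
    return centers, max(d_to_centers)  # covering radius is at most twice the optimum on metrics

# Hypothetical 4-point metric.
dist = [
    [0, 2, 9, 10],
    [2, 0, 8, 9],
    [9, 8, 0, 3],
    [10, 9, 3, 0],
]
print(farthest_first_centers(dist, k=2))   # ([0, 3], 3)
```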
One ultimate goal of wireless sensor networks is to collect the sensed data from a set of sensors and transmit them to some sink node via a data gathering tree. In this work, we are interested in data aggregation, where the sink node wants to know the value of a certain function of all sensed data, such as the minimum, maximum, average, or sum. Given a data aggregation tree, sensors periodically receive messages from their children, merge them with their own packet, and send the new packet to their parent. The problem of finding an aggregation tree with the maximum lifetime has been proved to be NP-hard and can be generalized to finding a spanning tree with the minimum maximum vertex load, where the load of a vertex is a nondecreasing function of its degree in the tree. Although there is a rich body of research on these problems, existing approaches either fail to meet a theoretical bound or need high running time. In this paper, we develop a novel algorithm with provable performance bounds for the generalized problem. We show that the running time of our algorithm is in the order of O(mn·α(m, n)), where m is the number of edges, n is the number of sensors, and α is the inverse Ackermann function. Though our work is motivated by applications in sensor networks, the proposed algorithm is general enough to handle a wide range of degree-oriented spanning tree problems, including the bounded degree spanning tree problem and the minimum degree spanning tree problem. When applied to these problems, it incurs a lower computational cost in comparison to existing methods. Simulation results validate our theoretical analysis.
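The generalized objective, minimizing the maximum vertex load of a spanning tree where a vertex's load is a nondecreasing function of its tree degree, can be stated compactly in code. The sketch below (hypothetical tree and load function) merely evaluates this objective for a given tree; it is not the proposed algorithm.

```python
from collections import Counter

def max_vertex_load(tree_edges, load):
    """Maximum load over vertices, where load(d) is nondecreasing in the tree degree d."""
    deg = Counter()
    for u, v in tree_edges:
        deg[u] += 1
        deg[v] += 1
    return max(load(d) for d in deg.values())

# Hypothetical spanning tree of 5 sensors and a load function (e.g. energy grows with degree).
tree = [(0, 1), (1, 2), (1, 3), (3, 4)]
print(max_vertex_load(tree, load=lambda d: 2 * d + 1))   # vertex 1 has degree 3 -> load 7
```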
Considerable attention has been paid to sweep coverage in Wireless Sensor Networks (WSNs) over the past few years. In sweep coverage, mobile sensor nodes are scheduled to visit Points of Interest (POIs) periodically. Due to the limited energy of mobile sensors, energy consumption poses an important challenge in sweep coverage. In this paper, we extend Energy Restricted Sweep Coverage to General Energy Restricted Sweep Coverage (GERSC) by assuming that the unit energy consumption of mobile sensors varies across different road sections. The goal of GERSC is likewise to minimize the required number of mobile sensors in sweep coverage under the energy constraint. In GERSC, the approximation ratio of the state-of-the-art algorithm, ERSweepCoverage, is no longer guaranteed. Therefore, we devise a constant-factor approximation algorithm named IERSC for GERSC. The approximation ratio of IERSC is β[3α/min(θ, 1−θ) + 1], where θ is a predefined parameter of IERSC ranging from 0 to 1. With θ = 1/2, IERSC has the best approximation ratio of (6α + 1)β. Finally, promising experimental results demonstrate the effectiveness and efficiency of the proposed algorithm.
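The dependence of the stated ratio on θ is easy to check numerically: β[3α/min(θ, 1−θ) + 1] is minimized at θ = 1/2, where it equals (6α + 1)β. A quick sketch (the values of α and β are hypothetical placeholders):

```python
def iersc_ratio(alpha, beta, theta):
    """Approximation ratio of IERSC as stated in the abstract."""
    return beta * (3 * alpha / min(theta, 1 - theta) + 1)

alpha, beta = 1.5, 1.2          # hypothetical constants of the underlying instance
for theta in (0.25, 0.5, 0.75):
    print(theta, iersc_ratio(alpha, beta, theta))
# theta = 0.5 gives the minimum: beta * (6 * alpha + 1) = 1.2 * 10 = 12.0
```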
We present a primal-dual ⌈log(n)⌉-approximation algorithm for the version of the asymmetric prize-collecting traveling salesman problem in which the objective is to find a directed tour that visits a subset of the vertices such that the length of the tour plus the sum of penalties associated with the vertices not in the tour is as small as possible. The previous algorithm for the problem (V.H. Nguyen and T.T. Nguyen in Int. J. Math. Oper. Res. 4(3):294-301, 2012), which is not combinatorial, is based on the Held-Karp relaxation and heuristic methods such as the heuristic of Frieze et al. (Frieze et al. in Networks 12:23-39, 1982) or the more recent heuristic of Asadpour et al. for the ATSP (Asadpour et al. in 21st ACM-SIAM Symposium on Discrete Algorithms, 2010). Depending on which of the two heuristics is used, it gives an approximation ratio of 1 + ⌈log(n)⌉ or a ratio derived from the Asadpour et al. heuristic, respectively. Our algorithm achieves an approximation ratio of ⌈log(n)⌉, which is weaker than the latter but represents the first combinatorial approximation algorithm for the Asymmetric Prize-Collecting TSP.
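The objective being approximated, tour length plus penalties of the skipped vertices, is simple to state precisely. The sketch below (hypothetical asymmetric distance matrix and penalties) only evaluates a candidate tour; it is not the primal-dual algorithm.

```python
def pc_tsp_objective(dist, penalties, tour):
    """Length of the directed tour (visiting `tour` in order and returning to the start)
    plus the penalties of the vertices not on the tour."""
    length = sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))
    skipped = set(range(len(dist))) - set(tour)
    return length + sum(penalties[v] for v in skipped)

# Hypothetical asymmetric instance on 4 vertices.
dist = [
    [0, 2, 5, 9],
    [3, 0, 4, 8],
    [6, 7, 0, 1],
    [2, 9, 3, 0],
]
penalties = [0, 5, 1, 4]
print(pc_tsp_objective(dist, penalties, tour=[0, 1, 3]))   # tour length 2 + 8 + 2 = 12, plus penalty 1 for vertex 2 -> 13
```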
The hitting set problem is a generalization of the vertex cover problem to hypergraphs. Xu et al. (Theor Comput Sci 630:117-125, 2016) presented a primal-dual algorithm for the submodular vertex cover problem with linear/submodular penalties. Motivated by their work, we study the submodular hitting set problem with linear penalties (SHSLP). The goal of the SHSLP is to select a vertex subset in the hypergraph to cover some hyperedges and penalize the uncovered ones such that the total cost of covering and penalty is minimized. Based on the primal-dual scheme, we obtain a k-approximation algorithm for the SHSLP, where k is the maximum number of vertices in all hyperedges.
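For intuition about the primal-dual scheme in the much simpler linear-cost, no-penalty special case, the textbook dual-ascent routine for hitting set raises a hyperedge's dual until some vertex becomes tight and then buys every tight vertex, which yields an f-approximation where f is the maximum hyperedge size (the quantity called k above). The sketch below is that classic routine under these simplifying assumptions, not the paper's algorithm for submodular costs with penalties.

```python
def primal_dual_hitting_set(vertex_cost, hyperedges):
    """Dual ascent for linear-cost hitting set: for each uncovered hyperedge,
    raise its dual until a vertex becomes tight, and add every tight vertex."""
    slack = dict(vertex_cost)          # remaining slack c(v) minus duals of edges containing v
    chosen = set()
    for edge in hyperedges:
        if chosen & set(edge):
            continue                   # hyperedge already hit
        delta = min(slack[v] for v in edge)   # raise this edge's dual as far as possible
        for v in edge:
            slack[v] -= delta
            if slack[v] == 0:
                chosen.add(v)          # vertex became tight: buy it
    return chosen

# Hypothetical instance: 3 hyperedges over 4 vertices, maximum hyperedge size f = 2.
costs = {'a': 2, 'b': 3, 'c': 1, 'd': 4}
edges = [('a', 'b'), ('b', 'c'), ('c', 'd')]
print(primal_dual_hitting_set(costs, edges))
# {'a', 'b', 'c'}: every hyperedge is hit; cost 6 is within a factor f = 2 of the optimum {'a', 'c'}
```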
A space-efficient approximation algorithm for the grammar-based compression problem, which asks, for a given string, to find a smallest context-free grammar deriving the string, is presented. For an input of length n and an optimum CFG size g, the algorithm consumes only O(g log g) space and O(n log* n) time to achieve an O((log* n) log n) approximation ratio to the optimum compression, where log* n is the maximum number of logarithms satisfying log log ... log n > 1. This ratio is thus regarded as almost O(log n), which is the currently best approximation ratio. While g depends on the string, it is known that g = Ω(log n) and g = O(n/log_k n) for strings over a k-letter alphabet [12].
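The quantity log* n that appears in the ratio is the iterated logarithm defined in the abstract: the number of times a logarithm can be applied to n before the value drops to 1 or below. A quick sketch of that definition (base-2 logarithms assumed):

```python
from math import log2

def log_star(n):
    """Iterated logarithm: how many times log2 can be applied before the value is <= 1."""
    count = 0
    x = n
    while x > 1:
        x = log2(x)
        count += 1
    return count

for n in (2, 16, 65536):
    print(n, log_star(n))
# log*(2) = 1, log*(16) = 3, log*(65536) = 4; log* grows extremely slowly
```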