We consider the a priori traveling repairman problem, which is a stochastic version of the classic traveling repairman problem. Given a metric (V, d) with a root r ∈ V, the traveling repairman problem (TRP) involves finding a tour originating from r that minimizes the sum of arrival times at all vertices. In its a priori version, we are also given independent probabilities of each vertex being active. We want to find a master tour τ originating from r and visiting all vertices. The objective is to minimize the expected sum of arrival times at all active vertices, when τ is shortcut over the inactive vertices. We obtain the first constant-factor approximation algorithm for a priori TRP under independent non-uniform probabilities. Our result provides a general reduction from non-uniform to uniform probabilities, and uses (in a black-box manner) an existing approximation algorithm for a priori TRP under uniform probabilities. (C) 2020 Elsevier B.V. All rights reserved.
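To make the objective concrete, the following small sketch (illustrative only, not taken from the paper) estimates the a priori TRP cost of a given master tour by Monte Carlo sampling: each non-root vertex is activated independently with its given probability, the master tour is shortcut over the inactive vertices, and the arrival times of the active vertices are summed. The function name and the distance-matrix representation are assumptions made for the example.

```python
import random

def a_priori_trp_cost(tour, dist, prob, trials=10000, seed=0):
    """Monte Carlo estimate of the a priori TRP objective.

    tour : list of vertex indices, starting at the root r = tour[0]
    dist : dist[u][v] is the metric distance between u and v
    prob : prob[v] is the independent probability that v is active
    """
    rng = random.Random(seed)
    root = tour[0]
    total = 0.0
    for _ in range(trials):
        # Sample the active set; the root is always the starting point.
        active = [v for v in tour[1:] if rng.random() < prob[v]]
        # Shortcut the master tour over inactive vertices and
        # accumulate the arrival time of each active vertex.
        t, prev = 0.0, root
        for v in active:          # active vertices keep the tour order
            t += dist[prev][v]
            total += t
            prev = v
    return total / trials
```

Because shortcutting preserves the order of the master tour, iterating over the active vertices in tour order is enough to compute every arrival time.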
We consider the NP-hard twin robot scheduling problem, which was introduced by Erdogan et al. (Naval Res Logist (NRL) 61(2):119-130, 2014). Here, two moving robots positioned at the opposite ends of a rail have to perform automated storage and retrieval jobs at given positions along the gantry rail with a non-crossing constraint. The objective is to minimize the makespan. We extend the original problem by considering pickup and delivery times and present exact and approximation algorithms with a performance ratio of approximately 1.1716 for large instances. Further, we compare the presented algorithms in a comprehensive numerical study.
ISBN (print): 9781577358763
k-center is one of the most popular clustering models. While it admits a simple 2-approximation in polynomial time in general metrics, the Euclidean version is NP-hard to approximate within a factor of 1.93, even in the plane, if one insists that the dependence on k in the running time be polynomial. Without this restriction, a classic algorithm yields a 2^{O((k log k)/ε)} dn-time (1 + ε)-approximation for Euclidean k-center, where d is the dimension. We give a faster algorithm for small dimensions: roughly speaking, an O*(2^{O((1/ε)^{O(d)} k^{1-1/d} log k)})-time (1 + ε)-approximation. In particular, the running time is roughly O*(2^{O((1/ε)^{O(1)} √k log k)}) in the plane. We complement our algorithmic result with a matching hardness lower bound. We also consider a well-studied generalization of k-center, called Non-uniform k-center (NUkC), where clusters may have different radii. NUkC is NP-hard to approximate within any factor, even in the Euclidean case. We design a 2^{O(k log k)} n^2-time 3-approximation for NUkC in general metrics, and a 2^{O((k log k)/ε)} dn-time (1 + ε)-approximation for Euclidean NUkC. The latter time bound matches the bound for k-center.
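For background, the "simple 2-approximation in general metrics" referred to above is typically the greedy farthest-point procedure attributed to Gonzalez: repeatedly open a center at the point farthest from the centers chosen so far. A minimal sketch follows; the list-plus-distance-callback interface is an assumption made for the example.

```python
def greedy_k_center(points, k, dist):
    """Farthest-point greedy: the classic 2-approximation for k-center.

    points : list of points
    k      : number of centers to open
    dist   : dist(p, q) returns the metric distance between p and q
    """
    centers = [points[0]]                     # arbitrary first center
    # Distance of every point to its closest chosen center so far.
    d_to_center = [dist(p, centers[0]) for p in points]
    while len(centers) < k:
        # Open a center at the point farthest from the current centers.
        i = max(range(len(points)), key=lambda j: d_to_center[j])
        centers.append(points[i])
        d_to_center = [min(d_to_center[j], dist(points[j], points[i]))
                       for j in range(len(points))]
    radius = max(d_to_center)                 # at most twice the optimum
    return centers, radius
```

For instance, in the Euclidean plane one would pass dist = lambda p, q: ((p[0]-q[0])**2 + (p[1]-q[1])**2) ** 0.5.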
A vertex cover of a graph is a set of vertices of the graph such that every edge has at least one endpoint in it. In this work, we study Weighted Vertex Cover with solution size as a parameter. Formally, in the (k, W)-Vertex Cover problem, given a graph G, an integer k, a positive rational W, and a weight function w : V(G) → Q^+, the question is whether G has a vertex cover of size at most k and weight at most W, with k being the parameter. An (a, b)-bi-criteria approximation algorithm for (k, W)-Vertex Cover either produces a vertex cover S such that |S| ≤ ak and w(S) ≤ bW, or decides that there is no vertex cover of size at most k and weight at most W. We obtain the following results.
- A simple (2, 2)-bi-criteria approximation algorithm for (k, W)-Vertex Cover in polynomial time, obtained by modifying the standard LP-rounding algorithm (one such rounding is sketched after this abstract).
- A simple exact parameterized algorithm for (k, W)-Vertex Cover running in O*(1.4656^k) time.
- A (1 + ε, 2)-approximation algorithm for (k, W)-Vertex Cover running in O*(1.4656^{(1-ε)k}) time.
- A (1.5, 1.5)-approximation algorithm for (k, W)-Vertex Cover running in O*(1.414^k) time.
- A (2 − δ, 2 − δ)-approximation algorithm for (k, W)-Vertex Cover running in O*( Σ_{i = δk(1−2δ)/(1+2δ)}^{δk(1−2δ)/(2δ)} C(δk + i, δk − 2iδ/(1−2δ)) ) time for any δ < 0.5, where C(·,·) denotes a binomial coefficient. For example, for the (1.75, 1.75)- and (1.9, 1.9)-approximation algorithms, we get running times of O*(1.272^k) and O*(1.151^k) respectively.
Our algorithms (expectedly) do not improve upon the running times of the existing algorithms for the unweighted version of Vertex Cover. When compared to algorithms for the weighted version, our algorithms are, to the best of our knowledge, the first ones that work with arbitrary weights, and they perform well when the solution size is much smaller than the total weight of the desired solution.
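The first result above modifies the standard LP-rounding algorithm. One natural reading, sketched below under stated assumptions rather than as the paper's exact procedure, adds the budget constraint Σ_v x_v ≤ k to the usual vertex cover LP, declares failure if the optimal LP weight already exceeds W, and rounds up every variable with x_v ≥ 1/2; any returned cover then has at most 2k vertices and weight at most 2W. The scipy-based formulation and the function name are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def bicriteria_vertex_cover(n, edges, w, k, W):
    """(2,2)-bi-criteria sketch for (k,W)-Vertex Cover via LP rounding.

    Returns a set S with |S| <= 2k and w(S) <= 2W, or None if no
    vertex cover of size <= k and weight <= W exists.
    """
    # Variables x_0..x_{n-1} in [0,1]; minimize sum_v w_v x_v.
    # Edge constraints  x_u + x_v >= 1  ->  -x_u - x_v <= -1.
    A_ub, b_ub = [], []
    for (u, v) in edges:
        row = np.zeros(n)
        row[u] = row[v] = -1.0
        A_ub.append(row)
        b_ub.append(-1.0)
    # Budget constraint  sum_v x_v <= k.
    A_ub.append(np.ones(n))
    b_ub.append(float(k))
    res = linprog(c=w, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    if not res.success or res.fun > W + 1e-9:
        return None   # the LP relaxation already rules out (k, W)
    # Every integral cover of size <= k and weight <= W is LP-feasible,
    # and each edge forces some endpoint to satisfy x >= 1/2.
    return {v for v in range(n) if res.x[v] >= 0.5}
```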
Training a one-node neural network with the ReLU activation function via optimization, which we refer to as the ON-ReLU problem, is a fundamental problem in machine learning. In this paper, we begin by proving the NP-hardness of the ON-ReLU problem. We then present an approximation algorithm to solve the ON-ReLU problem, whose running time is O(n^k), where n is the number of samples and k is a predefined integer constant serving as an algorithm parameter. We analyze the performance of this algorithm under two regimes and show that: (1) given any arbitrary set of training samples, the algorithm guarantees an (n/k)-approximation for the ON-ReLU problem; to the best of our knowledge, this is the first time that an algorithm guarantees an approximation ratio for an arbitrary data scenario; thus, in the ideal case (i.e., when the training error is zero), the approximation algorithm achieves the globally optimal solution for the ON-ReLU problem; and (2) given training samples with Gaussian noise, the same approximation algorithm achieves a much better asymptotic approximation ratio, which is independent of the number of samples n. Extensive numerical studies show that our approximation algorithm can perform better than the gradient descent algorithm. Our numerical results also show that the solution of the approximation algorithm can provide a good initialization for gradient descent, which can significantly improve the performance.
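For reference, the gradient-descent baseline that the abstract compares against can be set up as follows. The sketch assumes a squared training loss over y ≈ ReLU(w·x), which is an assumption on my part since the abstract does not state the loss, and the function names are hypothetical.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def on_relu_loss(w, X, y):
    """Squared training error of a one-node ReLU network (assumed loss)."""
    return np.mean((relu(X @ w) - y) ** 2)

def train_one_node_relu(X, y, lr=0.1, steps=2000, seed=0):
    """Plain gradient-descent baseline for the ON-ReLU problem."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.1, size=d)
    for _ in range(steps):
        pre = X @ w
        # Subgradient of the squared loss; ReLU has derivative 1 where pre > 0.
        grad = 2.0 / n * X.T @ ((relu(pre) - y) * (pre > 0))
        w -= lr * grad
    return w, on_relu_loss(w, X, y)
```

As the abstract notes, such a solver is sensitive to initialization, which is why seeding it with the output of the approximation algorithm can help.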
In this paper we study the min-max cycle cover problem with neighborhoods, which is to find K cycles that collaboratively visit n Points of Interest (POIs) in a 2D space such that the length of the longest cycle among the K cycles is minimized. The problem arises from many applications, including employing mobile sinks to collect sensor data in wireless sensor networks (WSNs), dispatching charging vehicles to recharge sensors in rechargeable sensor networks, scheduling Unmanned Aerial Vehicles (UAVs) to monitor disaster areas, etc. For example, consider the application of employing multiple mobile sinks to collect sensor data in WSNs. If some mobile sink has a long data collection tour while the other mobile sinks have short tours, this incurs a long data collection latency for the sensors on the long tour. Existing studies assumed that a vehicle needs to move to the location of a POI to serve it. We instead assume that the vehicle is able to serve a POI as long as the vehicle is within the neighborhood area of the POI. One such example is that a mobile sink in a WSN can receive data from a sensor if it is within the transmission range of the sensor (e.g., within 50 meters). Ignoring neighborhoods therefore incurs a longer traveling length. On the other hand, most existing studies only took into account the vehicle traveling time but ignored the POI service time. Consequently, although the length of some vehicle tour may be short, the total amount of time consumed by the vehicle on the tour can be prohibitively long, due to the many POIs on the tour. In this paper we first study the min-max cycle cover problem with neighborhoods, by taking both neighborhoods and POI service time into consideration. We then propose novel approximation algorithms for the problem, by exploring its combinatorial properties. We finally evaluate the proposed algorithms via experimental simulations. Experimental results show that the propo
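To pin down the objective, the evaluator below (an illustrative sketch with assumed names, not one of the paper's algorithms) computes the min-max cost of K given cycles when both neighborhoods and POI service times are taken into account: travel time is taken proportional to tour length, each POI must lie within the neighborhood radius of some waypoint of the cycle assigned to serve it, and its service time is charged to that cycle.

```python
import math

def euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tour_length(waypoints):
    """Length of the closed cycle through the given waypoints."""
    return sum(euclid(waypoints[i], waypoints[(i + 1) % len(waypoints)])
               for i in range(len(waypoints)))

def min_max_cost(tours, assignment, service_time, radius, pois):
    """Min-max objective: the longest (travel + service) cost over K tours.

    tours        : list of K waypoint lists (closed cycles in the plane)
    assignment   : assignment[j] = index of the tour serving POI j
    service_time : service_time[j] = time needed to serve POI j
    radius       : neighborhood radius within which a POI can be served
    pois         : POI locations
    """
    # Neighborhood requirement: each POI lies within `radius` of some
    # waypoint of the tour it is assigned to.
    for j, p in enumerate(pois):
        assert min(euclid(p, w) for w in tours[assignment[j]]) <= radius + 1e-9
    # Travel cost is taken as tour length (unit speed assumed), and each
    # POI's service time is added to the cost of its serving tour.
    cost = [tour_length(t) for t in tours]
    for j, t_idx in enumerate(assignment):
        cost[t_idx] += service_time[j]
    return max(cost)
```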
We study a constrained version of the GEOMETRIC HITTING SET problem where we are given a set of points, partitioned into pairwise disjoint subsets, and a set of intervals. The objective is to hit all the intervals with a minimum number of points such that if we select a point from a subset, we must select all the points from that subset. We consider two special cases of the problem where each subset has at most 2 or 3 points. If each subset contains at most 2 points and the intervals are disjoint, we show that the problem admits a polynomial-time algorithm. In contrast, if each subset contains at most t points, where t ≥ 2, and the intervals are overlapping, we show that the problem becomes NP-hard. Further, when each subset contains at most t points, where t ≥ 3, and the intervals are disjoint, we prove that the problem is NP-hard, and we provide two constant-factor approximation algorithms for it. We also study the problem from the parameterized complexity perspective. If the intervals are disjoint, we prove that the problem is in FPT when parameterized by the size of the solution. We complement this result with a lower bound on the kernel size for disjoint intervals, and we also provide a polynomial kernel when the size of every subset is bounded by a constant.
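To make the selection constraint concrete, the brute-force solver below (purely illustrative, exponential in the number of subsets, and not one of the paper's algorithms) enumerates which subsets to take as whole blocks and returns a minimum-cardinality set of points hitting all intervals.

```python
def min_constrained_hitting_set(blocks, intervals):
    """Brute force for the constrained GEOMETRIC HITTING SET on a line.

    blocks    : list of pairwise-disjoint point sets; choosing any point
                of a block forces choosing the whole block
    intervals : list of (lo, hi) intervals, all of which must be hit
    Returns a minimum-size list of chosen points, or None if impossible.
    """
    best = None
    n = len(blocks)
    for mask in range(1 << n):                      # every subset of blocks
        pts = [p for b in range(n) if mask >> b & 1 for p in blocks[b]]
        # A selection is feasible if every interval contains a chosen point.
        if all(any(lo <= p <= hi for p in pts) for (lo, hi) in intervals):
            if best is None or len(pts) < len(best):
                best = pts
    return best
```

Note that taking fewer blocks does not automatically mean fewer points (blocks have different sizes), which is why all block subsets are scanned.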
ISBN (print): 9781611977073
Recently, due to an increasing interest in transparency in artificial intelligence, several methods of explainable machine learning have been developed with the simultaneous goals of accuracy and interpretability by humans. In this paper, we study a recent framework for explainable clustering first suggested by Dasgupta et al. [11]. Specifically, we focus on the k-means and k-median problems and provide nearly tight upper and lower bounds. First, we provide an O(log k log log k)-approximation algorithm for explainable k-median, improving on the best known algorithm of O(k) [11] and nearly matching the known Ω(log k) lower bound [11]. In addition, in low-dimensional spaces d ≪ log k, we show that our algorithm also provides an O(d log^2 d)-approximate solution for explainable k-median. This improves over the best known bound of O(d log k) for low dimensions [19], and is a constant for constant-dimensional spaces. To complement this, we show a nearly matching Ω(d) lower bound. Next, we study the k-means problem in this context and provide an O(k log k)-approximation algorithm for explainable k-means, improving over the O(k^2) bound of Dasgupta et al. and the O(dk log k) bound of [19]. To complement this, we provide an almost tight Ω(k) lower bound, improving over the Ω(log k) lower bound of Dasgupta et al. Given an approximate solution to the classic k-means and k-median, our algorithm for k-median runs in time O(kd log^2 k) and our algorithm for k-means runs in time O(k^2 d).
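In the framework of Dasgupta et al., as I understand it, a clustering is explainable when it is induced by a small decision tree of axis-aligned threshold cuts with one leaf per cluster. The sketch below only illustrates the k = 2 case: it searches for the single axis-aligned cut that separates two given reference centers and has the smallest k-median cost. This is an illustrative reading of the framework, not the algorithm of the paper, and the names are assumed.

```python
import numpy as np

def best_threshold_cut(X, c0, c1):
    """Best single axis-aligned cut for explainable 2-median.

    X      : (n, d) array of points
    c0, c1 : two reference centers
    Assigns points with x[i] <= theta to the center on that side of the
    cut; only cuts that separate the two centers are considered.
    Returns (coordinate, threshold, k-median cost) of the best cut.
    """
    c0, c1 = np.asarray(c0, dtype=float), np.asarray(c1, dtype=float)
    best = None
    for i in range(X.shape[1]):
        lo, hi = sorted((c0[i], c1[i]))
        if lo == hi:
            continue                      # this axis cannot separate them
        for theta in np.unique(X[:, i]):
            if not (lo <= theta < hi):
                continue
            left = X[:, i] <= theta
            # The center lying on the 'left' side serves the left leaf.
            cl, cr = (c0, c1) if c0[i] <= theta else (c1, c0)
            cost = (np.linalg.norm(X[left] - cl, axis=1).sum()
                    + np.linalg.norm(X[~left] - cr, axis=1).sum())
            if best is None or cost < best[2]:
                best = (i, float(theta), float(cost))
    return best
```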
We consider the k-clustering problem with ℓ_p-norm cost, which includes k-median, k-means and k-center, under an individual notion of fairness proposed by Jung et al. [2020]: given a set of points P of size n, a set of k centers induces a fair clustering if every point in P has a center among its n/k closest neighbors. Mahabadi and Vakilian [2020] presented a (p^{O(p)}, 7)-bicriteria approximation for fair clustering with ℓ_p-norm cost: every point finds a center within distance at most 7 times its distance to its (n/k)-th closest neighbor, and the ℓ_p-norm cost of the solution is at most p^{O(p)} times the cost of an optimal fair solution. In this work, for any ε > 0, we present an improved (16^p + ε, 3)-bicriteria approximation for this problem. Moreover, for p = 1 (k-median) and p = ∞ (k-center), we present improved cost-approximation factors of 7.081 + ε and 3 + ε respectively. To achieve our guarantees, we extend the framework of [Charikar et al., 2002; Swamy, 2016] and devise a 16^p-approximation algorithm for facility location with ℓ_p-norm cost under a matroid constraint, which might be of independent interest. Besides, our approach suggests a reduction from our individually fair clustering to a clustering with a group fairness requirement proposed by Kleindessner et al. [2019], which is essentially the matroid median problem [Krishnaswamy et al., 2011].
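The individual fairness condition of Jung et al. and its bicriteria relaxation are easy to check directly; the sketch below (with assumed names, and with the convention that a point counts among its own neighbors) tests whether every point has a center within gamma times the distance to its ⌈n/k⌉-th closest point.

```python
import numpy as np

def is_individually_fair(X, centers, k, gamma=1.0):
    """Check the (gamma-relaxed) individual fairness condition.

    X       : (n, d) array of points
    centers : array-like of chosen centers
    Every point must have some center within gamma times the distance to
    its ceil(n/k)-th closest point in X (its 'fair radius').
    """
    X = np.asarray(X, dtype=float)
    C = np.asarray(centers, dtype=float)
    n = len(X)
    m = int(np.ceil(n / k))
    for x in X:
        d_pts = np.sort(np.linalg.norm(X - x, axis=1))
        fair_radius = d_pts[m - 1]     # the point itself sits at distance 0
        d_ctr = np.linalg.norm(C - x, axis=1).min()
        if d_ctr > gamma * fair_radius + 1e-12:
            return False
    return True
```

With gamma = 1 this is the exact fairness condition; gamma = 7 and gamma = 3 correspond to the relaxations discussed in the abstract.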
Modularity is the best-known and most widely used quality function for community detection in graphs. We investigate the approximability of the modularity maximization problem and some related problems. We first design a polynomial-time 0.4209-additive approximation algorithm for the modularity maximization problem, which improves the current best additive approximation error of 0.4672. Our theoretical analysis also demonstrates that the proposed algorithm obtains a nearly optimal solution for any instance with a high modularity value. We next design a polynomial-time 0.1660-additive approximation algorithm for the maximum modularity cut problem. Finally, we extend our algorithm to some related problems. (C) 2020 Elsevier Inc. All rights reserved.
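For reference, the modularity of a partition into communities, the quantity these algorithms approximate additively, is Q = (1/(2m)) Σ_{u,v} [A_{uv} − d_u d_v/(2m)]·1[c_u = c_v], which equals Σ_c (e_c/m − (D_c/(2m))^2), where e_c is the number of intra-community edges and D_c the total degree of community c. A direct computation over an edge list, with assumed input names and assuming a simple undirected graph, is sketched below.

```python
from collections import defaultdict

def modularity(n, edges, community):
    """Modularity Q of a vertex partition of a simple undirected graph.

    n         : number of vertices (labelled 0..n-1)
    edges     : list of undirected edges (u, v), no duplicates or loops
    community : community[v] = community label of vertex v
    """
    m = len(edges)
    deg = [0] * n
    intra = defaultdict(int)          # edges inside each community
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    deg_sum = defaultdict(int)        # total degree per community
    for v in range(n):
        deg_sum[community[v]] += deg[v]
    return sum(intra[c] / m - (deg_sum[c] / (2 * m)) ** 2
               for c in deg_sum)
```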