A prominent tool in many problems involving metric spaces is the notion of a randomized low-diameter decomposition. Loosely speaking, a β-decomposition refers to a probability distribution over partitions of the metric into sets of low diameter, such that nearby points (parameterized by β > 0) are likely to be "clustered" together. Applying this notion to the shortest-path metric of edge-weighted graphs, it is known that n-vertex graphs admit an O(ln n)-padded decomposition (Bartal, 37th FOCS, pp 184-193, 1996), and that excluded-minor graphs admit an O(1)-padded decomposition (Klein et al., 25th STOC, pp 682-690, 1993; Fakcharoenphol and Talwar, J Comput Syst Sci 69(3):485-497, 2004; Abraham et al., 46th STOC, pp 79-88, 2014). We design decompositions for the family of p-path-separable graphs, which was defined by Abraham and Gavoille (25th PODC, pp 188-197, 2006) and refers to graphs that admit vertex separators consisting of at most p shortest paths in the graph. Our main result is that every p-path-separable n-vertex graph admits an O(ln(p ln n))-decomposition, which refines the O(ln n) bound for general graphs and provides new bounds for families such as bounded-treewidth graphs. Technically, our clustering process differs from previous ones by working in (the shortest-path metric of) carefully chosen subgraphs.
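To make the clustering notion concrete, here is a minimal sketch (in Python, with all names my own) of the classic random-shift scheme of Calinescu, Karloff, and Rabani that underlies O(ln n)-padded decompositions of general metrics: draw a uniformly random radius in [Δ/4, Δ/2] and a random ordering of candidate centers, then assign every point to the first center that covers it. The paper's refinement, running such a process inside carefully chosen subgraphs, is not reproduced here.

```python
import random

def low_diameter_decomposition(points, dist, delta):
    """Random-shift clustering sketch: partitions `points` into clusters
    of diameter at most `delta` under the metric `dist`.  Nearby points
    are likely to land in the same cluster; on general n-point metrics
    this scheme gives an O(ln n) padding guarantee."""
    radius = random.uniform(delta / 4.0, delta / 2.0)  # random shift
    order = list(points)
    random.shuffle(order)                              # random center priority
    cluster_of = {}
    for center in order:
        for p in points:
            if p not in cluster_of and dist(center, p) <= radius:
                cluster_of[p] = center
    clusters = {}
    for p, c in cluster_of.items():                    # group by center
        clusters.setdefault(c, []).append(p)
    return list(clusters.values())
```

Every point is covered by itself, so the partition is total, and any two points sharing a center are within 2·radius ≤ delta of each other.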
Constraint tightening to non-conservatively guarantee recursive feasibility and stability in Stochastic Model Predictive Control is addressed. Stability and feasibility requirements are considered separately, highlighting the difference between existence of a solution and feasibility of a suitable, a priori known candidate solution. Subsequently, a Stochastic Model Predictive Control algorithm which unifies previous results is derived, leaving the designer the option to balance an increased feasible region against guaranteed bounds on the asymptotic average performance and convergence time. Besides typical performance bounds, under mild assumptions, we prove asymptotic stability in probability of the minimal robust positively invariant set obtained by the unconstrained LQ-optimal controller. A numerical example, demonstrating the efficacy of the proposed approach in comparison with classical, recursively feasible Stochastic MPC and Robust MPC, is provided.
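As a toy illustration of constraint tightening (not the paper's non-conservative design), the following sketch shrinks the nominal state bound of a scalar MPC problem by a margin that grows with the worst-case accumulated disturbance. The dynamics, bounds, and tightening rule are all assumptions of mine, and cvxpy is used for the finite-horizon program.

```python
import cvxpy as cp

# Toy 1-D MPC step with constraint tightening: the nominal bound
# |x_k| <= x_max is shrunk by a margin that grows along the horizon.
a, b = 1.0, 1.0          # scalar dynamics x+ = a*x + b*u + w
x_max, u_max = 5.0, 1.0  # state and input bounds
w_bound = 0.1            # per-step disturbance bound (illustrative)
N = 10                   # prediction horizon

def mpc_step(x0):
    x = cp.Variable(N + 1)
    u = cp.Variable(N)
    cost = cp.sum_squares(x) + cp.sum_squares(u)
    cons = [x[0] == x0]
    for k in range(N):
        cons.append(x[k + 1] == a * x[k] + b * u[k])   # nominal model
        margin = w_bound * (k + 1)                     # tightening margin
        cons.append(cp.abs(x[k + 1]) <= x_max - margin)
        cons.append(cp.abs(u[k]) <= u_max)
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[0]    # receding horizon: apply only the first input
```

The worst-case margins shown here are the conservative baseline; the paper's point is precisely that stochastic information allows less conservative margins while retaining recursive feasibility and stability guarantees.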
We give a simple, randomized greedy algorithm for the maximum satisfiability problem (MAX SAT) that obtains a 3/4-approximation in expectation. In contrast to previously known 3/4-approximation algorithms, our algorithm does not use flows or linear programming. Hence we provide a positive answer to a question posed by Williamson in 1998 on whether such an algorithm exists. Moreover, we show that Johnson's greedy algorithm cannot guarantee a 3/4-approximation, even if the variables are processed in a random order. Thereby we partially solve a problem posed by Chen, Friesen, and Zheng in 1999. In order to explore the limitations of the greedy paradigm, we use the model of priority algorithms of Borodin, Nielsen, and Rackoff. Since our greedy algorithm works in an online scenario where the variables arrive with their set of undecided clauses, we wonder if a better approximation ratio can be obtained by further fine-tuning its random decisions. For a particular information model we show that no priority algorithm can approximate Online MAX SAT within 3/4 + ε (for any ε > 0). We further investigate the strength of deterministic greedy algorithms that may choose the variable ordering. Here we show that no adaptive priority algorithm can achieve approximation ratio 3/4. We propose two ways in which this inapproximability result can be bypassed. First we show that if our greedy algorithm is additionally given the variable assignments of an optimal solution to the canonical LP relaxation, then we can derandomize its decisions while preserving the overall approximation guarantee. Second we give a simple, deterministic algorithm that performs an additional pass over the input. We show that this 2-pass algorithm satisfies clauses with a total weight of at least (3/4)·OPT_LP, where OPT_LP is the objective value of the canonical linear program. Moreover, we demonstrate that our analysis is tight and detail how each pass can be implemented in linear time.
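A minimal sketch of a randomized greedy for weighted MAX SAT in the spirit described above: variables arrive in order with their undecided clauses, and each is set true with probability proportional to the clause weight it would immediately satisfy. This proportional rule is my simplification for illustration; the paper's algorithm chooses its probabilities more carefully to certify the 3/4 guarantee.

```python
import random

def randomized_greedy_maxsat(n_vars, clauses):
    """Greedy MAX SAT sketch.  `clauses` is a list of (weight, literals)
    pairs, where literals is a set of +v / -v for variables v in
    1..n_vars.  Each variable is set true with probability proportional
    to the undecided clause weight that a true setting would satisfy."""
    assignment = {}
    for v in range(1, n_vars + 1):
        t = sum(w for w, lits in clauses
                if v in lits and not _satisfied(lits, assignment))
        f = sum(w for w, lits in clauses
                if -v in lits and not _satisfied(lits, assignment))
        p = 0.5 if t + f == 0 else t / (t + f)
        assignment[v] = random.random() < p
    return assignment

def _satisfied(lits, assignment):
    """A clause is satisfied once some literal agrees with a set variable."""
    return any((l > 0) == assignment.get(abs(l))
               for l in lits if abs(l) in assignment)
```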
The CUR decomposition of an m × n matrix A finds an m × c matrix C consisting of a subset of c < n columns of A, together with an r × n matrix R consisting of a subset of r < m rows of A, as well as a c × r low-rank matrix U such that the matrix CUR approximates the matrix A, that is, ‖A − CUR‖²_F ≤ (1 + ε)‖A − A_k‖²_F, where ‖·‖_F denotes the Frobenius norm and A_k is the best rank-k approximation of A, given by the SVD. We present input-sparsity-time and deterministic algorithms for constructing such a CUR decomposition with c = O(k/ε), r = O(k/ε), and rank(U) = k. Up to constant factors, our algorithms are simultaneously optimal in the values c, r, and rank(U).
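For intuition, here is a standard randomized leverage-score CUR recipe (a sketch under my own interface, not the paper's deterministic input-sparsity-time construction, and without forcing rank(U) = k): sample columns and rows with rank-k leverage-score probabilities and fit the core U by least squares.

```python
import numpy as np

def cur_sketch(A, k, c, r, seed=0):
    """Leverage-score CUR sketch: sample c columns and r rows of A with
    rank-k leverage-score probabilities, then fit the core U so that
    C @ U @ R approximates A in the least-squares sense."""
    rng = np.random.default_rng(seed)
    U_svd, _, Vt = np.linalg.svd(A, full_matrices=False)
    row_scores = (U_svd[:, :k] ** 2).sum(axis=1)   # row leverage scores
    col_scores = (Vt[:k, :] ** 2).sum(axis=0)      # column leverage scores
    cols = rng.choice(A.shape[1], size=c, replace=False,
                      p=col_scores / col_scores.sum())
    rows = rng.choice(A.shape[0], size=r, replace=False,
                      p=row_scores / row_scores.sum())
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # best-fit c x r core
    return C, U, R
```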
In this brief, a novel scheme for multi-aircraft conflict detection and resolution is introduced. A key feature of the proposed scheme is that the uncertainty affecting the aircraft's future positions along some look-ahead prediction horizon is accounted for via a probabilistic reachability analysis approach. In particular, ellipsoidal probabilistic reach sets are determined by formulating a chance-constrained optimization problem and solving it via a simulation-based method called the scenario approach. Conflict detection is then performed by verifying whether the ellipsoidal reach sets of different aircraft intersect. If a conflict is detected, the aircraft flight plans are redesigned by solving a second-order cone program based on the approximation of the ellipsoidal reach sets by spheres of constant radius along the look-ahead horizon. A bisection procedure determines the minimum radius such that the ellipsoidal reach sets of the different aircraft along the corresponding new flight plans do not intersect. Some numerical examples are presented to show the efficacy of the proposed scheme.
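A minimal sketch of the two geometric ingredients, under assumed interfaces: conflict detection with spherical over-approximations (a conflict at a step means two centers are closer than 2·radius), and a bisection loop in which `replan(radius)` stands in for the second-order cone program and `conflict(plans)` for the ellipsoidal intersection test. Monotonicity of feasibility in the radius is assumed for the bisection.

```python
import numpy as np

def spheres_conflict(plan_a, plan_b, radius):
    """Conflict test with spherical over-approximations: `plan_a` and
    `plan_b` are (T, 3) arrays of predicted positions; a conflict occurs
    at a step where the centers are closer than 2 * radius."""
    d = np.linalg.norm(np.asarray(plan_a) - np.asarray(plan_b), axis=1)
    return bool((d < 2 * radius).any())

def min_safe_radius(replan, conflict, lo, hi, tol=1e-3):
    """Bisection sketch for the smallest protection radius whose
    replanned trajectories are conflict-free."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if conflict(replan(mid)):
            lo = mid          # too small: reach sets still intersect
        else:
            hi = mid          # feasible: try a smaller radius
    return hi
```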
Given a graph with "uncertainty intervals" on the edges, we want to identify a minimum spanning tree by querying some edges for their exact weights, which lie in the given uncertainty intervals. Our objective is to minimize the number of edge queries. It is known that there is a deterministic algorithm with best-possible competitive ratio 2 [T. Erlebach et al., in Proceedings of STACS, Schloss Dagstuhl, Dagstuhl, Germany, 2008, pp. 277-288]. Our main result is a randomized algorithm with expected competitive ratio 1 + 1/√2 ≈ 1.707, solving the long-standing open problem of whether an expected competitive ratio strictly less than 2 can be achieved [T. Erlebach and M. Hoffmann, Bull. Eur. Assoc. Theor. Comput. Sci. EATCS, 116 (2015)]. We also present novel results for various extensions, including arbitrary matroids and more general querying models.
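To illustrate the query model, here is a sketch (interfaces my own) of the core cycle primitive behind MST verification under uncertainty: an edge can be discarded without any query only if it is certainly a heaviest edge of some cycle, i.e., its lower bound dominates every other cycle edge's upper bound; otherwise some edge must be queried. The widest-interval query rule below is a placeholder heuristic, not the paper's randomized strategy achieving 1 + 1/√2.

```python
def certainly_heaviest(cycle_edges):
    """cycle_edges: list of (edge_id, lo, hi) uncertainty intervals on a
    cycle.  Returns an edge that is heaviest under every realization of
    the weights, or None if a query is needed to decide."""
    for eid, lo, hi in cycle_edges:
        if all(lo >= other_hi
               for oid, _, other_hi in cycle_edges if oid != eid):
            return eid
    return None

def resolve_cycle(cycle_edges, query):
    """Query edges on the cycle until some edge is certainly heaviest;
    `query(edge_id)` is assumed to return the exact weight."""
    edges = list(cycle_edges)
    while True:
        e = certainly_heaviest(edges)
        if e is not None:
            return e
        # placeholder rule: query the widest remaining interval
        i = max(range(len(edges)), key=lambda j: edges[j][2] - edges[j][1])
        eid, _, _ = edges[i]
        w = query(eid)
        edges[i] = (eid, w, w)    # interval collapses to the true weight
```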
In the Directed Rural Postman Problem (DRPP), given a strongly connected directed multigraph D = (V, A) with nonnegative integral weights on the arcs, a subset R of required arcs, and a nonnegative integer ℓ, decide whether D has a closed directed walk that contains every arc of R and has weight at most ℓ. Let k be the number of weakly connected components in the subgraph of D induced by R. Sorge et al. [30] asked whether the DRPP is fixed-parameter tractable (FPT) when parameterized by k, i.e., whether there is an algorithm with running time O*(f(k)), where f is a function of k only and the O* notation suppresses polynomial factors. Using an algebraic approach, we prove that DRPP admits a randomized algorithm with running time O*(2^k) when ℓ is bounded by a polynomial in the number of vertices of D. The same result holds for the undirected version of DRPP.
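For concreteness, here is a sketch of a verifier for the DRPP decision problem as defined above (arcs are modeled as plain (u, v) pairs, so parallel-arc multiplicities are ignored); it checks a candidate closed walk and is, of course, unrelated to the paper's O*(2^k) algebraic algorithm.

```python
def drpp_walk_ok(walk, required, weight, budget):
    """Verify a DRPP candidate: `walk` is a non-empty list of arcs
    (u, v) forming a closed directed walk; it must traverse every arc
    in `required` and have total weight at most `budget` under the
    arc-weight map `weight`."""
    closed = walk[0][0] == walk[-1][1]
    consecutive = all(walk[i][1] == walk[i + 1][0]
                      for i in range(len(walk) - 1))
    covers = set(required) <= set(walk)
    cheap = sum(weight[a] for a in walk) <= budget
    return closed and consecutive and covers and cheap
```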
Since Tinhofer proposed the MINGREEDY algorithm for maximum cardinality matching in 1984, several experimental studies have found the randomized algorithm to perform excellently for various classes of random graphs and benchmark instances. In contrast, only a few analytical results are known. We show that MINGREEDY cannot improve on the trivial approximation ratio of 1/2 w.h.p., even for bipartite graphs. Our hard inputs seem to require a small number of high-degree nodes. This motivates an investigation of greedy algorithms on graphs with maximum degree Δ: we show that MINGREEDY achieves a (Δ−1)/(2Δ−3)-approximation for graphs with Δ = 3 and for Δ-regular graphs, and a guarantee of (Δ−1/2)/(2Δ−2) for graphs with maximum degree Δ. Interestingly, our bounds even hold for the deterministic MINGREEDY that breaks all ties arbitrarily. Moreover, we investigate the limitations of the greedy paradigm, using the model of priority algorithms introduced by Borodin, Nielsen, and Rackoff. We study deterministic priority algorithms and prove a (Δ−1)/(2Δ−3)-inapproximability result for graphs with maximum degree Δ; thus, these greedy algorithms do not achieve a (1/2 + ε)-approximation, and in particular the 2/3-approximation obtained by the deterministic MINGREEDY for Δ = 3 is optimal in this class. For k-uniform hypergraphs we show a tight 1/k-inapproximability bound. We also study fully randomized priority algorithms and give a 5/6-inapproximability bound. Thus, they cannot compete with matching algorithms of other paradigms.
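A minimal sketch of MINGREEDY as described above, with my own adjacency-dict interface: repeatedly pick a vertex of minimum degree in the remaining graph, match it to a uniformly random neighbour, and delete both endpoints. Replacing the random choice by an arbitrary one gives the deterministic variant mentioned in the abstract.

```python
import random

def mingreedy_matching(adj):
    """MINGREEDY sketch: `adj` maps each vertex to a set of neighbours
    and is consumed by the algorithm.  Returns a maximal matching as a
    list of matched pairs."""
    matching = []
    live = {v for v, nbrs in adj.items() if nbrs}
    while live:
        u = min(live, key=lambda v: len(adj[v]))   # minimum-degree vertex
        v = random.choice(list(adj[u]))            # uniform random neighbour
        matching.append((u, v))
        for x in (u, v):                           # delete both endpoints
            for y in adj[x]:
                adj[y].discard(x)
            adj[x] = set()
        live = {w for w in live if adj[w]}
    return matching
```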
We show that any m × n matrix of rank ρ can be recovered exactly via nuclear norm minimization from Θ(λ·log²(m + n)) randomly sampled entries (λ = (m + n)ρ − ρ² being the number of degrees of freedom), if we observe each entry with probability proportional to the sum of the corresponding row and column leverage scores, minus their product. This relaxation in the probabilities (as opposed to the sum of leverage scores in [1]) can give us an O(ρ²·log²(m + n)) additive improvement on the (best known) sample size of [1]. Further, we can use our relaxed leverage score sampling to achieve additive improvements on the sample size for exact recovery of (a) incoherent matrices (with restricted leverage scores) and (b) row (or column) incoherent matrices, without knowing the leverage scores a priori.
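A sketch of the sampling distribution described above, assuming (only for illustration) that A is fully known so its leverage scores can be computed exactly; the normalization via an expected `budget` of observed entries replaces the paper's Θ(λ·log²(m + n)) constant.

```python
import numpy as np

def leverage_sampling_probs(A, rho, budget):
    """Relaxed leverage-score sampling sketch: entry (i, j) is observed
    with probability proportional to mu_i + nu_j - mu_i * nu_j, where
    mu and nu are the row and column leverage scores of the rank-rho
    SVD, scaled so the expected number of observed entries is `budget`."""
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    mu = (U[:, :rho] ** 2).sum(axis=1)       # row leverage scores
    nu = (Vt[:rho, :] ** 2).sum(axis=0)      # column leverage scores
    P = mu[:, None] + nu[None, :] - np.outer(mu, nu)
    P *= budget / P.sum()                    # scale to the sample budget
    return np.clip(P, 0.0, 1.0)
```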
The Lovász local lemma (LLL), introduced by Erdős and Lovász in 1975, is a powerful tool of the probabilistic method that allows one to prove that, with non-zero probability, none of a set of n "bad" events occurs, provided that the events have limited dependence. However, the LLL itself does not suggest how to find a point avoiding all bad events. Since the works of Alon (Random Struct Algorithms 2(4):367-378, 1991) and Beck (Random Struct Algorithms 2(4):343-365, 1991) there has been a sustained effort to find a constructive proof (i.e. an algorithm) for the LLL or weaker versions of it. In a major breakthrough, Moser and Tardos (J ACM 57(2):11, 2010) showed that a point avoiding all bad events can be found efficiently. They also proposed a distributed/parallel version of their algorithm that requires O(log² n) rounds of communication in a distributed network. In this paper we provide two new distributed algorithms for the LLL that improve on both the efficiency and the simplicity of the Moser-Tardos algorithm. For clarity we express our results in terms of the symmetric LLL, though both algorithms handle the asymmetric version as well. Let p bound the probability of any bad event and let d be the maximum degree in the dependency graph of the bad events. When epd² < 1 we give a truly simple LLL algorithm running in O(log_{1/(epd²)} n) rounds. Under the weaker condition ep(d + 1) < 1, we give a slightly slower algorithm running in O(log² d · log_{1/(ep(d+1))} n) rounds. Furthermore, we give an algorithm that runs in a sublogarithmic number of rounds under the condition p·f(d) < 1, where f(d) is an exponential function of d. Although the conditions of the LLL are locally verifiable, we prove that any distributed LLL algorithm requires Ω(log* n) rounds. In many graph coloring problems the existence of a valid coloring is established by one or more applications of the LLL. Using our LLL algorithms, we give logarithmic-time distributed algorithms for frugal coloring, defective coloring, and related coloring problems.
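For reference, a sketch of the sequential Moser-Tardos resampling algorithm that the distributed algorithms build on (the interfaces `holds` and `resample` are mine): while some bad event holds, redraw the variables it depends on. The distributed versions instead resample a large set of violated events in parallel in each round.

```python
import random

def moser_tardos(variables, bad_events, resample, holds):
    """Sequential Moser-Tardos sketch.  `bad_events` maps an event id to
    the variable ids it depends on; `holds(event, values)` tests whether
    the event occurs under the current values; `resample(var)` draws a
    fresh independent value for a variable."""
    values = {v: resample(v) for v in variables}
    while True:
        violated = [e for e, vs in bad_events.items() if holds(e, values)]
        if not violated:
            return values            # a point avoiding all bad events
        e = random.choice(violated)  # any violated event may be picked
        for v in bad_events[e]:
            values[v] = resample(v)  # resample only its variables
```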