ISBN (print): 9798350318944
In this paper we study the problem of minimizing a submodular function f : 2^V → R that is guaranteed to have a k-sparse minimizer. We give a deterministic algorithm that computes an additive ε-approximate minimizer of such f in Õ(poly(k) log(|f|/ε)) parallel depth using a polynomial number of queries to an evaluation oracle of f, where |f| = max_{S ⊆ V} |f(S)|. Further, we give a randomized algorithm that computes an exact minimizer of f with high probability using Õ(|V| · poly(k)) queries and polynomial time. When k = Õ(1), our algorithms use either nearly-constant parallel depth or a nearly-linear number of evaluation oracle queries. All previous algorithms for this problem use either Ω̃(|V|) parallel depth or Ω(|V|^2) queries. In contrast to state-of-the-art weakly-polynomial and strongly-polynomial time algorithms for SFM, our algorithms use first-order optimization methods, e.g., mirror descent and follow-the-regularized-leader. We introduce what we call sparse dual certificates, which encode information about the structure of sparse minimizers, and both our parallel and sequential algorithms provide new tools that allow first-order optimization methods to compute them efficiently. Correspondingly, our algorithms do not invoke fast matrix multiplication or general linear system solvers, and in this sense are more combinatorial than previous state-of-the-art methods.
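The first-order viewpoint mentioned above can be illustrated with the textbook route to SFM through the Lovász extension: Edmonds' greedy rule yields a subgradient from n+1 evaluation-oracle calls, and projected subgradient descent over [0,1]^n then drives the extension toward its minimum, which equals the minimum of f. The Python sketch below follows only this standard recipe; it is not the paper's algorithm, and the oracle interface, step sizes, rounding by level sets, and the toy instance are illustrative assumptions.

```python
import numpy as np

def lovasz_subgradient(f, x):
    """Subgradient of the Lovász extension of f at x via Edmonds' greedy rule.

    f is an evaluation oracle taking a frozenset of indices; uses n+1 oracle calls.
    """
    order = np.argsort(-x)                 # coordinates in decreasing order
    g = np.zeros(len(x))
    prefix, prev = set(), f(frozenset())
    for i in order:
        prefix.add(i)
        cur = f(frozenset(prefix))
        g[i] = cur - prev                  # marginal value of element i
        prev = cur
    return g

def sfm_by_subgradient(f, n, iters=500, step=0.5):
    """Projected subgradient descent on the Lovász extension over [0, 1]^n,
    rounded by checking every level set of the current iterate."""
    x = np.full(n, 0.5)
    best_set, best_val = frozenset(), f(frozenset())
    for t in range(1, iters + 1):
        g = lovasz_subgradient(f, x)
        x = np.clip(x - (step / np.sqrt(t)) * g, 0.0, 1.0)
        for theta in np.unique(x):         # level set {i : x_i >= theta}
            S = frozenset(np.flatnonzero(x >= theta))
            val = f(S)
            if val < best_val:
                best_set, best_val = S, val
    return best_set, best_val

# Tiny usage example: a modular (hence submodular) function on a 4-element
# ground set whose unique minimizer is the sparse set {0}.
f = lambda S: len(S) - (2 if 0 in S else 0)
print(sfm_by_subgradient(f, 4))            # expected: (frozenset({0}), -1)
```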
ISBN (print): 9781665420556
The problem of minimizing a submodular function (SFM) is a common generalization of several fundamental combinatorial optimization problems, including minimum s-t cuts in graphs and matroid intersection. It is well known that a submodular function can be minimized with only poly(N) function evaluation queries, where N denotes the universe size. However, all known polynomial-query algorithms for SFM are highly adaptive, requiring at least N rounds of adaptivity. A natural question is whether SFM can be solved efficiently in a highly parallel manner, namely with poly(N) queries using only poly-logarithmic rounds of adaptivity. An important step towards understanding the adaptivity needed to solve SFM efficiently was taken in the very recent work of Balkanski and Singer, who showed a lower bound on the number of rounds of adaptivity required by any SFM algorithm making poly(N) queries. This left open the possibility of efficient SFM algorithms with poly-logarithmic rounds of adaptivity. In this work, we strongly rule out this possibility by showing that any, possibly randomized, algorithm for submodular function minimization making poly(N) queries requires Ω̃(N^{1/3}) rounds of adaptivity. In fact, we show a polynomial lower bound on the number of rounds of adaptivity even for algorithms that make up to 2^{N^{1-δ}} queries, for any constant δ > 0.
ISBN (print): 9781665455190
We provide a generic technique for constructing families of submodular functions to obtain lower bounds for submodular function minimization (SFM). Applying this technique, we prove that any deterministic SFM algorithm on a ground set of n elements requires at least Ω(n log n) queries to an evaluation oracle. This is the first super-linear query complexity lower bound for SFM and improves upon the previous best lower bound of 2n given by [Graur et al., ITCS 2020]. Using our construction, we also prove that any (possibly randomized) parallel SFM algorithm, which can make up to poly(n) queries per round, requires at least Ω(n / log n) rounds to minimize a submodular function. This improves upon the previous best lower bound of Ω̃(n^{1/3}) rounds due to [Chakrabarty et al., FOCS 2021], and, in light of a recent advance in [Jiang, SODA 2021], settles the parallel complexity of query-efficient SFM up to logarithmic factors.
We present a new class of polynomial-time algorithms for submodular function minimization (SFM), as well as a unified framework to obtain strongly polynomial SFM algorithms. Our algorithms are based on simple iterative methods for the minimum-norm problem, such as the conditional gradient and Fujishige-Wolfe algorithms. We exhibit two techniques to turn simple iterative methods into polynomial-time algorithms. First, we adapt the geometric rescaling technique, which has recently gained attention in linear programming, to SFM and obtain a weakly polynomial bound of O((n^4 · EO + n^5) log(nL)). Second, we exhibit a general combinatorial black-box approach to turn ε-approximate SFM oracles into strongly polynomial exact SFM algorithms. This framework can be applied to a wide range of combinatorial and continuous algorithms, including pseudo-polynomial ones. In particular, we can obtain strongly polynomial algorithms by repeated application of the conditional gradient or of the Fujishige-Wolfe algorithm. Combined with the geometric rescaling technique, the black-box approach provides an O((n^5 · EO + n^6) log^2 n) algorithm. Finally, we show that one of the techniques we develop in the paper can also be combined with the cutting-plane method of Lee et al., yielding a simplified variant of their O(n^3 log^2 n · EO + n^4 log^{O(1)} n) algorithm.
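For context on the minimum-norm viewpoint, the sketch below runs the plain conditional-gradient (Frank-Wolfe) method on the base polytope, using Edmonds' greedy algorithm as the linear-minimization oracle, and reads a minimizer of f off the negative coordinates of the (approximate) minimum-norm point via Fujishige's theorem. It is a bare-bones illustration of the simple iterative methods the paper starts from, not the paper's rescaled or black-box algorithms; it assumes the normalization f(∅) = 0, and the step-size rule, iteration count, and toy instance are arbitrary choices.

```python
import numpy as np

def greedy_base_vertex(f, c):
    """Edmonds' greedy algorithm: minimize <c, y> over the base polytope B(f)
    by sorting the ground set by increasing c and taking marginal values of f.
    Assumes the normalization f(frozenset()) == 0."""
    order = np.argsort(c)
    y = np.zeros(len(c))
    prefix, prev = set(), 0.0
    for i in order:
        prefix.add(i)
        cur = f(frozenset(prefix))
        y[i] = cur - prev
        prev = cur
    return y

def min_norm_point_cg(f, n, iters=2000):
    """Plain conditional gradient (Frank-Wolfe) for the minimum-norm point in B(f).
    Fujishige's theorem: the negative coordinates of the exact minimum-norm point
    form a minimizer of f; here we use the approximate point after `iters` steps."""
    x = greedy_base_vertex(f, np.zeros(n))     # arbitrary starting vertex
    for t in range(iters):
        s = greedy_base_vertex(f, x)           # direction minimizing <x, s> over B(f)
        gamma = 2.0 / (t + 2.0)                # standard Frank-Wolfe step size
        x = (1.0 - gamma) * x + gamma * s
    S = frozenset(np.flatnonzero(x < 0.0))
    return S, f(S)

# Usage on the same kind of toy instance: f(S) = |S| - 2·[0 in S], minimized by {0}.
f = lambda S: len(S) - (2 if 0 in S else 0)
print(min_norm_point_cg(f, 4))                 # expected: (frozenset({0}), -1)
```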
Recently, Dadush et al. (2017) devised a polynomial-time submodular function minimization (SFM) algorithm based on their LP algorithm. In the present note we also show a weakly polynomial algorithm for SFM, based on the recently developed linear programming feasibility algorithm of Chubanov (2017), in order to stimulate further research on SFM.
In this paper, we consider the non-negative submodular function minimization problem with covering-type linear constraints. Assume that there exist m linear constraints, and consider the number of non-zero coefficients in the i-th constraint. For this problem, Koufogiannakis and Young proposed a polynomial-time approximation algorithm. In this paper, we propose a new polynomial-time primal-dual approximation algorithm based on the approximation algorithm of Takazawa and Mizuno for the covering integer program and the approximation algorithm of Iwata and Nagano for the submodular function minimization problem with set covering constraints. The approximation ratio of our algorithm is expressed in terms of the maximum size of a connected component of the input submodular function.
We generalize the results of Angles d'Auriac, Igloi, Preissmann and Sebo [1,6] concerning graphic submodular function minimization. They use a subroutine due to Padberg and Wolsey (1983) [5] that cannot be adapted to hypergraphs. Instead of their approach, we consider fractional orientations and the push-relabel method of Goldberg and Tarjan (1988) [4]. The complexity of the resulting algorithm on a hypergraph is O(n^4 + n^3 m), where n and m are the numbers of nodes and hyperedges, respectively.
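For readers unfamiliar with the subroutine named above, here is a compact, generic push-relabel maximum-flow routine. It only illustrates Goldberg and Tarjan's method in its simplest form (linear scans for an active vertex, no FIFO or gap heuristics) and is not the hypergraph-orientation algorithm of the paper; the graph encoding and the small test instance are assumptions for the example.

```python
from collections import defaultdict

def push_relabel_max_flow(n, edges, s, t):
    """Generic push-relabel maximum flow (Goldberg-Tarjan, 1988).
    n: number of nodes 0..n-1; edges: iterable of (u, v, capacity); returns the max-flow value."""
    cap = defaultdict(float)               # residual capacities
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)                      # keep the reverse residual arc reachable

    height = [0] * n
    excess = [0.0] * n
    height[s] = n
    for v in adj[s]:                       # saturate every arc out of the source
        d = cap[(s, v)]
        if d > 0:
            cap[(s, v)] -= d
            cap[(v, s)] += d
            excess[v] += d
            excess[s] -= d

    def active_vertex():
        return next((u for u in range(n) if u not in (s, t) and excess[u] > 0), None)

    u = active_vertex()
    while u is not None:
        pushed = False
        for v in adj[u]:
            if cap[(u, v)] > 0 and height[u] == height[v] + 1:   # admissible arc
                d = min(excess[u], cap[(u, v)])
                cap[(u, v)] -= d
                cap[(v, u)] += d
                excess[u] -= d
                excess[v] += d
                pushed = True
                if excess[u] == 0:
                    break
        if not pushed:                     # relabel: lift u just above its lowest residual neighbour
            height[u] = 1 + min(height[v] for v in adj[u] if cap[(u, v)] > 0)
        u = active_vertex()
    return excess[t]

# Usage: a 4-node example whose maximum s-t flow value is 3.
print(push_relabel_max_flow(4, [(0, 1, 2), (0, 2, 2), (1, 3, 2), (2, 3, 1)], s=0, t=3))
```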
In model selection problems for machine learning, the desire for a well-performing model with meaningful structure is typically expressed through a regularized optimization problem. In many scenarios, however, the meaningful structure is specified in some discrete space, leading to difficult nonconvex optimization problems. In this paper, we connect the model selection problem with structure-promoting regularizers to submodular function minimization with continuous and discrete arguments. In particular, we leverage the theory of submodular functions to identify a class of these problems that can be solved exactly and efficiently with an agnostic combination of discrete and continuous optimization routines. We show how simple continuous or discrete constraints can also be handled for certain problem classes, and we extend these ideas to a robust optimization framework. We also show how some problems outside this class can be embedded into it, further extending the range of problems our framework can accommodate. Finally, we numerically validate our theoretical results with several proof-of-concept examples on synthetic and real-world data, comparing against state-of-the-art algorithms.
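One concrete bridge between a discrete, structure-promoting penalty and its continuous counterpart, used throughout this line of work, is the Lovász extension: evaluating it at a point x only requires sorting x and querying the set function on the resulting prefixes. The snippet below is a generic illustration of that correspondence (with two classical special cases), not the paper's model-selection framework.

```python
import numpy as np

def lovasz_extension(f, x):
    """Value of the Lovász extension of a set function f at a nonnegative point x,
    the tight convex surrogate used to relax discrete structure-promoting penalties."""
    order = np.argsort(-x)                 # coordinates in decreasing order
    val, prefix, prev = 0.0, set(), f(frozenset())
    for i in order:
        prefix.add(i)
        cur = f(frozenset(prefix))
        val += x[i] * (cur - prev)         # weight each marginal by the coordinate value
        prev = cur
    return val

# Example: f(S) = |S| gives the l1 norm on the nonnegative orthant, while
# f(S) = 1 for nonempty S (and 0 otherwise) gives the l-infinity norm.
card = lambda S: float(len(S))
indicator = lambda S: float(len(S) > 0)
x = np.array([0.2, 0.7, 0.1])
print(lovasz_extension(card, x))           # 1.0  (= sum of x)
print(lovasz_extension(indicator, x))      # 0.7  (= max of x)
```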
In this paper we propose a novel heuristic search for solving combinatorial optimization problems, which we call Diverse Search (DS). Like beam search, this constructive approach expands only a selected subset of the solutions at each level of the search tree. However, instead of selecting the solutions with the best values, we use an efficient method to select a diverse subset, after filtering out uninteresting solutions. DS also identifies solutions that do not produce better offspring and applies a local search process to them. The intuition is that the combination of these strategies allows the search to reach more, and more diverse, local optima, increasing the chances of finding the global optimum. We test DS on several instances of the Koerkel-Ghosh (KG) and K-median benchmarks for the Simple Plant Location Problem. We compare it with a state-of-the-art heuristic for the KG benchmark and with the relatively old POPSTAR solver, which also relies on the idea of maintaining a diverse set of solutions and which, surprisingly, reaches comparable performance. With the use of a Path Relinking post-optimization step, DS achieves results of the same quality as the state-of-the-art in similar CPU times. Furthermore, DS proved to be slightly better on average for large-scale problems with small solution sizes, making it an efficient algorithm that delivers a set of good and diverse solutions.
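A diverse-subset step of the kind described above can be pictured with a standard greedy max-min (farthest-point) selection over some solution distance. The sketch below is a generic stand-in under that assumption: it is not necessarily the selection rule or the filtering used by DS, and the Hamming distance on facility sets is only an example encoding for the Simple Plant Location Problem.

```python
def select_diverse(candidates, k, distance):
    """Greedy max-min (farthest-point) selection of k mutually distant candidates.

    A generic stand-in for a diverse-subset step; the authors' actual rule may differ.
    """
    if not candidates:
        return []
    chosen = [candidates[0]]                          # seed with an arbitrary (e.g. best-value) candidate
    while len(chosen) < min(k, len(candidates)):
        best, best_d = None, -1.0
        for c in candidates:
            if c in chosen:
                continue
            d = min(distance(c, s) for s in chosen)   # distance to the closest already-chosen solution
            if d > best_d:
                best, best_d = c, d
        chosen.append(best)
    return chosen

# Example: solutions as frozensets of open facilities, compared with
# symmetric-difference (Hamming) distance.
hamming = lambda a, b: len(a ^ b)
pool = [frozenset({1, 2}), frozenset({1, 3}), frozenset({7, 8}), frozenset({1, 2, 3})]
print(select_diverse(pool, 2, hamming))               # picks two mutually distant solutions
```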
We present an accelerated or "look-ahead" version of the Newton-Dinkelbach method, a well-known technique for solving fractional and parametric optimization problems. This acceleration halves the Bregman divergence between the current iterate and the optimal solution within every two iterations. Using the Bregman divergence as a potential in conjunction with combinatorial arguments, we obtain strongly polynomial algorithms in three application domains. (i) For linear fractional combinatorial optimization, we show a convergence bound of O(m log m) iterations; the previous best bound was O(m^2 log m) by Wang, Yang, and Zhang from 2006. (ii) We obtain a strongly polynomial label-correcting algorithm for solving linear feasibility systems with two variables per inequality (2VPI). For a 2VPI system with n variables and m constraints, our algorithm runs in O(mn) iterations. Every iteration takes O(mn) time for general 2VPI systems and O(m + n log n) time for the special case of deterministic Markov decision processes (DMDPs). This extends and strengthens a previous result by Madani from 2002 that showed a weakly polynomial bound for a variant of the Newton-Dinkelbach method for solving DMDPs. (iii) We give a simplified variant of the parametric submodular function minimization result from 2017 by Goemans, Gupta, and Jaillet.
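For reference, the classic (unaccelerated) Newton-Dinkelbach iteration for minimizing a ratio c(x)/d(x) with d(x) > 0 is short enough to state in full. The sketch below does not reproduce the paper's look-ahead acceleration, and the oracle interface and toy instance are illustrative assumptions.

```python
def newton_dinkelbach(oracle, delta0, tol=1e-12, max_iter=1000):
    """Classic Newton-Dinkelbach iteration for minimizing a ratio c(x)/d(x), d(x) > 0.

    oracle(delta) must return (x, c_x, d_x) with x minimizing c(x) - delta * d(x).
    Plain method only, without the paper's look-ahead acceleration.
    """
    delta = delta0
    for _ in range(max_iter):
        x, c_x, d_x = oracle(delta)
        val = c_x - delta * d_x       # optimal value of the parametric problem at delta
        if abs(val) <= tol:           # delta equals the optimal ratio (up to tolerance)
            return delta, x
        delta = c_x / d_x             # Newton step on g(delta) = min_x c(x) - delta * d(x)
    return delta, x

# Toy example: three candidate solutions with (cost, denominator) pairs;
# the minimum ratio is 4/3, attained by "B".
solutions = {"A": (4.0, 2.0), "B": (4.0, 3.0), "C": (5.0, 3.0)}
def oracle(delta):
    name = min(solutions, key=lambda s: solutions[s][0] - delta * solutions[s][1])
    c_x, d_x = solutions[name]
    return name, c_x, d_x
print(newton_dinkelbach(oracle, delta0=2.0))    # expected: (1.333..., 'B')
```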