Authors: Goerigk, Marc; Kurtz, Jannis
Affiliations: Univ Passau, Business Decis & Data Sci, Dr Hans Kapfinger Str 30, D-94032 Passau, Germany; Univ Amsterdam, Amsterdam Business Sch, Plantage Muidergracht 12, NL-1018 TV Amsterdam, Netherlands
We study iterative constraint and variable generation methods for (two-stage) robust combinatorial optimization problems with discrete uncertainty. The goal of this work is to find a set of starting scenarios that provides strong lower bounds early in the process. To this end, we define the Relevant Scenario Recognition Problem (RSRP), which asks for the choice of scenarios that maximizes the corresponding objective value. We show for classical and two-stage robust optimization that this problem can be solved in polynomial time if the number of selected scenarios is constant, and that it is NP-hard if this number is part of the input. Furthermore, we derive a linear mixed-integer programming formulation for the problem in both cases. Since solving the RSRP is not possible in reasonable time, we propose a machine-learning-based heuristic to determine a good set of starting scenarios. To this end, we design a set of dimension-independent features and train a Random Forest Classifier on already solved small-dimensional instances of the problem. Our experiments show that our method is able to improve the solution process even for instances larger than those contained in the training set, and that predicting even a small number of good starting scenarios can considerably reduce the optimality gap. Additionally, our method provides a feature importance score which can give new insights into the role of scenario properties in robust optimization.
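The abstract does not spell out which dimension-independent features are used or how scenarios are labelled, but the overall workflow (compute per-scenario features, train a Random Forest on already solved small instances, then rank the scenarios of a new instance and seed the iterative method with the top candidates) can be sketched roughly as follows. The feature definitions, the notion of a "relevant" label, and the function names below are illustrative assumptions, not the paper's actual choices; only scikit-learn and NumPy are assumed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def scenario_features(c, scenarios):
    """Dimension-independent features for one cost scenario c within a scenario set.

    The statistics below (cost moments, distance to and correlation with the
    scenario mean) are illustrative stand-ins for the paper's feature set.
    """
    mean = scenarios.mean(axis=0)
    return np.array([
        c.mean(), c.std(), c.max(), c.min(),                        # cost statistics
        np.linalg.norm(c - mean) / (np.linalg.norm(mean) + 1e-9),   # distance to the barycentre
        np.corrcoef(c, mean)[0, 1],                                 # similarity to the average scenario
    ])

def train_classifier(instances, labels):
    """instances: list of (m_i x n_i) scenario matrices from solved small instances;
    labels: list of 0/1 arrays marking scenarios that turned out to be relevant
    (e.g. active in the final master problem)."""
    X = np.vstack([scenario_features(c, inst) for inst in instances for c in inst])
    y = np.concatenate(labels)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, y)
    return clf

def predict_starting_scenarios(clf, scenarios, k=5):
    """Rank the scenarios of a new (possibly larger) instance and return the
    indices of the k scenarios most likely to be relevant as starting scenarios."""
    X = np.vstack([scenario_features(c, scenarios) for c in scenarios])
    scores = clf.predict_proba(X)[:, 1]
    return np.argsort(-scores)[:k]
```

Because the features are computed per scenario rather than per variable, the same trained forest can be applied to instances of any dimension; its `feature_importances_` attribute would provide the kind of feature importance score mentioned in the abstract.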
Leveraging machine learning to facilitate the optimization process is an emerging field that holds the promise to bypass the fundamental computational bottleneck caused by classic iterative solvers in critical applications requiring near-real-time optimization. The majority of existing approaches focus on learning data-driven optimizers that lead to fewer iterations in solving an optimization problem. In this paper, we take a different approach and propose to replace the iterative solvers altogether with a trainable parametric set function that outputs the optimal arguments/parameters of an optimization problem in a single feed-forward pass. We denote our method as Learning to Optimize the Optimization Process (LOOP). We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems, including linear/nonlinear regression, principal component analysis, transport-based coreset, and quadratic programming in supply management applications. In addition, we propose two alternative approaches for learning such parametric functions, with and without a solver in the LOOP. Finally, through various numerical experiments, we show that the trained solvers could be orders of magnitude faster than the classic iterative solvers while providing near-optimal solutions.
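To make the idea concrete, the following is a minimal sketch of the solver-free variant for the linear-regression example: a permutation-invariant set function maps a data set directly to regression parameters in one forward pass and is trained by evaluating the downstream least-squares objective at the predicted parameters. The DeepSets-style mean-pooled architecture, the class name `LOOPRegressor`, and the random instance distribution are illustrative assumptions (PyTorch is assumed); the paper's actual architecture and training setup may differ.

```python
import torch
import torch.nn as nn

class LOOPRegressor(nn.Module):
    """Permutation-invariant set function mapping a regression data set
    {(x_i, y_i)} directly to parameters (w, b) in a single forward pass."""

    def __init__(self, d, hidden=128):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, d + 1))   # outputs (w, b)

    def forward(self, X, y):
        # X: (batch, n, d), y: (batch, n, 1)
        h = self.phi(torch.cat([X, y], dim=-1)).mean(dim=1)  # pool over the set
        out = self.rho(h)
        return out[:, :-1], out[:, -1:]                      # w, b

def objective_loss(w, b, X, y):
    """Solver-free training signal: the least-squares objective evaluated
    at the predicted parameters."""
    pred = torch.einsum('bnd,bd->bn', X, w).unsqueeze(-1) + b.unsqueeze(1)
    return ((pred - y) ** 2).mean()

# Train on randomly generated regression instances so that the predicted
# parameters are pushed towards minimizers of the objective.
d, n, batch = 5, 64, 32
model = LOOPRegressor(d)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    w_true = torch.randn(batch, d)
    X = torch.randn(batch, n, d)
    y = torch.einsum('bnd,bd->bn', X, w_true).unsqueeze(-1) + 0.1 * torch.randn(batch, n, 1)
    w_hat, b_hat = model(X, y)
    loss = objective_loss(w_hat, b_hat, X, y)
    opt.zero_grad(); loss.backward(); opt.step()
```

The abstract's alternative approach with a solver in the LOOP would instead supervise the predicted parameters against solutions produced by a classic solver during training; only the objective-based, solver-free variant is sketched here.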