The optimization of decentralized energy systems is an important practical problem that can be modeled using stochastic programs and solved via their large-scale, deterministic-equivalent formulations. Unfortunately, ...
ISBN: (Print) 9783031451690; 9783031451706
Designing/configuring ensemble Differential Evolution (DE) algorithms with complementary search characteristics is a complex problem requiring both an in-depth understanding of the constituent algorithms' dynamics and tacit knowledge. This paper proposes a Grammatical Evolution (GE) based automated configuration of a recent ensemble DE algorithm - Improved Multi-population Ensemble Differential Evolution (IMPEDE). A Backus-Naur Form grammar, with nine ensemble and DE parameters, has been designed to represent all possible IMPEDE configurations. The proposed approach has been employed to evolve IMPEDE configurations that solve the CEC'17 benchmark optimization problems. The evolved configurations have been validated on the CEC'14 suite and on a real-world optimization problem - the economic load dispatch (ELD) problem from the CEC'11 suite. The simulation experiments demonstrate that the proposed approach is capable of evolving IMPEDE configurations that exhibit statistically superior or comparable performance against the manual configuration of IMPEDE as well as against other prominent ensemble DE algorithms.
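To make the grammar-based mapping concrete, the sketch below shows how a Grammatical Evolution configurator can turn an integer genotype into one concrete DE configuration via a BNF-style grammar. The parameter names, value choices and grammar are illustrative placeholders, not the actual nine-parameter IMPEDE grammar from the paper.

```python
# Minimal sketch of a GE-style configuration mapping (illustrative, not the IMPEDE grammar):
# a genotype of integer codons selects production rules from a BNF-like grammar,
# yielding one concrete DE configuration.
GRAMMAR = {
    "<config>":   [["<pop_size>", "<num_pops>", "<mutation>", "<F>", "<CR>"]],
    "<pop_size>": [["50"], ["100"], ["150"]],            # hypothetical choices
    "<num_pops>": [["2"], ["3"], ["4"]],
    "<mutation>": [["rand/1"], ["best/1"], ["current-to-pbest/1"]],
    "<F>":        [["0.3"], ["0.5"], ["0.7"], ["0.9"]],
    "<CR>":       [["0.1"], ["0.5"], ["0.9"]],
}

def derive(symbol, genotype, pos=0):
    """Expand `symbol` depth-first, consuming one codon per non-terminal choice."""
    if symbol not in GRAMMAR:                                 # terminal symbol
        return [symbol], pos
    rules = GRAMMAR[symbol]
    choice = rules[genotype[pos % len(genotype)] % len(rules)]  # codon mod #rules
    pos += 1
    out = []
    for sym in choice:
        expanded, pos = derive(sym, genotype, pos)
        out.extend(expanded)
    return out, pos

genotype = [7, 2, 11, 5, 3, 8]                                # integer codons, e.g. from a GA
config, _ = derive("<config>", genotype)
print(dict(zip(["pop_size", "num_pops", "mutation", "F", "CR"], config)))
```

Each non-terminal consumes one codon modulo the number of applicable rules, which is the standard GE mapping; genotype wrapping is handled here simply by indexing the genotype modulo its length.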
Most optimization problems of practical significance are typically solved by highly configurable parameterized algorithms. To achieve the best performance on a problem instance, a trial-and-error configuration process is required, which is very costly and even prohibitive for problems that are already computationally intensive, e.g., problems associated with machine learning. In the past decades, many studies have been conducted to accelerate the tedious configuration process by learning from a set of training instances. This article refers to these studies as learn to optimize and reviews the progress achieved.
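As a minimal illustration of the learn-to-optimize setting described above (choosing a configuration offline from training instances instead of re-running trial-and-error on every new instance), the sketch below picks the candidate configuration with the best average cost on a synthetic training set; the candidate parameters and the cost matrix are placeholders.

```python
import numpy as np

# Sketch only: candidate configurations and per-instance costs are synthetic stand-ins.
rng = np.random.default_rng(0)
candidate_configs = [{"pop_size": p, "cr": cr} for p in (20, 50, 100) for cr in (0.1, 0.5, 0.9)]
cost = rng.gamma(shape=2.0, scale=10.0, size=(len(candidate_configs), 200))  # config x training instance

best = int(np.argmin(cost.mean(axis=1)))     # configuration with best average training cost
print("learned configuration:", candidate_configs[best])
```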
Despite their great success in recent years, neural networks have been found to be vulnerable to adversarial attacks. These attacks are often based on slight perturbations of given inputs that cause them to be misclassified. Several methods have been proposed to formally prove robustness of a given network against such attacks. However, these methods typically give rise to high computational demands, which severely limit their scalability. Recent state-of-the-art approaches state the verification task as a minimisation problem, which is formulated and solved as a mixed-integer linear programming (MIP) problem. We extend this approach by leveraging automated algorithm configuration techniques and, more specifically, construct a portfolio of MIP solver configurations optimised for the neural network verification task. We test this approach on two recent, state-of-the-art MIP-based verification engines, MIPVerify and Venus, and achieve substantial improvements in CPU time by average factors of up to 4.7 and 10.3, respectively.
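A common way to realise such a portfolio is greedy construction on training data: repeatedly add the solver configuration that most reduces the portfolio's per-instance best runtime. The sketch below illustrates this with a synthetic runtime matrix; it is not the procedure used in the paper, where the portfolio members are MIP solver parameter settings optimised for MIPVerify and Venus verification instances.

```python
import numpy as np

# Sketch of greedy portfolio construction over solver configurations (illustrative).
def greedy_portfolio(runtimes, k):
    """runtimes[c, i]: CPU time of configuration c on training instance i."""
    chosen = []
    for _ in range(k):
        remaining = [c for c in range(runtimes.shape[0]) if c not in chosen]
        # pick the configuration that most improves the portfolio's best-per-instance runtime
        best = min(remaining, key=lambda c: runtimes[chosen + [c]].min(axis=0).mean())
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
runtimes = rng.lognormal(mean=3.0, sigma=1.0, size=(20, 50))   # 20 configs x 50 instances (synthetic)
print(greedy_portfolio(runtimes, k=3))
```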
ISBN: (Print) 9798400701207
Automatic algorithm configuration procedures aim at supporting the design and application of optimization algorithms by providing specialized tools to automatically adjust their parameters to use the available computational resources effectively. The irace configurator is a state-of-the-art method implementing an iterated racing procedure that, in its current implementation, allows parallel executions of the target algorithm. Parallel evaluation is crucial for the efficient use of computational resources, reducing the wall-clock time that the configuration process needs to find a good configuration. In this work, we propose an alternative to irace that performs a single race instead of iterative races. The continuously racing configurator (crace) evaluates configurations, removes poor ones and generates new ones asynchronously, granting a high level of flexibility in the configuration process compared to the previous iterative scheme. In this paper, we provide a general description of the crace configuration procedure and perform initial exploratory experiments on five configuration scenarios. These experiments focus on evaluating the effect of the minimum number of configurations required to be involved in the race and the number of parallel evaluations. The first results are encouraging, showing that crace can be a competitive procedure on the scenarios evaluated.
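The sketch below caricatures the single continuing race in plain Python: configurations accumulate results one run at a time, clearly worse configurations are dropped once they have enough results, and new candidates are generated near the current best. crace itself performs these steps asynchronously and with proper statistical elimination tests; the simple mean-gap rule and the one-dimensional toy configuration space here are stand-ins.

```python
import random, statistics

# Toy stand-in for the target algorithm: the cost of a configuration in [0, 1], optimum near 0.3.
def evaluate(config, rng):
    return abs(config - 0.3) + rng.gauss(0, 0.05)

def continuous_race(budget=300, min_results=5, seed=1):
    rng = random.Random(seed)
    alive = {round(rng.random(), 3): [] for _ in range(6)}    # initial candidate configurations
    for _ in range(budget):
        cfg = min(alive, key=lambda c: len(alive[c]))         # evaluate the least-tested configuration
        alive[cfg].append(evaluate(cfg, rng))
        scored = {c: statistics.mean(r) for c, r in alive.items() if len(r) >= min_results}
        if len(scored) < 2:
            continue
        best_cfg = min(scored, key=scored.get)
        for c, m in scored.items():
            if c != best_cfg and m > scored[best_cfg] + 0.1 and len(alive) > 2:
                del alive[c]                                   # eliminate a clearly worse configuration
                child = min(max(best_cfg + rng.gauss(0, 0.05), 0.0), 1.0)
                alive.setdefault(round(child, 3), [])          # generate a new candidate near the best
    return min(alive, key=lambda c: statistics.mean(alive[c]) if alive[c] else float("inf"))

print(continuous_race())
```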
ISBN: (Print) 9781450392686
This paper summarizes our work "Automated Configuration of Genetic Algorithms by Tuning for Anytime Performance", to appear in IEEE Transactions on Evolutionary Computation. Finding the best configuration of algorithms' hyperparameters for a given optimization problem is an important task in evolutionary computation. In our work, we compare the results of four different hyperparameter optimization approaches for a family of genetic algorithms on 25 diverse pseudo-Boolean optimization problems. More precisely, we compare previously obtained results from a grid search with those obtained from three automated configuration techniques: iterated racing, mixed-integer parallel efficient global optimization, and mixed-integer evolutionary strategies. Using two different cost metrics, expected running time and the area under the empirical cumulative distribution function curve, we find that in several cases the best configurations with respect to expected running time are obtained when using the area under the empirical cumulative distribution function curve as the cost metric during the configuration process. Our results suggest that even when interested in expected running time, it might be preferable to use anytime performance measures for the configuration task. We also observe that tuning for expected running time is much more sensitive to the budget that is allocated to the target algorithms.
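For reference, the two cost metrics compared in that study can be computed as follows; the formulas are the standard benchmarking definitions (ERT as total evaluations spent divided by the number of successful runs, and the area under the ECDF of hitting times over a budget grid), while the run data in the sketch is synthetic.

```python
import numpy as np

def expected_running_time(evals_used, success):
    """ERT = total evaluations spent across all runs / number of successful runs."""
    n_success = success.sum()
    return evals_used.sum() / n_success if n_success else float("inf")

def ecdf_auc(hitting_times, budgets):
    """Area under the ECDF of hitting times over a budget grid:
    the fraction of (run, budget) pairs where the target was hit within the budget."""
    hit = hitting_times[:, None] <= budgets[None, :]
    return hit.mean()

evals_used = np.array([800, 1200, 5000, 950])        # evaluations spent per run (synthetic)
success = np.array([True, True, False, True])        # did the run reach the target?
hitting_times = np.where(success, evals_used, np.inf)
budgets = np.geomspace(10, 5000, num=50)

print("ERT:", expected_running_time(evals_used, success))
print("AUC of ECDF:", ecdf_auc(hitting_times, budgets))
```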
With the goal of devising algorithms for decision support in operational tasks, we introduce a new methodology for the automated configuration of algorithms for combinatorial online optimization problems. The procedure draws upon available instance data and is capable of recognizing data patterns which prove beneficial to the overall outcome. Since online optimization requires repetitive decision making without complete future information, no online algorithm can be optimal for every instance, and it is reasonable to restrict attention to rule-based algorithms. We consider such algorithms in the form of procedures which derive their decisions using a threshold value. Threshold values are computed by evaluating a mathematical term (threshold value expression) composed of the available instance data elements. The goal then consists of determining the structure of the threshold value expression leading to the best algorithm performance. To this end, we employ a simulated annealing scheme returning the most favorable term composition given the available instance data. The resulting methodology can be implemented as part of data-driven decision support systems in order to facilitate knowledge-based decision making. Decision rules are generated in an automated fashion once historical input data is provided. The methodology is successfully instantiated in a series of computational experiments for three classes of combinatorial online optimization problems (scheduling, packing, lot sizing). Results show that automatically configured online algorithms are even capable of substantially outperforming well-known online algorithms in the respective problem settings. We attribute this effect to the methodology's capability of integrating instance data into the process of algorithm configuration.
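A toy version of the threshold-rule idea is sketched below: an online algorithm accepts the first offer whose value exceeds a threshold computed from the data observed so far, and simulated annealing searches a small pool of candidate threshold expressions using total reward on training instances. The expression pool, acceptance rule and instance generator are illustrative assumptions, not the paper's actual space of threshold value expressions.

```python
import math, random

EXPRESSIONS = {                              # hypothetical threshold value expressions
    "mean_seen":        lambda seen: sum(seen) / len(seen),
    "max_seen":         lambda seen: max(seen),
    "0.8*max_seen":     lambda seen: 0.8 * max(seen),
    "mean+half_spread": lambda seen: sum(seen) / len(seen) + (max(seen) - min(seen)) / 2,
}

def run_online(expr, values, warmup=5):
    """Observe `warmup` offers, then accept the first offer above the threshold."""
    threshold = EXPRESSIONS[expr](values[:warmup])
    for v in values[warmup:]:
        if v >= threshold:
            return v
    return values[-1]                        # forced to take the last offer

def anneal(instances, steps=200, temp=1.0, cooling=0.98, rng=random.Random(0)):
    names = list(EXPRESSIONS)
    current = rng.choice(names)
    current_score = sum(run_online(current, inst) for inst in instances)
    for _ in range(steps):
        candidate = rng.choice(names)
        score = sum(run_online(candidate, inst) for inst in instances)
        # Metropolis acceptance: always accept improvements, sometimes accept worse moves
        if score > current_score or rng.random() < math.exp((score - current_score) / temp):
            current, current_score = candidate, score
        temp *= cooling
    return current

instances = [[random.Random(i).uniform(0, 100) for _ in range(30)] for i in range(50)]
print(anneal(instances))
```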
ISBN: (Print) 9781728190488
We propose a novel method for automated algorithm selection in the domain of single-objective continuous black-box optimization. In contrast to existing methods, we use convolutional neural networks as the selection apparatus, which base their decisions on a so-called 'fitness map'. This fitness map is a 2D representation of a two-dimensional search space in which different grey scales indicate the quality of the solutions found in certain areas. Our approach uses a modular CMA-ES framework which offers the option to create the conventional CMA-ES, CMA-ES with the alternate step-size adaptation, and many other variants proposed over the years. In total, 4,608 different configurations are possible, most of which are of complementary nature. In this proof-of-concept work, we consider a subset of 32 possible configurations. The developed method is evaluated on an excerpt of the BBOB functions, and its performance is compared against baselines that are commonly used in automated algorithm selection: the best standalone algorithm (configuration) and the best obtainable sequence of configurations. While the results indicate that the use of the fitness map is not superior on every benchmark problem, it clearly shows its merit on the harder-to-solve problems. This offers a promising perspective for generalizing to other types of optimization problems and problem domains.
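The sketch below shows one plausible way to build such a fitness map: sampled solutions are binned onto a fixed grid over the 2D search space, and each cell stores the normalised best objective value found there, yielding a grey-scale image a CNN can consume. Grid resolution, normalisation and the use of sphere-function samples are assumptions, not the paper's exact construction.

```python
import numpy as np

def fitness_map(points, values, bounds=(-5.0, 5.0), resolution=32):
    """Bin sampled 2D solutions onto a grid; each cell keeps its best (lowest) value."""
    grid = np.full((resolution, resolution), np.nan)
    lo, hi = bounds
    idx = np.clip(((points - lo) / (hi - lo) * resolution).astype(int), 0, resolution - 1)
    for (ix, iy), v in zip(idx, values):
        grid[iy, ix] = v if np.isnan(grid[iy, ix]) else min(grid[iy, ix], v)
    known = ~np.isnan(grid)
    if known.any():                                   # normalise known cells to [0, 1]
        v = grid[known]
        grid[known] = (v - v.min()) / (v.max() - v.min() + 1e-12)
    grid[~known] = 1.0                                # unvisited cells get the worst grey value
    return grid

rng = np.random.default_rng(0)
pts = rng.uniform(-5, 5, size=(500, 2))               # e.g. early samples of the optimizer
vals = (pts ** 2).sum(axis=1)                         # sphere function as a stand-in objective
print(fitness_map(pts, vals).shape)                   # (32, 32) grey-scale image
```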
ISBN: (Print) 9781450380539
Algorithms often have tunable parameters that impact performance metrics such as runtime and solution quality. For many algorithms used in practice, no parameter settings admit meaningful worst-case bounds, so the parameters are made available for the user to tune. Alternatively, parameters may be tuned implicitly within the proof of a worst-case guarantee. Worst-case instances, however, may be rare or nonexistent in practice. A growing body of research has demonstrated that data-driven algorithm design can lead to significant improvements in performance. This approach uses a training set of problem instances sampled from an unknown, application-specific distribution and returns a parameter setting with strong average performance on the training set. We provide a broadly applicable theory for deriving generalization guarantees that bound the difference between the algorithm's average performance over the training set and its expected performance on the unknown distribution. Our results apply no matter how the parameters are tuned, be it via an automated or a manual approach. The challenge is that for many types of algorithms, performance is a volatile function of the parameters: slightly perturbing the parameters can cause a large change in behavior. Prior research (e.g., Gupta and Roughgarden, SICOMP'17; Balcan et al., COLT'17, ICML'18, EC'18) has proved generalization bounds by employing case-by-case analyses of greedy algorithms, clustering algorithms, integer programming algorithms, and selling mechanisms. We uncover a unifying structure which we use to prove extremely general guarantees, yet we recover the bounds from prior research. Our guarantees, which are tight up to logarithmic factors in the worst case, apply whenever an algorithm's performance is a piecewise-constant, piecewise-linear, or, more generally, piecewise-structured function of its parameters. Our theory also implies novel bounds for voting mechanisms and dynamic programming algorithms from computational biology.
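A classic example from this line of work makes the piecewise structure tangible: a greedy knapsack heuristic that orders items by value/size^rho has average training performance that is a piecewise-constant function of the single parameter rho, since changing rho only matters when it flips the greedy ordering. The sketch below tunes rho on a synthetic training distribution; the instance generator and the grid of rho values are assumptions.

```python
import numpy as np

def greedy_knapsack(values, sizes, capacity, rho):
    """Greedy knapsack: take items in the order induced by value / size**rho while they fit."""
    order = np.argsort(-values / sizes ** rho)
    total_value, used = 0.0, 0.0
    for i in order:
        if used + sizes[i] <= capacity:
            used += sizes[i]
            total_value += values[i]
    return total_value

rng = np.random.default_rng(0)
training = [(rng.uniform(1, 10, 20), rng.uniform(1, 5, 20), 20.0) for _ in range(100)]

grid = np.linspace(0.0, 2.0, 41)
avg = [np.mean([greedy_knapsack(v, s, c, rho) for v, s, c in training]) for rho in grid]
best_rho = grid[int(np.argmax(avg))]       # empirically best parameter on the training set
print("tuned rho:", best_rho)
```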
We present an extensive study of methods for exactly solving stochastic constraint (optimisation) problems (SCPs) in network analysis. These problems are prevalent in science, governance and industry. The first method we study is generic and decomposes stochastic constraints into a multitude of smaller local constraints that are solved using a constraint programming (CP) or mixed-integer programming (MIP) solver. However, many SCPs are formulated on probability distributions with a monotonic property, meaning that adding a positive decision to a partial solution to the problem cannot cause a decrease in solution quality. The second method is specifically designed for solving global stochastic constraints on monotonic probability distributions (SCMDs) in CP. Both methods use knowledge compilation to obtain a decision diagram encoding of the relevant probability distributions, where we focus on ordered binary decision diagrams (OBDDs). We discuss theoretical advantages and disadvantages of these methods and evaluate them experimentally. We observed that global approaches to solving SCMDs outperform decomposition approaches from CP and perform complementarily to MIP-based decomposition approaches, while scaling much more favourably with instance size. Both methods have many alternative design choices, as both knowledge compilation and constraint solvers are used in a single pipeline. To identify which configurations work best, we apply programming by optimisation. Specifically, we show how an automated algorithm configurator can be used to find optimised configurations of our pipeline. After configuration, our global SCMD solving pipeline outperforms its closest competitor (a MIP-based decomposition pipeline) on all test sets we considered, by up to two orders of magnitude in terms of PAR10 scores. (c) 2021 The Author(s). Published by Elsevier B.V.
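The PAR10 score referred to above is the penalised average runtime with penalty factor 10: runs that do not finish within the cutoff are counted as ten times the cutoff. A minimal sketch with made-up runtimes:

```python
# PAR10: average runtime where timed-out runs are charged 10x the per-instance cutoff.
def par10(runtimes, solved, cutoff):
    penalised = [t if ok else 10 * cutoff for t, ok in zip(runtimes, solved)]
    return sum(penalised) / len(penalised)

runtimes = [12.4, 300.0, 87.1, 5.6]      # seconds per instance (synthetic)
solved = [True, False, True, True]       # False = timeout at the cutoff
print(par10(runtimes, solved, cutoff=300.0))
```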