In this paper, a Pareto-archived estimation-of-distribution algorithm (PAEDA) is presented for the multiobjective resource-constrained project scheduling problem with makespan and resource investment criteria. First, by combining the activity list and the resource list, an encoding scheme named the activity-resource list is presented. Second, a novel hybrid probability model is designed to predict the most promising activity permutation and resource capacities. Third, a new sampling and updating mechanism for the probability model is developed to track the area with promising solutions. In addition, a Pareto archive is used to store the nondominated solutions that have been explored, and another archive is used to store the solutions for updating the probability model. The evolution process of the PAEDA is visualized, showing that the most promising area of the search space is tracked. Extensive numerical tests then demonstrate that the PAEDA outperforms existing methods.
In the context of black-box optimization, black-box complexity is used for understanding the inherent difficulty of a given optimization problem. Central to our understanding of nature-inspired search heuristics in this context is the notion of unbiasedness. Specialized black-box complexities have been developed in order to better understand the limitations of these heuristics - especially of (population-based) evolutionary algorithms (EAs). In contrast to this, we focus on a model for algorithms explicitly maintaining a probability distribution over the search space: so-called estimation-of-distribution algorithms (EDAs). We consider the recently introduced n-Bernoulli-λ-EDA framework, which subsumes, for example, the commonly known EDAs PBIL, UMDA, λ-MMAS_IB, and cGA. We show that an n-Bernoulli-λ-EDA is unbiased if and only if its probability distribution satisfies a certain invariance property under isometric automorphisms of [0, 1]^n. By restricting how an n-Bernoulli-λ-EDA can perform an update, in a way common to many examples, we derive more concise characterizations, which are easy to verify. We demonstrate this by showing that our examples above are all unbiased. © 2018 Elsevier B.V. All rights reserved.
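The framework described above can be pictured as a generic loop in which the update rule is the only algorithm-specific component. A minimal Python sketch (function names and the PBIL-style rule are illustrative, not the paper's formal definitions):

```python
import random

def bernoulli_lambda_eda(n, lam, fitness, update, iters=100):
    """Generic n-Bernoulli-lambda-EDA loop sketch: each iteration samples
    lam offspring from a product of Bernoulli distributions and hands the
    frequency vector plus the ranked offspring to a pluggable update rule
    (UMDA, PBIL, cGA, etc. differ only in `update`)."""
    p = [0.5] * n  # one Bernoulli frequency per bit
    for _ in range(iters):
        offspring = [[1 if random.random() < pi else 0 for pi in p]
                     for _ in range(lam)]
        ranked = sorted(offspring, key=fitness, reverse=True)
        p = update(p, ranked)
    return p

def pbil_update(p, ranked, rho=0.1):
    """PBIL-style update: move each frequency a fraction rho towards
    the best sample of the current iteration."""
    best = ranked[0]
    return [(1 - rho) * pi + rho * b for pi, b in zip(p, best)]
```

With `fitness=sum` (i.e., OneMax) the frequencies drift towards the all-ones corner; the same loop instantiates the other EDAs by swapping in their update rules.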
ISBN:
(Print) 9783031700705; 9783031700712
The compact genetic algorithm (cGA), parameterized by its hypothetical population size K, offers a low-memory alternative to evolving a large offspring population of solutions. It evolves a probability distribution, biasing it towards promising samples. For the classical benchmark ONEMAX, the cGA has two different modes of operation: a conservative one with small step sizes Θ(1/(√n log n)), which is slow but prevents genetic drift, and an aggressive one with large step sizes Θ(1/log n), in which genetic drift leads to wrong decisions, but those are corrected efficiently. On ONEMAX, an easy hill-climbing problem, both modes lead to optimization times of Θ(n log n) and are thus equally efficient. In this paper we study how both regimes change when we replace ONEMAX by the harder hill-climbing problem DYNAMIC BINVAL. It turns out that the aggressive mode is not affected and still yields quasi-linear runtime O(n polylog n). However, the conservative mode becomes substantially slower, yielding a runtime of Ω(n²), since genetic drift can only be avoided with smaller step sizes of O(1/n). We complement our theoretical results with simulations.
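For concreteness, the basic cGA step referenced in these abstracts can be sketched as follows (a minimal illustrative Python version with the usual 1/n frequency borders; parameter names are ours, and the step size is 1/K):

```python
import random

def onemax(x):
    """Fitness: number of one-bits."""
    return sum(x)

def cga(n, K, max_iters=200_000):
    """Minimal cGA sketch: K is the hypothetical population size, so each
    frequency moves in steps of 1/K towards the better of two samples,
    capped at the borders 1/n and 1 - 1/n."""
    p = [0.5] * n
    for _ in range(max_iters):
        x = [1 if random.random() < pi else 0 for pi in p]
        y = [1 if random.random() < pi else 0 for pi in p]
        if onemax(y) > onemax(x):
            x, y = y, x  # x is now the better of the two samples
        for i in range(n):
            if x[i] != y[i]:
                step = 1 / K if x[i] == 1 else -1 / K
                p[i] = min(1 - 1 / n, max(1 / n, p[i] + step))
        if all(pi >= 1 - 1 / n for pi in p):
            break  # model has converged to the upper borders
    return p
```

The two regimes in the abstract correspond to choosing K large (small, conservative steps) or K small (large, aggressive steps).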
Choosing a suitable algorithm from the myriad of different search heuristics is difficult when faced with a novel optimization problem. In this work, we argue that the purely academic question of what could be the best possible algorithm in a certain broad class of black-box optimizers can give fruitful indications in which direction to search for good established heuristics. We demonstrate this approach on the recently proposed DLB benchmark. Our finding that the unary unbiased black-box complexity is only O(n²) suggests the Metropolis algorithm as an interesting candidate, and we prove that it solves the DLB problem in quadratic time. We also prove that better runtimes cannot be obtained in the class of unary unbiased algorithms. We therefore shift our attention to algorithms that use the information of more parents to generate new solutions and find that the significance-based compact genetic algorithm can solve the DLB problem in time O(n log n). © 2023 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
In the first and so far only mathematical runtime analysis of an estimation-of-distribution algorithm (EDA) on a multimodal problem, Hasenöhrl and Sutton (GECCO 2018) showed for any k = o(n) that the compact genetic algorithm (cGA) with any hypothetical population size μ = Ω(n e^(4k) + n^(3.5+ε)) with high probability finds the optimum of the n-dimensional jump function with jump size k in time O(μ n^1.5 log n). We significantly improve this result for small jump sizes k ≤ (1/20) ln n - 1. In this case, already for μ = Ω(√n log n) ∩ poly(n) the runtime of the cGA with high probability is only O(μ√n). For the smallest admissible values of μ, our result gives a runtime of O(n log n), whereas the previous one only shows O(n^(5+ε)). Since it is known that the cGA with high probability needs at least Ω(μ√n) iterations to optimize the unimodal ONEMAX function, our result shows that the cGA, in contrast to most classic evolutionary algorithms, is here able to cross moderate-sized valleys of low fitness at no extra cost. For large k, we show that the exponential (in k) runtime guarantee of Hasenöhrl and Sutton is tight and cannot be improved, also not by using a smaller hypothetical population size. We prove that any choice of the hypothetical population size leads to a runtime that, with high probability, is at least exponential in the jump size k. This result might be the first non-trivial exponential lower bound for EDAs that holds for arbitrary parameter settings. To complete the picture, we show that the cGA with hypothetical population size μ = Ω(log n) with high probability needs Ω(μ√n + n log n) iterations to optimize any n-dimensional jump function. This bound was known for ONEMAX, but, as we also show, the usual domination arguments cannot be used to extend lower bounds on the performance of the cGA on ONEMAX to arbitrary functions with a unique optimum. As a side result, we provide a simple general method ...
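The jump function discussed above is, in a common textbook formulation, ONEMAX with a fitness valley of width k just below the all-ones optimum; a small Python sketch:

```python
def jump(x, k):
    """Jump_k (common textbook formulation): fitness is k + |x|_1 if the
    number of one-bits |x|_1 is at most n - k or equals n; otherwise the
    search point lies in the valley and fitness drops to n - |x|_1."""
    n = len(x)
    ones = sum(x)
    if ones == n or ones <= n - k:
        return k + ones
    return n - ones  # inside the valley: fitness decreases towards the rim
```

The valley of the k - 1 Hamming levels below the optimum is what most classic evolutionary algorithms must jump over, and what the cGA, per the result above, crosses at no extra cost for moderate k.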
In their recent work, Lehre and Nguyen (2019) show that the univariate marginal distribution algorithm (UMDA) needs time exponential in the parent population size to optimize the DeceptiveLeadingBlocks (DLB) problem. They conclude from this result that univariate EDAs have difficulties with deception and epistasis. In this work, we show that this negative finding is caused by the choice of the parameters of the UMDA. When the population sizes are chosen large enough to prevent genetic drift, then the UMDA optimizes the DLB problem with high probability with at most λ(n/2 + 2e ln n) fitness evaluations. Since an offspring population size λ of order n log n can prevent genetic drift, the UMDA can solve the DLB problem with O(n² log n) fitness evaluations. In contrast, for classic evolutionary algorithms no better runtime guarantee than O(n³) is known (which we prove to be tight for the (1+1) EA), so our result rather suggests that the UMDA can cope well with deception and epistasis. From a broader perspective, our result shows that the UMDA can cope better with local optima than many classic evolutionary algorithms; such a result was previously known only for the compact genetic algorithm. Together with the lower bound of Lehre and Nguyen, our result for the first time rigorously proves that running EDAs in the regime with genetic drift can lead to drastic performance losses.
With elementary means, we prove a stronger run time guarantee for the univariate marginal distribution algorithm (UMDA) optimizing the LEADINGONES benchmark function in the desirable regime with low genetic drift. If the population size is at least quasilinear, then, with high probability, the UMDA samples the optimum in a number of iterations that is linear in the problem size divided by the logarithm of the UMDA's selection rate. This improves over the previous guarantee, obtained by Dang and Lehre (2015) via the deep level-based population method, both in terms of the run time and by demonstrating further run time gains from small selection rates. Under similar assumptions, we prove a lower bound that matches our upper bound up to constant factors. © 2020 Elsevier B.V. All rights reserved.
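The UMDA loop itself is simple; a minimal illustrative Python sketch on the LEADINGONES benchmark (with the standard 1/n frequency borders; parameter names are ours, with mu/lam being the selection rate):

```python
import random

def leading_ones(x):
    """Number of consecutive one-bits at the start of x."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def umda(n, lam, mu, max_iters=5000):
    """Minimal UMDA sketch: sample lam offspring from the product
    distribution, select the mu best, and set each frequency to the
    mean of the selected bits, capped at 1/n and 1 - 1/n."""
    p = [0.5] * n
    for _ in range(max_iters):
        pop = [[1 if random.random() < pi else 0 for pi in p]
               for _ in range(lam)]
        pop.sort(key=leading_ones, reverse=True)
        if leading_ones(pop[0]) == n:
            return pop[0]  # optimum sampled
        for i in range(n):
            freq = sum(x[i] for x in pop[:mu]) / mu
            p[i] = min(1 - 1 / n, max(1 / n, freq))
    return pop[0]
```

Choosing λ of order n log n, as in the abstracts above, keeps genetic drift low; the frequencies then fix left to right, mirroring how LEADINGONES reveals its bits.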
Due to the rapid increase in the usage of and demand for wireless sensor networks (WSNs), the limited frequency spectrum available for WSN applications will be extremely crowded in the near future. More sensor devices also mean more recharging and replacement of batteries, which will have a significant impact on the global carbon footprint. In this paper, we propose a relay-assisted cognitive radio sensor network (CRSN) that allocates communication resources in an environmentally friendly manner. We use shared-band amplify-and-forward relaying for cooperative communication in the proposed CRSN. We present a multi-objective optimization architecture for resource allocation in a green cooperative cognitive radio sensor network (GC-CRSN). The proposed multi-objective framework jointly performs relay assignment and power allocation in the GC-CRSN while optimizing two conflicting objectives. The first objective is to maximize the total throughput, and the second is to minimize the total transmission power of the CRSN. The proposed relay-assignment and power-allocation problem is a non-convex mixed-integer non-linear optimization problem (NC-MINLP), which is generally NP-hard. We introduce a hybrid heuristic algorithm for this problem. The hybrid heuristic includes an estimation-of-distribution algorithm (EDA) for performing power allocation and iterative greedy schemes for constraint satisfaction and relay assignment. We analyze the throughput and power-consumption tradeoff in the GC-CRSN. A detailed analysis of the performance of the proposed algorithm is presented with simulation results.
ISBN:
(Print) 9781450356183
Estimation-of-distribution algorithms (EDAs) are randomized search heuristics that maintain a stochastic model of the solution space. This model is updated from iteration to iteration based on the quality of the solutions sampled according to the model. As previous works show, this short-term perspective can lead to erratic updates of the model, in particular to bit frequencies approaching a random boundary value. This can lead to significant performance losses. In order to overcome this problem, we propose a new EDA that takes into account a longer history of samples and updates its model only with respect to information which it classifies as statistically significant. We prove that this significance-based compact genetic algorithm (sig-cGA) optimizes the common benchmark functions ONEMAX and LEADINGONES both in O(n log n) time, a result shown for no other EDA or evolutionary algorithm so far. For the recently proposed scGA - an EDA that tries to prevent erratic model updates by imposing a bias towards the uniformly distributed model - we prove that it optimizes ONEMAX only in a time exponential in the hypothetical population size 1/ρ.
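The core idea of updating only on statistical significance can be illustrated with a crude per-position check (a simplified stand-in, not the sig-cGA's exact significance rule):

```python
import math

def significant(history, p, eps=2.0):
    """Crude significance sketch: given the recently saved bit values at
    one position (history) and the current frequency p, report +1 or -1
    only when the observed number of ones deviates from its expectation
    under p by more than eps standard deviations; otherwise 0 (no update).
    This damps the erratic, drift-driven updates described above."""
    s = len(history)
    if s < 10:
        return 0  # too little evidence to act on
    ones = sum(history)
    mean = s * p
    sd = math.sqrt(s * p * (1 - p))
    if ones > mean + eps * sd:
        return +1  # significantly many ones: move the frequency up
    if ones < mean - eps * sd:
        return -1  # significantly few ones: move the frequency down
    return 0
```

A model update gated this way only reacts to a position once the sampling history contradicts the current frequency beyond what random noise explains.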
The univariate marginal distribution algorithm (UMDA) - a popular estimation-of-distribution algorithm - is studied from a run time perspective. On the classical OneMax benchmark function on bit strings of length n, a lower bound of Ω(λ + μ√n + n log n), where μ and λ are algorithm-specific parameters, on its expected run time is proved. This is the first direct lower bound on the run time of the UMDA. It is stronger than the bounds that follow from general black-box complexity theory and is matched by the run time of many evolutionary algorithms. The results are obtained through advanced analyses of the stochastic change of the frequencies of bit values maintained by the algorithm, including carefully designed potential functions. These techniques may prove useful in advancing the field of run time analysis for estimation-of-distribution algorithms in general. © 2018 Elsevier B.V. All rights reserved.