Estimation of distribution algorithms (EDAs) are population-based meta-heuristics that use probabilistic models to describe the relationships between the variables of a problem. From these models, new solutions can be sampled without disrupting building blocks, leading to a more effective search. The key step in an EDA is model building, and better models should enable better exploration of the search space. However, high-quality models require computationally expensive algorithms, making the resulting EDAs too costly. This paper proposes an efficient implementation of the tournament selection algorithm (TS), called Compressed Tournament Selection (CTS). CTS avoids inserting repeated solutions into the population during TS, so model building can be performed on a reduced population (equivalent to the original), improving the performance of the EDA as a whole. In the experiments the method was applied to the Extended Compact Genetic Algorithm, and the results showed that with an appropriate tournament size, CTS can yield high speed-ups without a decrease in model quality. Moreover, CTS preserves the characteristics of TS and can easily be used to improve the efficiency of any EDA based on tournament selection.
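A minimal sketch of the compression idea, assuming illustrative names and tournament details (not the paper's actual implementation): instead of inserting each tournament winner into a mating pool with duplicates, repeated winners are collapsed into a (unique individual, multiplicity) table, so model building visits each distinct solution only once.

```python
import random
from collections import Counter

def compressed_tournament_selection(population, fitness, s, n_select, rng=random):
    """Run n_select s-way tournaments, but store the winners as a
    (unique individual -> count) table instead of a pool with duplicates.
    `population` is a list of hashable individuals (e.g. bit-string tuples)."""
    counts = Counter()
    for _ in range(n_select):
        contestants = [rng.choice(population) for _ in range(s)]
        winner = max(contestants, key=fitness)
        counts[winner] += 1
    # Model building can now iterate over len(counts) unique solutions,
    # weighting each by its multiplicity, instead of over n_select copies.
    return counts

# Toy usage: one-max on 4-bit individuals.
rng = random.Random(0)
pop = [tuple(rng.randint(0, 1) for _ in range(4)) for _ in range(20)]
table = compressed_tournament_selection(pop, fitness=sum, s=4, n_select=100, rng=rng)
```

The selection pressure is identical to plain tournament selection; only the representation of the selected set is compressed.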
This article introduces the Coincidence Algorithm (COIN) and applies it to several multimodal puzzles. COIN belongs to the category of estimation of distribution algorithms (EDAs), which use probabilistic models to generate solutions. COIN's model is a joint probability table of adjacent events (coincidences) derived from the population of candidate solutions. A unique characteristic of COIN is its ability to learn from negative samples. Various experiments show that learning from negative examples helps to prevent premature convergence, promotes diversity and preserves good building blocks.
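A hedged sketch of a COIN-style coincidence model for permutation problems; the step size and the exact reward/punish scheme below are illustrative simplifications, not the paper's precise update rule. Adjacencies observed in good permutations are rewarded, while adjacencies from negative samples are punished:

```python
import numpy as np

def update_coincidence(H, good, bad, step=0.05):
    """Simplified COIN-style update. H[i, j] approximates the probability
    that item j follows item i in a tour. Edges seen in `good` permutations
    are rewarded; edges seen in `bad` (negative-sample) permutations are
    punished. Rows are renormalised to stay valid distributions."""
    for perm, sign in [(p, +1) for p in good] + [(p, -1) for p in bad]:
        for a, b in zip(perm, perm[1:]):
            H[a, b] += sign * step
    H = np.clip(H, 1e-9, None)                 # keep probabilities positive
    return H / H.sum(axis=1, keepdims=True)    # renormalise each row

n = 5
H = np.full((n, n), 1.0 / n)   # uniform initial model
good = [(0, 1, 2, 3, 4)]       # reinforce edges 0->1, 1->2, ...
bad = [(4, 3, 2, 1, 0)]        # punish the reversed adjacencies
H = update_coincidence(H, good, bad)
```

Sampling a new permutation would then walk the table row by row, which is how the negative signal steers generation away from bad adjacencies.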
The two key operators in estimation of distribution algorithms (EDAs) are estimating the distribution model from the selected population and sampling new individuals from the estimated model. Copula EDA introduces copula theory into EDAs. Copula theory provides the theoretical basis, and a practical method, for decomposing a multivariate joint distribution function into a function called the copula and the univariate margins. This paper discusses the estimation and sampling operators in copula EDA and applies three exchangeable Archimedean copulas. The experimental results show that the three copula EDAs perform on a par with some classical EDAs.
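A sketch of the sampling operator under stated assumptions: the Clayton copula stands in for one of the exchangeable Archimedean families, sampled by conditional inversion, and the inverse margins are assumed exponential purely for illustration (in a real copula EDA they would be estimated from the selected population).

```python
import numpy as np

def sample_clayton_eda(theta, inv_margin_x, inv_margin_y, size, rng):
    """Draw (u, v) from a bivariate Clayton copula by conditional inversion,
    then push each coordinate through the inverse marginal CDFs. This
    separates dependence (the copula) from the margins, which is the core
    decomposition copula EDA relies on."""
    u = rng.uniform(size=size)
    w = rng.uniform(size=size)
    # Conditional inverse of the Clayton copula C(u,v) = (u^-t + v^-t - 1)^(-1/t)
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
    return inv_margin_x(u), inv_margin_y(v)

rng = np.random.default_rng(0)
inv_exp = lambda q: -np.log1p(-q) / 2.0   # assumed inverse CDF of Exp(rate=2) margins
x, y = sample_clayton_eda(theta=3.0, inv_margin_x=inv_exp, inv_margin_y=inv_exp,
                          size=10_000, rng=rng)
```

With theta = 3 the Clayton copula induces strong positive dependence, which survives the marginal transforms: x and y come out clearly correlated even though each margin was sampled from a one-dimensional inverse CDF.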
Estimation of distribution algorithms (EDAs) are a relatively new branch of evolutionary algorithms. EDAs replace search operators with estimation of the distribution of the selected individuals and sampling from that distribution. In an EDA, the explicit representation of the population is replaced with a probability distribution over the choices available at each position of the vector that represents a population member. In this paper, an estimation-of-distribution learning framework and the corresponding learning algorithm are proposed, and the relevant properties of the framework are analysed on a probabilistic basis. The framework provides a foundation and a principled criterion for designing and analysing evolutionary learning algorithms based on EDAs. Probability is the core tool of EDAs: EDA-based learning algorithms must estimate the population distribution from sample distributions. The proposed framework can guide and regulate the design of learning algorithms and strategies based on EDAs. The learning problems the framework covers are analysed from a probabilistic perspective through property analysis, proofs and verification. The experimental results show that the proposed framework is feasible for learning from datasets and achieves better learning performance than some other relevant evolutionary learning methods.
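The replacement of an explicit population by a probability distribution over per-position choices is easiest to see in the univariate case. A minimal UMDA-style sketch on one-max (parameter values are illustrative):

```python
import numpy as np

def umda_onemax(n_bits=20, pop_size=100, n_select=50, iters=60, seed=0):
    """Minimal univariate EDA loop: the population's explicit representation
    is summarised by a probability vector p, one probability per bit position.
    Each generation: sample from the model, select, re-estimate p."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                       # uniform initial model
    for _ in range(iters):
        pop = rng.random((pop_size, n_bits)) < p   # sample from the model
        fit = pop.sum(axis=1)                      # one-max fitness
        best = pop[np.argsort(fit)[-n_select:]]    # truncation selection
        p = best.mean(axis=0)                      # distribution estimation
        p = np.clip(p, 0.05, 0.95)                 # keep the model off the boundary
    return p

p = umda_onemax()
```

After a few dozen generations the model concentrates near the all-ones optimum: every entry of p drifts toward its upper clip.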
ISBN:
(print) 9781424421138
The computation modes of agents exhibit relatively uniform characteristics across all kinds of intelligent algorithms, even though they usually take different extrinsic forms. In this paper, a formal description of estimation of distribution algorithms is proposed, and a general framework description of estimation of distribution algorithms, based on the idea of an intelligent computation framework, is discussed and validated against the uniform framework description of intelligent computation abstracted from the various intelligent computation modes.
To solve optimization problems in large-scale networked systems, this paper proposes a method for implementing estimation of distribution algorithms (EDAs) in a decentralized way. The main idea of decentralized EDA is that each subsystem solves its own optimization problem using local information and that of its neighbors. Numerical examples illustrate the effectiveness of the algorithm.
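A hedged sketch of one possible decentralization, under assumptions of my own choosing (a ring topology, a shared univariate model, and simple model averaging; the paper's actual scheme may differ): each agent updates a probability vector from its locally selected individuals and blends it with its two ring neighbours' models, which is the only non-local information used.

```python
import numpy as np

def decentralized_umda(n_agents=6, n_bits=8, pop=60, sel=20, iters=40, mix=0.3, seed=1):
    """Each agent keeps its own probability vector over the shared variables,
    re-estimates it from locally sampled-and-selected individuals, then mixes
    it with its ring neighbours' models."""
    rng = np.random.default_rng(seed)
    models = np.full((n_agents, n_bits), 0.5)
    for _ in range(iters):
        new = np.empty_like(models)
        for a in range(n_agents):
            sample = rng.random((pop, n_bits)) < models[a]       # local sampling
            best = sample[np.argsort(sample.sum(axis=1))[-sel:]]  # local selection
            local = np.clip(best.mean(axis=0), 0.05, 0.95)        # local estimation
            neigh = 0.5 * (models[(a - 1) % n_agents] + models[(a + 1) % n_agents])
            new[a] = (1 - mix) * local + mix * neigh              # neighbour info
        models = new
    return models

models = decentralized_umda()
```

On a one-max objective all agents' models converge toward the optimum despite each agent seeing only its own samples plus two neighbours' models.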
ISBN:
(print) 9781457715860
Estimation of distribution algorithms (EDAs) are a novel class of evolutionary algorithms motivated by the idea of building a probabilistic graphical model of promising solutions to represent the linkage information between the variables of a chromosome. By learning and sampling from the probabilistic graphical model, a new population is generated, and the optimization procedure is repeated until the stopping criteria are met. This paper analyses the mechanism of estimation of distribution algorithms, surveys existing EDAs and categorizes them according to the probabilistic models they use, and concludes with the strengths, weaknesses and future perspectives of EDAs.
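The categorization by probabilistic model hinges on which dependencies the model can encode. A tiny illustrative example (the data here is made up): when bit 1 always equals bit 0, a univariate model loses exactly the linkage that a bivariate conditional table still captures.

```python
import numpy as np

# Four samples in which x1 is always a copy of x0.
data = np.array([[0, 0], [0, 0], [1, 1], [1, 1]])

# Univariate model: one independent probability per position.
p_univ = data.mean(axis=0)                                  # [0.5, 0.5]

# Bivariate model: conditional table P(x1 = 1 | x0 = v).
p_cond = [data[data[:, 0] == v, 1].mean() for v in (0, 1)]  # [0.0, 1.0]

# Under the univariate model, the never-observed string [0, 1] gets
# probability 0.5 * 0.5 = 0.25; under the bivariate model it correctly
# gets P(x0=0) * P(x1=1 | x0=0) = 0.5 * 0.0 = 0.
```

Richer graphical models (trees, Bayesian networks, marginal product models) extend the same idea to higher-order linkage, which is what separates the categories the survey describes.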
ISBN:
(print) 9781605581316
Estimation of distribution algorithms are a relatively new meta-heuristic used in genetics-based machine learning to solve combinatorial and continuous optimization problems. One of the distinctive features of this family of algorithms is that the search for the optimum is performed within a candidate space of probability distributions associated with the problem, rather than over the population of possible solutions. In this paper, a framework based on Information Geometry [3] is applied to propose a geometrical interpretation of the different operators used in EDAs and to provide a better understanding of the underlying behavior of this family of algorithms from a novel point of view. The analysis carried out and the simple examples introduced show the importance of the boundary of the statistical model with respect to the distributions an EDA may converge to.
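The role of the model's boundary can be made concrete with a small demonstration (my own toy example, not one from the paper): for a univariate model over bit-strings, the model is a point in the hypercube [0,1]^n, and once any coordinate reaches 0 or 1, maximum-likelihood re-estimation can never move it back into the interior.

```python
import numpy as np

# Once a coordinate of the model sits on the boundary (probability 0 or 1),
# every sample fixes that bit, so sampling followed by ML re-estimation
# leaves the coordinate where it is: boundary points are absorbing.
rng = np.random.default_rng(0)
p = np.array([0.5, 1.0, 0.0])        # coordinates 1 and 2 are on the boundary
for _ in range(50):
    pop = (rng.random((30, 3)) < p).astype(float)   # sample 30 individuals
    p = pop.mean(axis=0)                            # ML re-estimation
```

The interior coordinate keeps fluctuating under resampling, but the two boundary coordinates never move, which is the convergence behaviour the geometric analysis of the statistical model's boundary is about.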
ISBN:
(print) 9781595936974
Recent research into single-objective continuous estimation-of-distribution algorithms (EDAs) has shown that when maximum-likelihood estimates are used for parametric distributions such as the normal distribution, the EDA can easily suffer from premature convergence. In this paper we argue that the same holds for multi-objective optimization. Our aim is to transfer a solution called Adaptive Variance Scaling (AVS) from the single-objective case to the multi-objective case. To this end, we focus on an existing EDA for continuous multi-objective optimization, MIDEA, which employs mixture distributions. We propose a means to combine AVS with the normal mixture distribution, as opposed to the single normal distribution for which AVS was introduced. In addition, we improve the AVS scheme using the Standard-Deviation Ratio (SDR) trigger. Intuitively put, variance scaling is triggered by the SDR only if improvements are found far away from the mean. For the multi-objective case, this addition is important to keep the variance from being scaled to excessively large values. Experiments performed on five well-known benchmark problems show that the addition of SDR and AVS enlarges the class of problems that continuous multi-objective EDAs can solve reliably.
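A hedged sketch of AVS with an SDR-style trigger on a single-normal, single-objective Gaussian EDA (the paper applies the scheme to MIDEA's normal mixtures in the multi-objective setting; a single normal keeps the sketch short, and the scaling factors and bounds below are illustrative):

```python
import numpy as np

def avs_sdr_gaussian_eda(f, dim=2, pop=50, sel=15, iters=80, seed=0):
    """Gaussian EDA with a variance multiplier c. When an improvement is
    found, c is scaled up only if the improving sample lies more than one
    standard deviation from the current mean (the SDR-style trigger);
    otherwise c shrinks, counteracting both premature convergence and
    excessive variance inflation."""
    rng = np.random.default_rng(seed)
    mean, cov, c = np.full(dim, 5.0), np.eye(dim), 1.0
    best_f = np.inf
    for _ in range(iters):
        X = rng.multivariate_normal(mean, c * cov, size=pop)
        fit = np.array([f(x) for x in X])
        order = np.argsort(fit)
        elite = X[order[:sel]]
        if fit[order[0]] < best_f:                     # improvement found
            best_f = fit[order[0]]
            z = np.abs(X[order[0]] - mean) / np.sqrt(c * np.diag(cov))
            c = min(c * 1.1, 10.0) if z.max() > 1.0 else max(c / 1.1, 0.1)
        else:
            c = max(c / 1.1, 0.1)                      # no improvement: shrink
        mean = elite.mean(axis=0)                      # ML estimation on elite
        cov = np.cov(elite.T) + 1e-6 * np.eye(dim)     # regularised covariance
    return mean, best_f

mean, best = avs_sdr_gaussian_eda(lambda x: float(np.sum(x ** 2)))
```

Starting the model far from the optimum of the sphere function, the scaled variance lets the distribution travel instead of collapsing around its initial neighbourhood.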