In this paper we propose three techniques to improve the performance of one of the major algorithms for large-scale continuous global function optimization. Multilevel Cooperative Co-evolution (MLCC) is based on a Cooperative Co-evolutionary (CC) framework and employs a technique called random grouping to place interacting variables in the same subcomponent. It also uses a technique called adaptive weighting for the co-adaptation of subcomponents. We prove that the probability of grouping interacting variables in one subcomponent using random grouping drops significantly as the number of interacting variables increases, which calls for more frequent random grouping of variables. We show how to increase the frequency of random grouping without increasing the number of fitness evaluations. We also show that adaptive weighting is ineffective and in most cases fails to improve the quality of the found solutions, and hence wastes a considerable amount of CPU time on extra evaluations of the objective function. Finally, we propose a new technique for self-adaptation of the subcomponent sizes in CC. We demonstrate how a substantial improvement can be gained by applying these three techniques.
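To make the probability claim concrete, the following minimal Python sketch (not from the paper) estimates by simulation how often v interacting variables land in the same subcomponent under one random grouping of n variables into m equal-sized subcomponents. The parameter values and the m**(1-v) back-of-the-envelope approximation printed for comparison are illustrative assumptions, not figures quoted from the paper.

import random

def interacting_in_one_group(n, m, v):
    """One random grouping: permuting the n variables and cutting the
    permutation into m blocks of size s = n // m is equivalent to giving
    the v interacting variables v distinct random positions; they end up
    together iff all positions fall inside the same block."""
    s = n // m
    positions = random.sample(range(n), v)
    return len({p // s for p in positions}) == 1

def estimate(n, m, v, trials=200_000):
    """Monte Carlo estimate of the probability that all v interacting
    variables are grouped into one subcomponent."""
    hits = sum(interacting_in_one_group(n, m, v) for _ in range(trials))
    return hits / trials

n, m = 1000, 10  # illustrative: 1000 variables split into 10 subcomponents
for v in (2, 3, 4, 5):
    print(f"v={v}: simulated={estimate(n, m, v):.5f}, "
          f"approx m**(1-v)={m ** (1 - v):.5f}")

Running this shows the probability shrinking by roughly a factor of m with each additional interacting variable, which is the behaviour that motivates regrouping the variables more frequently.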
AAAI was pleased to present the AAAI-08 Workshop Program, held Sunday and Monday, July 13-14, in Chicago, Illinois, USA. The program included the following 15 workshops: Advancements in POMDP Solvers; AI Education Work...
Authors:
S. Chalup, F. Maire
Machine Learning Research Centre, Neurocomputing Laboratory, School of Computing Science, Faculty of Information Technology, Queensland University of Technology, Brisbane, QLD, Australia
This study empirically investigates variations of hill climbing algorithms for training artificial neural networks on the 5-bit parity classification task. The experiments compare the algorithms under different combinations of random number distributions, variations in step size, and changes to the neural networks' initial weight distributions. A hill climbing algorithm that uses inline search is proposed. In most experiments on the 5-bit parity task it performed better than simulated annealing and standard hill climbing.
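For illustration, here is a minimal Python sketch of standard hill climbing on the 5-bit parity task. The network architecture (5 tanh hidden units), the Gaussian perturbation, the step size, and the accept-if-not-worse rule are all assumptions chosen for the sketch; the paper's inline-search variant is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

# 5-bit parity dataset: all 32 input patterns, label = parity of the bits.
X = np.array([[(i >> b) & 1 for b in range(5)] for i in range(32)], dtype=float)
y = np.array([bin(i).count("1") % 2 for i in range(32)], dtype=float)

HIDDEN = 5
N_PARAMS = 5 * HIDDEN + HIDDEN + HIDDEN + 1  # W1, b1, W2, b2

def forward(w, X):
    """Tiny MLP with a tanh hidden layer and a sigmoid output;
    w is a flat parameter vector reshaped into the two layers."""
    W1 = w[:5 * HIDDEN].reshape(5, HIDDEN)
    b1 = w[5 * HIDDEN:6 * HIDDEN]
    W2 = w[6 * HIDDEN:7 * HIDDEN]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def error(w):
    """Sum-of-squares error over the full 32-row parity table."""
    return float(np.sum((forward(w, X) - y) ** 2))

def hill_climb(steps=20_000, step_size=0.5, init_scale=1.0):
    """Standard hill climbing: perturb every weight with Gaussian noise
    and keep the candidate only if the error does not increase."""
    w = rng.normal(0.0, init_scale, N_PARAMS)
    best = error(w)
    for _ in range(steps):
        cand = w + rng.normal(0.0, step_size, N_PARAMS)
        e = error(cand)
        if e <= best:
            w, best = cand, e
    return w, best

w, e = hill_climb()
acc = np.mean((forward(w, X) > 0.5) == (y > 0.5))
print(f"final SSE={e:.3f}, accuracy={acc:.2%}")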