Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics during the optimisation process from a set of low-level heuristics. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this article, we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the simplest HHs from the literature and rigorously analyse their performance for the LeadingOnes benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy, and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the "simple" Random Gradient HH so that success can be measured over a fixed period of time τ, instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove it has the best possible performance achievable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower-order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. In particular, with access to k low-level local search heuristics, it outperforms the best-possible algorithm using any subset of the k heuristics. Finally, we show that the advantages of GRG over Random …
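A minimal Python sketch of the Generalised Random Gradient mechanism described in the abstract, assuming low-level heuristics RLS_j that flip exactly j uniformly chosen bits of a bit string evaluated with LeadingOnes. The parameter names (n, k, tau, max_evals), the acceptance of equally fit offspring, and the exact restart rule after a successful period are illustrative assumptions, not the authors' formal definitions.

```python
# Sketch of a Generalised Random Gradient (GRG) selection hyper-heuristic on
# LeadingOnes. Low-level heuristics are RLS_j: flip exactly j distinct bits.
import random


def leading_ones(x):
    """Number of consecutive 1-bits counted from the left end of x."""
    count = 0
    for bit in x:
        if bit == 1:
            count += 1
        else:
            break
    return count


def rls_mutate(x, j):
    """Flip exactly j distinct, uniformly chosen bit positions of x."""
    y = x[:]
    for pos in random.sample(range(len(y)), j):
        y[pos] = 1 - y[pos]
    return y


def grg(n=100, k=2, tau=50, max_evals=100_000):
    """Keep the current heuristic as long as it succeeds within a period of tau steps."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = leading_ones(x)
    evals = 0
    while fx < n and evals < max_evals:
        j = random.randint(1, k)              # choose a low-level heuristic u.a.r.
        improved = True
        while improved and fx < n and evals < max_evals:
            improved = False
            for _ in range(tau):              # run the chosen heuristic for one period
                y = rls_mutate(x, j)
                fy = leading_ones(y)
                evals += 1
                if fy > fx:                   # success within the period:
                    x, fx = y, fy             # keep the heuristic for another period
                    improved = True
                    break
                if fy == fx:                  # accepting equally fit offspring is an
                    x = y                     # assumption (standard RLS-style acceptance)
                if evals >= max_evals:
                    break
    return x, fx, evals
```

With a short learning period (small tau) the hyper-heuristic behaves much like the simple Random Gradient HH, while larger periods give a promising neighbourhood size more opportunities to prove itself before being discarded.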
ISBN: (Print) 9783319694047; 9783319694030
The majority of scheduling metaheuristics use an indirect representation of solutions as a way to efficiently explore the search space. Thus, a crucial part of such metaheuristics is a "schedule generation scheme", a procedure translating the indirect solution representation into a schedule. The schedule generation scheme is used every time a new candidate solution needs to be evaluated. Being relatively slow, it consumes most of the running time of the metaheuristic, and thus its speed plays a significant role in the performance of the metaheuristic. Despite its importance, little attention has been paid in the literature to the efficient implementation of schedule generation schemes. We give a detailed description of the serial schedule generation scheme, including new improvements, and propose a new approach for speeding it up by using Bloom filters. The results are further strengthened by automated control of parameters. Finally, we employ online algorithm selection to dynamically choose which of the two implementations to use. This hybrid approach significantly outperforms the conventional implementation on a wide range of instances.
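As a hedged illustration of the general idea rather than the paper's actual design, the sketch below shows one plausible way a Bloom filter could act as a cheap pre-check inside a schedule generation scheme: occupied (resource, time) slots are recorded in the filter, and a "definitely absent" answer lets the scheme skip the slower exact feasibility check. The data layout, hash construction, and helper names are assumptions.

```python
# Sketch: Bloom filter as a fast negative pre-check for resource/time occupancy.
import hashlib


class BloomFilter:
    def __init__(self, num_bits=1 << 16, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item):
        # Derive num_hashes bit positions from one SHA-256 digest of the item.
        digest = hashlib.sha256(repr(item).encode()).digest()
        for i in range(self.num_hashes):
            chunk = digest[4 * i:4 * i + 4]
            yield int.from_bytes(chunk, "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # False => definitely absent; True => possibly present (false positives allowed).
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))


def slot_is_free(resource, t, bloom, exact_usage):
    """Fast path: a negative Bloom answer proves the slot free without the exact check."""
    if not bloom.might_contain((resource, t)):
        return True                              # no false negatives => certainly free
    return (resource, t) not in exact_usage      # fall back to the exact data structure


def occupy(resource, t, bloom, exact_usage):
    """Record a slot as occupied in both the filter and the exact structure."""
    bloom.add((resource, t))
    exact_usage.add((resource, t))
```

Because Bloom filters never return false negatives, the fast path is safe: only "maybe occupied" answers trigger the exact check, which is the property that makes such a pre-check attractive inside a hot loop like schedule generation.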