This paper is concerned with approximations for expensive function evaluation, the expensive functions arising in an engineering design context. The problem of reducing the computational cost of generating sufficient learning samples is addressed. Several approaches to using a priori knowledge to achieve computational economy are presented. In all of these, the results of a cheap model are treated as knowledge to be incorporated in the training process. Of the approaches described, we focus in particular on neural-network-based systems. This approach is then developed into a new knowledge-based kriging model, which is shown to be as accurate as the neural-network-based alternatives while being much easier to train. Examples from the domain of structural optimization demonstrate the approach.
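The knowledge-based idea above, treating a cheap model's output as prior knowledge when approximating the expensive one, can be sketched as residual (correction) kriging: fit a Gaussian process to the discrepancy between the two models and add it back to the cheap prediction. Everything below is illustrative; `cheap`, `expensive`, the RBF kernel, and the fixed hyperparameters are assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical cheap (coarse) and expensive (fine) models -- stand-ins only.
def cheap(x):
    return np.sin(x)

def expensive(x):
    return np.sin(x) + 0.3 * x  # fine model = cheap model plus a slow correction

# Simple RBF kernel with a fixed length-scale (no hyperparameter tuning here).
def k(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

# Train the GP on the *residual* expensive - cheap, which is much cheaper to
# learn than the expensive function itself when the cheap model is informative.
X = np.linspace(0.0, 5.0, 6)
y = expensive(X) - cheap(X)
K = k(X, X) + 1e-8 * np.eye(len(X))  # small jitter for numerical stability
alpha = np.linalg.solve(K, y)

def predict(xs):
    # Knowledge-based prediction: cheap model plus the kriged correction.
    return cheap(xs) + k(xs, X) @ alpha
```

Because the residual is smoother than the expensive function, a handful of expensive evaluations suffices; the kriging interpolates them exactly and the cheap model carries the rest of the shape.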
It is commonly believed that Bayesian optimization (BO) algorithms are highly efficient for optimizing numerically costly functions. However, BO is not often compared to widely different alternatives and is mostly tested on narrow sets of problems (multimodal, low-dimensional functions), which makes it difficult to assess where (or whether) it actually achieves state-of-the-art performance. Moreover, several aspects of the design of these algorithms vary across implementations without a clear recommendation emerging from current practice, and many of these design choices are not substantiated by authoritative test campaigns. This article reports a large investigation into the effects of common and less common design choices on the performance of (Gaussian-process-based) BO. The following features are considered: the size of the initial design of experiments, the functional form of the trend, the choice of kernel, the internal optimization strategy, input or output warping, and the use of the Gaussian process (GP) mean in conjunction with the classical Expected Improvement. The experiments are carried out with the established COCO (COmparing Continuous Optimizers) software. It is found that a small initial budget, a quadratic trend, and high-quality optimization of the acquisition criterion bring consistent progress. Using the GP mean as an occasional acquisition contributes only a negligible additional improvement. Warping degrades performance. The Matérn 5/2 kernel is a good default, but it may be surpassed by the exponential kernel on irregular functions. Overall, the best EGO variants are competitive with, or improve over, state-of-the-art algorithms in dimensions less than or equal to 5 for multimodal functions. The code developed for this study makes the new version (v2.1.1) of the R package DiceOptim available on CRAN. The structure of the experiments by function groups allows priorities for future research on Bayesian optimization to be defined.
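The classical Expected Improvement criterion discussed in the abstract has a well-known closed form under the GP's Gaussian posterior. A minimal standalone sketch for minimization (the function name and setting are illustrative; DiceOptim's actual implementation is in R):

```python
from math import erf, exp, pi, sqrt

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: E[max(f_best - Y, 0)] with Y ~ N(mu, sigma^2)."""
    if sigma <= 0.0:
        # Degenerate posterior: improvement is deterministic.
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF at z
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)  # standard normal PDF at z
    return (f_best - mu) * Phi + sigma * phi
```

The two terms trade off exploitation (predicted improvement weighted by its probability) against exploration (posterior uncertainty), which is why the quality of the inner optimization of this criterion matters so much in the study's findings.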
The use of response surface methods is well established in the global optimization of expensive functions, the response surface acting as a surrogate for the expensive-function objective. In structural design, however, the objective may vary little between the two models: it is more often the constraints that change between models of varying fidelity. Here, approaches are described whereby the coarse-model constraints are mapped so that the mapped constraints more faithfully approximate the fine-model constraints. The shape optimization of a simple structure demonstrates the approach.
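The constraint-mapping idea can be illustrated with a simple affine correction: fit a map from coarse-model constraint values to fine-model values at a few sample points, then use the mapped coarse constraint in place of the expensive fine one. `g_coarse`, `g_fine`, and the sample locations are hypothetical stand-ins, not the structures from the paper.

```python
import numpy as np

# Hypothetical coarse and fine constraint functions (assumptions).
def g_coarse(x):
    return x**2 - 1.0

def g_fine(x):
    return 1.1 * x**2 - 0.9

# Fit g_mapped(x) = a * g_coarse(x) + b by least squares at a handful of
# points where both models have been evaluated.
X = np.array([0.0, 0.5, 1.0, 1.5])
A = np.column_stack([g_coarse(X), np.ones_like(X)])
(a, b), *_ = np.linalg.lstsq(A, g_fine(X), rcond=None)

def g_mapped(x):
    # Cheap-to-evaluate approximation of the fine-model constraint.
    return a * g_coarse(x) + b
```

In this toy example the fine constraint happens to be an exact affine image of the coarse one, so the mapping is recovered exactly; in practice the fit is only approximate and is typically refreshed as new fine-model evaluations arrive.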