ISBN (print): 9798350352047; 9798350352030
Traditional optimization algorithms suffer performance decline in high-dimensional optimization problems, such as analog circuit design optimization. Adapting existing algorithms to parallel computing environments is a critical challenge. We therefore propose a parallel Trust Region Bayesian Optimization (TuRBO) algorithm. The algorithm operates on different trust regions in parallel and uses a Multi-Armed Bandit strategy for intelligent sampling to accelerate parameter optimization. Circuit experiments demonstrate the advantages of this algorithm: compared to Differential Evolution, Particle Swarm Optimization, Naive Bayesian, High-Dimensional Batch Bayesian Processing, and TuRBO, circuit performance improves by 3.7% to 98.2%. Compared to TuRBO, it achieves speedups of 1.19x to 1.31x in iteration count and 1.21x to 2.25x in algorithm runtime.
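The abstract describes a bandit strategy for deciding which trust region to sample next. A minimal sketch of that idea is shown below, using a UCB-style rule, which is one possible bandit policy and not necessarily the one used in the paper; the names TrustRegion, propose, and update are illustrative, not the paper's API.

```python
# Illustrative sketch: a UCB-style bandit choosing which trust region to sample next.
import math
import random

class TrustRegion:
    def __init__(self, center, length=0.4):
        self.center, self.length = center, length
        self.pulls, self.mean_reward = 0, 0.0

    def propose(self):
        # Sample a candidate uniformly inside the current trust-region box.
        return [c + self.length * (random.random() - 0.5) for c in self.center]

    def update(self, reward):
        self.pulls += 1
        self.mean_reward += (reward - self.mean_reward) / self.pulls

def ucb_select(regions, total_pulls, c=1.0):
    # Pick the trust region with the highest upper confidence bound.
    def score(r):
        if r.pulls == 0:
            return float("inf")
        return r.mean_reward + c * math.sqrt(math.log(total_pulls + 1) / r.pulls)
    return max(regions, key=score)

def objective(x):
    # Stand-in for an expensive circuit simulation (higher is better here).
    return -sum(v * v for v in x)

regions = [TrustRegion([random.uniform(-1, 1) for _ in range(5)]) for _ in range(4)]
for t in range(50):
    region = ucb_select(regions, total_pulls=t)
    candidate = region.propose()
    region.update(objective(candidate))
```

In a full TuRBO-style implementation each region would also maintain its own surrogate model and expand or shrink its side length based on successes and failures; the sketch only shows the bandit allocation over parallel regions.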
In practice, objective functions of real-time control systems can have multiple local minima or can change dramatically over the function space, making them hard to optimize. To efficiently optimize such systems, in this paper we develop a parallel global optimization framework that combines direct search methods with parallel Bayesian optimization. It consists of an iterative global and local search that searches broadly through the entire space for promising regions and then efficiently exploits each promising local region. We prove the asymptotic convergence properties of the proposed framework and conduct several numerical experiments to illustrate its empirical performance. We also provide a real-time control problem to illustrate the efficiency of our proposed framework.

Note to Practitioners: This work is motivated by a collision avoidance problem of vessels aided with onboard agent-based simulations. The simulation on one vessel can predict potential conflicts with other vessels on a pre-defined trajectory. In heavily congested regions the environment is highly dynamic, so it is difficult to find a much safer alternative trajectory if a collision is predicted on the current one. Moreover, for such real-time decisions, the control system should respond quickly to improve safety. The proposed metamodel-based algorithm is designed for quick decisions in such highly dynamic systems. The algorithm employs a decomposition of the response surface to better handle the multi-modal surface resulting from the highly dynamic environment. Specifically, it first looks at the large-scale trend globally (filtering out the many local fluctuations that may otherwise trap the algorithm) to locate potentially promising regions and then proceeds to these local regions for a more detailed local search. To make quick decisions, it uses fast direct search algorithms in the local search phase and applies a parallel search scheme to take advantage of the available computing power. Both the ...
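The global-then-local pattern described above can be sketched as follows. This is a simplified illustration under assumed names (objective, pattern_search): it uses random global sampling where the paper's framework would use parallel Bayesian optimization, and a basic compass/pattern search as the direct local search method.

```python
# Sketch of a global-then-local scheme: coarse global sampling proposes promising
# start points, then independent local pattern searches refine them in parallel.
import math
import random
from concurrent.futures import ProcessPoolExecutor

def objective(x):
    # Multi-modal toy function standing in for the real-time control objective.
    return sum(v * v - math.cos(6.0 * v) for v in x)

def global_candidates(n, dim):
    # Coarse global exploration; a surrogate model would rank candidates in practice.
    return [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]

def pattern_search(start, step=0.2, shrink=0.5, iters=30):
    # Compass/pattern search: probe +/- step along each axis, shrink step on failure.
    x, fx = list(start), objective(start)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                y = list(x)
                y[i] += d
                fy = objective(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= shrink
    return x, fx

if __name__ == "__main__":
    starts = sorted(global_candidates(16, dim=3), key=objective)[:4]  # promising regions
    with ProcessPoolExecutor() as pool:                               # parallel local searches
        results = list(pool.map(pattern_search, starts))
    best_x, best_f = min(results, key=lambda r: r[1])
    print(best_x, best_f)
```

Running the local searches in separate processes mirrors the parallel search scheme mentioned in the abstract, while the coarse first pass plays the role of the global trend estimate.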
ISBN (print): 9781665416214
Machine learning (ML) classifiers are widely adopted in the learning-enabled components of intelligent Cyber-Physical Systems (CPS) and in tools used for designing integrated circuits. Due to the impact of the choice of hyperparameters on an ML classifier's performance, hyperparameter tuning is a crucial step for application success. However, the practical adoption of existing hyperparameter tuning frameworks in production is hindered by several factors such as inflexible architecture, limitations of search algorithms, software dependencies, or closed-source nature. To enable state-of-the-art hyperparameter tuning in production, we propose the design of a lightweight library (1) having a flexible architecture facilitating usage on arbitrary systems, and (2) providing parallel optimization algorithms supporting mixed parameters (continuous, integer, and categorical), handling runtime failures, and allowing combined classifier selection and hyperparameter tuning (CASH). We present Mango, a black-box optimization library, to realize the proposed design. Mango has been used in production at Arm for more than 25 months and is available open source (https://github.com/ARM-software/mango). Our evaluation shows that Mango outperforms other black-box optimization libraries in tuning hyperparameters of ML classifiers with mixed parameter search spaces. We discuss two use cases of Mango deployed in production at Arm, highlighting its flexible architecture and ease of adoption. The first use case trains ML classifiers on a Dask cluster using Mango to find bugs in Arm's integrated circuit designs. As a second use case, we introduce an AutoML framework deployed on a Kubernetes cluster using Mango. Finally, we present a third use case of Mango in enabling neural architecture search (NAS) to transfer deep neural networks to TinyML platforms (microcontroller-class devices) used by CPS/IoT applications.
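For reference, a minimal Mango usage sketch with a mixed (integer plus categorical) search space, modeled on the library's public examples, might look like the following; exact API details and defaults should be verified against the repository linked above.

```python
# Sketch: tuning a KNN classifier with Mango over a mixed search space
# (integer n_neighbors plus a categorical algorithm choice).
from mango import Tuner, scheduler
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

param_space = dict(
    n_neighbors=range(1, 50),                             # integer parameter
    algorithm=['auto', 'ball_tree', 'kd_tree', 'brute'],  # categorical parameter
)

@scheduler.serial
def objective(**params):
    # Cross-validated accuracy for one hyperparameter configuration.
    X, y = load_breast_cancer(return_X_y=True)
    clf = KNeighborsClassifier(**params)
    return cross_val_score(clf, X, y, scoring='accuracy').mean()

tuner = Tuner(param_space, objective)
results = tuner.maximize()
print('best parameters:', results['best_params'])
print('best accuracy:', results['best_objective'])
```

Swapping the @scheduler.serial decorator for a parallel scheduler is how the library distributes evaluations, which is the property the abstract highlights for cluster deployments.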