Author affiliation: King's College London, Centre for Intelligent Information Processing Systems (CIIPS), Department of Engineering, King's Communications, Learning and Information Processing (KCLIP) Lab, London WC2R 2LS, England
Publication: IEEE Transactions on Signal Processing (IEEE Trans. Signal Process.)
Year/Volume: 2025, Vol. 73
Pages: 418-432
Indexed in:
Funding: European Union; Open Fellowships of the EPSRC [EP/W024101/1]; EPSRC [EP/X011852/1]
Keywords: Optimization; Costs; Closed box; Linear programming; Bayes methods; Entropy; Resource management; Multitasking; Information theory; Vectors; Bayesian optimization; multi-fidelity simulation; entropy search; knowledge transfer
Abstract: In many applications, ranging from logistics to engineering, a designer is faced with a sequence of optimization tasks for which the objectives are in the form of black-box functions that are costly to evaluate. Furthermore, higher-fidelity evaluations of the optimization objectives often entail a larger cost. Existing multi-fidelity black-box optimization strategies select candidate solutions and fidelity levels with the goal of maximizing the information about the optimal value or the optimal solution for the current task. Assuming that successive optimization tasks are related, this paper introduces a novel information-theoretic acquisition function that balances the need to acquire information about the current task with the goal of collecting information transferable to future tasks. The proposed method transfers across tasks distributions over parameters of a Gaussian process surrogate model by implementing particle-based variational Bayesian updates. Theoretical insights based on the analysis of the expected regret substantiate the benefits of acquiring transferable knowledge across tasks. Furthermore, experimental results across synthetic and real-world examples reveal that the proposed acquisition strategy that caters to future tasks can significantly improve the optimization efficiency as soon as a sufficient number of tasks is processed.
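The following is a minimal, self-contained sketch (not the authors' implementation) of the transfer idea described in the abstract: a posterior over Gaussian-process hyperparameters is represented by a small set of weighted particles, each task is optimized with a hyperparameter-marginalized acquisition function, and after each task the particle weights are updated from that task's data so that later, related tasks start from an informed surrogate. For tractability the sketch substitutes an upper-confidence-bound acquisition for the paper's information-theoretic criterion, a marginal-likelihood reweighting for the particle-based variational update, and a single fidelity level; all function names and the toy objectives are illustrative assumptions.

# Toy sketch: knowledge transfer across a sequence of Bayesian-optimization tasks
# via a weighted-particle approximation to the posterior over GP hyperparameters.
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X1, X2, lengthscale, variance):
    """Squared-exponential kernel matrix."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, Xs, theta, noise=1e-3):
    """GP posterior mean/std at test points Xs for hyperparameters theta = (lengthscale, variance)."""
    ls, var = theta
    K = rbf_kernel(X, X, ls, var) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs, ls, var)
    Kss = rbf_kernel(Xs, Xs, ls, var)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    std = np.sqrt(np.clip(np.diag(Kss - v.T @ v), 1e-12, None))
    return mean, std

def log_marginal_likelihood(X, y, theta, noise=1e-3):
    """Log evidence of the observed data under one hyperparameter particle."""
    ls, var = theta
    K = rbf_kernel(X, X, ls, var) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.sum(np.log(np.diag(L))) - 0.5 * len(y) * np.log(2 * np.pi)

def acquisition(X, y, Xs, particles, weights, beta=2.0):
    """Hyperparameter-marginalized UCB: weighted average of UCB over the particles."""
    score = np.zeros(len(Xs))
    for w, theta in zip(weights, particles):
        mean, std = gp_posterior(X, y, Xs, theta)
        score += w * (mean + beta * std)
    return score

def run_task(objective, particles, weights, n_init=3, budget=15, bounds=(0.0, 1.0)):
    """Optimize one task; return its best observed value and updated particle weights."""
    X = rng.uniform(*bounds, size=(n_init, 1))
    y = np.array([objective(x) for x in X[:, 0]])
    grid = np.linspace(*bounds, 200)[:, None]
    for _ in range(budget):
        x_next = grid[np.argmax(acquisition(X, y, grid, particles, weights))]
        X = np.vstack([X, x_next[None, :]])
        y = np.append(y, objective(x_next[0]))
    # Transfer step: reweight hyperparameter particles by this task's evidence,
    # so that later related tasks start from an informed surrogate-model prior.
    log_w = np.log(weights) + np.array([log_marginal_likelihood(X, y, th) for th in particles])
    new_w = np.exp(log_w - log_w.max())
    new_w = np.clip(new_w, 1e-12, None)
    return y.max(), new_w / new_w.sum()

# A sequence of related tasks: shifted versions of the same smooth objective.
def make_task(shift):
    return lambda x: np.exp(-30 * (x - 0.3 - shift) ** 2) + 0.5 * np.sin(6 * x)

# Particles over (lengthscale, signal variance); weights start uniform.
particles = [(ls, var) for ls in (0.05, 0.1, 0.3) for var in (0.5, 1.0, 2.0)]
weights = np.full(len(particles), 1.0 / len(particles))

for t, shift in enumerate([0.0, 0.05, 0.1, 0.15]):
    best, weights = run_task(make_task(shift), particles, weights)
    print(f"task {t}: best value {best:.3f}, effective particles {1.0 / np.sum(weights ** 2):.1f}")

Reweighting by marginal likelihood is only the simplest possible particle update; in the paper the particle-based variational scheme plays the analogous role of carrying surrogate-model knowledge from one task to the next.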