Problem-transformation-based optimization for large-scale multi-objective problems usually has excellent convergence yet insufficient diversity. Improving the distributivity of the search subspace in the decision sp...
Evolutionary computation algorithms have been extensively studied to locate multiple global peaks in multimodal optimization problems (MMOPs). However, multitask multimodal optimization, which aims to deal with multip...
Traditional path planning problems typically focus on a single objective, whereas in the field of Wargame, it is crucial to consider trade-offs and compromises between multiple objectives. To address this issue, w...
Inverse optimization involves inferring unknown parameters of an optimization problem from known solutions and is widely used in fields such as transportation, power systems, and healthcare. We study the contextual inverse optimization setting that utilizes additional contextual information to better predict the unknown problem parameters. We focus on contextual inverse linear programming (CILP), addressing the challenges posed by the non-differentiable nature of LPs. For a linear prediction model, we reduce CILP to a convex feasibility problem, allowing the use of standard algorithms such as alternating projections. The resulting algorithm for CILP is equipped with a linear convergence guarantee without additional assumptions such as degeneracy or interpolation. Next, we reduce CILP to empirical risk minimization (ERM) on a smooth, convex loss that satisfies the Polyak-Łojasiewicz condition. This reduction enables the use of scalable first-order optimization methods to solve large non-convex problems while maintaining theoretical guarantees in the convex setting. Subsequently, we use the reduction to ERM to quantify the generalization performance of the proposed algorithm on previously unseen instances. Finally, we experimentally validate our approach on synthetic and real-world problems, and demonstrate improved performance compared to existing methods. Copyright 2024 by the author(s).
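The abstract's reduction to a convex feasibility problem lets standard alternating projections apply. A minimal sketch of the alternating-projections idea is below, on two hypothetical toy sets in the plane (a unit disc and a halfspace), not the paper's actual parameter-space formulation: repeatedly projecting onto each set converges to a point in their intersection.

```python
import numpy as np

# Alternating projections between two convex sets in R^2:
# C1 = {x : ||x|| <= 1} (unit disc) and C2 = {x : x[0] + x[1] >= 1}
# (a halfspace). These sets are purely illustrative.

def project_disc(x, r=1.0):
    """Euclidean projection onto the disc of radius r."""
    n = np.linalg.norm(x)
    return x if n <= r else r * x / n

def project_halfspace(x, a=np.array([1.0, 1.0]), b=1.0):
    """Euclidean projection onto {x : a @ x >= b}."""
    gap = b - a @ x
    return x + (gap / (a @ a)) * a if gap > 0 else x

x = np.array([3.0, -2.0])          # arbitrary infeasible start
for _ in range(200):
    x = project_halfspace(project_disc(x))

# After enough iterations, x lies (approximately) in both sets.
assert np.linalg.norm(x) <= 1.0 + 1e-6
assert x[0] + x[1] >= 1.0 - 1e-6
```

Because the two sets here intersect with nonempty interior, the iterates converge linearly, which mirrors the kind of linear convergence guarantee the abstract claims for its feasibility formulation.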
This paper proposes a k-means algorithm enhanced with the Golden Eagle Optimizer (GEO) to address the challenges encountered by the traditional k-means algorithm, such as being highly sensitive to initial values ...
This study explores the effectiveness of the traditional Firefly algorithm (FA) in optimizing the Gaussian Kernel-based Fuzzy C-means clustering (GKFCM) algorithm by adjusting 'sigma' and 'm'. We compa...
In today's rapidly developing field of information technology, the optimization of neural network algorithms has become a shared focus of academia and industry. Especially when dealing with complex...
Leveraging fruitful intertask knowledge transfer, multitasking evolutionary algorithms (MTEAs) exhibit superior efficiency in handling multiple optimization tasks simultaneously. In practice, it is common that the ...
With the continuing growth of Internet users in rural China, the scale of online shopping is also expanding, and the development prospects of rural e-commerce are clear. The scattered rural population an...
High-dimensional problems have long been considered the Achilles' heel of Bayesian optimization. Spurred by the curse of dimensionality, a large collection of algorithms aim to make it more performant in this setting, commonly by imposing various simplifying assumptions on the objective. In this paper, we identify the degeneracies that make vanilla Bayesian optimization poorly suited to high-dimensional tasks, and further show how existing algorithms address these degeneracies through the lens of lowering the model complexity. Moreover, we propose an enhancement to the prior assumptions that are typical to vanilla Bayesian optimization, which reduces the complexity to manageable levels without imposing structural restrictions on the objective. Our modification - a simple scaling of the Gaussian process lengthscale prior with the dimensionality - reveals that standard Bayesian optimization works drastically better than previously thought in high dimensions, clearly outperforming existing state-of-the-art algorithms on multiple commonly considered real-world high-dimensional tasks. Copyright 2024 by the author(s)
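The described modification can be sketched as follows. The log-normal form and the hyperparameters here are assumptions for illustration, not necessarily the paper's exact prior; the point is that shifting the log-lengthscale location by log(sqrt(d)) makes the prior median lengthscale grow as sqrt(d) with the input dimensionality.

```python
import numpy as np

# Hypothetical dimensionality-scaled lengthscale prior: a log-normal
# whose location is shifted by 0.5 * log(d). mu0 and sigma are
# illustrative values, not the paper's.

def lengthscale_prior_samples(d, n=10_000, mu0=0.0, sigma=1.0):
    rng = np.random.default_rng(0)   # fixed seed for reproducibility
    # Median of LogNormal(mu, sigma) is exp(mu), so shifting the
    # location by 0.5 * log(d) scales the median by sqrt(d).
    return rng.lognormal(mean=mu0 + 0.5 * np.log(d), sigma=sigma, size=n)

low_d = np.median(lengthscale_prior_samples(2))     # d = 2
high_d = np.median(lengthscale_prior_samples(200))  # d = 200

# Prior median grows by sqrt(200 / 2) = 10x between the two settings.
assert abs(high_d / low_d - 10.0) < 1e-6
```

With identical random draws, the ratio of medians is exactly exp(0.5 * log(100)) = 10, illustrating how the prior pushes high-dimensional GP models toward longer, smoother lengthscales without restricting the objective's structure.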