In this paper, we consider an effective search method for large-scale combinatorial optimization problems that relies only on neighborhood operations, rather than on operations such as crossover in genetic algorithms. Th...
In robust combinatorial optimization, we would like to find a solution that performs well under all realizations of an uncertainty set of possible parameter values. How we model this uncertainty set has a decisive influence on the complexity of the corresponding robust problem. For this reason, budgeted uncertainty sets are often studied, as they enable us to decompose the robust problem into easier subproblems. We propose a variant of discrete budgeted uncertainty for cardinality-based constraints or objectives, where a weight vector is applied to the budget constraint. We show that while the adversarial problem can be solved in linear time, the robust problem becomes NP-hard and not approximable. We discuss different possibilities to model the robust problem and show experimentally that despite the hardness result, some models scale relatively well in the problem size.
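The claim that the adversarial problem is easy can be made concrete. As a minimal sketch (for the classic unweighted discrete budgeted uncertainty set, not the paper's weighted variant), evaluating the worst-case cost of a fixed solution amounts to letting the adversary raise the costs of the budgeted number of chosen items with the largest deviations:

```python
import heapq

def worst_case_cost(x, c, d, gamma):
    """Adversarial problem under classic discrete budgeted uncertainty.

    For a fixed 0/1 solution x, the adversary may raise at most `gamma`
    item costs from c[i] to c[i] + d[i]; the worst case simply takes the
    gamma largest deviations among the chosen items (a selection pass).
    """
    nominal = sum(ci for ci, xi in zip(c, x) if xi)
    chosen_devs = [di for di, xi in zip(d, x) if xi]
    return nominal + sum(heapq.nlargest(gamma, chosen_devs))

# Example: items 0, 1, 3 chosen; adversary hits the two largest deviations.
print(worst_case_cost([1, 1, 0, 1], [3, 1, 4, 2], [2, 5, 1, 3], gamma=2))  # 14
```

In the weighted variant studied in the abstract, the deviations consumed from the budget are scaled by a weight vector; the abstract states that this adversarial problem remains solvable in linear time, while the outer robust problem becomes NP-hard.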
We present a novel way to integrate flexible, context-dependent constraints into combinatorial optimization by leveraging Large Language Models (LLMs) alongside traditional algorithms. Although LLMs excel at interpret...
ISBN: (print) 9798400712456
In real-world recommendation scenarios, users engage with items through various types of behaviors. Leveraging diversified user behavior information for learning can enhance the recommendation of target behaviors (e.g., buy), as demonstrated by recent multi-behavior methods. The mainstream multi-behavior recommendation framework consists of two steps: fusion and prediction. Recent approaches utilize graph neural networks for multi-behavior fusion and employ multi-task learning paradigms for joint optimization in the prediction step, achieving significant success. However, these methods have limited perspectives on multi-behavior fusion, which leads to inaccurate capture of user behavior patterns in the fusion step. Moreover, when using multi-task learning for prediction, the relationship between the target task and auxiliary tasks is not sufficiently coordinated, resulting in negative information transfer. To address these problems, we propose a novel multi-behavior recommendation framework based on the combinatorial optimization perspective, named COPF. Specifically, we treat multi-behavior fusion as a combinatorial optimization problem, imposing different constraints at various stages of each behavior to restrict the solution space, thus significantly enhancing fusion efficiency (COGCN). In the prediction step, we improve both forward and backward propagation during the generation and aggregation of multiple experts to mitigate negative transfer caused by differences in both feature and label distributions (DFME). Comprehensive experiments on three real-world datasets indicate the superiority of COPF. Further analyses also validate the effectiveness of the COGCN and DFME modules. Our code is available at https://***/1918190/COPF.
Obtaining exact solutions to combinatorial optimization problems using classical computing is computationally expensive. The current tenet in the field is that quantum computers can address these problems more efficiently. While promising algorithms require fault-tolerant quantum hardware, variational algorithms have emerged as viable candidates for near-term devices. The success of these algorithms hinges on multiple factors, with the design of the Ansatz being of the utmost importance. It is known that popular approaches such as the quantum approximate optimization algorithm (QAOA) and quantum annealing suffer from adiabatic bottlenecks, which lead to either larger circuit depth or evolution time. On the other hand, the evolution time of imaginary-time evolution is bounded by the inverse energy gap of the Hamiltonian, which is constant for most noncritical physical systems. In this work we propose an imaginary Hamiltonian variational Ansatz (iHVA) inspired by quantum imaginary-time evolution to solve the MaxCut problem. We introduce a tree arrangement of the parametrized quantum gates, enabling the exact solution of arbitrary tree graphs using the one-round iHVA. For randomly generated D-regular graphs, we numerically demonstrate that the iHVA solves the MaxCut problem with a small constant number of rounds and sublinear depth, outperforming the QAOA, which requires rounds increasing with the graph size. Furthermore, our Ansatz solves the MaxCut problem exactly for graphs with up to 24 nodes and D≤5, whereas only approximate solutions can be derived by the classical near-optimal Goemans-Williamson algorithm. We validate our simulated results with hardware demonstrations on a graph with 67 nodes.
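For context on the problem the iHVA targets, a brute-force classical MaxCut solver makes the objective explicit (a sketch for small graphs only; it is exponential in the number of nodes, which is exactly why heuristics like Goemans-Williamson and quantum Ansätze are of interest):

```python
from itertools import product

def max_cut(n, edges):
    """Brute-force MaxCut: try every bipartition of n nodes and count
    the edges crossing it. Exponential in n; for illustration only."""
    best = 0
    for bits in product([0, 1], repeat=n):
        cut = sum(1 for u, v in edges if bits[u] != bits[v])
        best = max(best, cut)
    return best

print(max_cut(3, [(0, 1), (1, 2), (0, 2)]))          # triangle: 2
print(max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 4-cycle: 4
```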
In the quantum optimization paradigm, variational quantum algorithms face challenges with hardware-specific and instance-dependent parameter tuning, which can lead to computational inefficiencies. The promising potential of parameter transferability across problem instances with similar local structures has been demonstrated in the context of the quantum approximate optimization algorithm. In this paper we build on these advancements by extending the concept to annealing-based protocols, employing Bayesian optimization to design robust quasiadiabatic schedules. Our study reveals that, for maximum independent set problems on graph families with shared geometries, optimal parameters naturally concentrate, enabling efficient transferability between similar instances and from smaller to larger ones. Experimental results on the Orion Alpha platform validate the effectiveness of our approach, scaling to problems with up to 100 qubits. We apply this method to address a smart-charging optimization problem on a real dataset. These findings highlight a scalable, resource-efficient path for hybrid optimization strategies applicable in real-world scenarios.
Advances in quantum algorithms suggest a tentative scaling advantage on certain combinatorial optimization problems. Recent work, however, has also reinforced the idea that barren plateaus render variational algorithms ineffective on large Hilbert spaces. Hence, finding annealing protocols by variation ultimately appears to be difficult. Similarly, the adiabatic theorem fails on hard problem instances with first-order quantum phase transitions. Here we show how to use the spin coherent-state path integral to shape the geometry of quantum adiabatic evolution, leading to annealing protocols at polynomial overhead that provide orders-of-magnitude improvements in the probability to measure optimal solutions, relative to linear protocols. These improvements are not obtained on a controllable toy problem but on randomly generated hard instances (Sherrington-Kirkpatrick and maximum 2-satisfiability), making them generic and robust. Our method works for large systems and may thus be used to improve the performance of state-of-the-art quantum devices.
Quantum annealing (QA) is a novel type of analog computation that aims to use quantum mechanical fluctuations to search for optimal solutions of Ising problems. QA in the transverse Ising model, implemented on D-Wave quantum processing units, is available as a cloud computing resource. In this study we report concise benchmarks across three generations of D-Wave quantum annealers, comprising four different devices, on the NP-hard discrete combinatorial optimization problems of unweighted maximum clique and unweighted maximum cut on random graphs. The Ising, or equivalently quadratic unconstrained binary optimization (QUBO), formulation of these problems does not require auxiliary variables for order reduction, and their overall structure and weights are not highly variable, which makes them simple test cases for understanding the sampling capability of current D-Wave quantum annealers. All-to-all minor embeddings of size 52, with relatively uniform chain lengths, are used for a direct comparison across the Chimera, Pegasus, and Zephyr device topologies. A grid search over annealing times and minor-embedding chain strengths is performed to determine the level of reasonable performance for each device and problem type. The reported experiment metrics are approximation ratios for non-broken-chain samples, chain-break proportions, and time-to-solution for the maximum clique problem instances. How fairly the quantum annealers sample optimal maximum cliques, for instances that contain multiple maximum cliques, is quantified using the entropy of the measured ground-state distributions. The newest generation of quantum annealing hardware, with Zephyr connectivity, performed best overall with respect to approximation ratios and chain-break frequencies.
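The "no auxiliary variables" point can be illustrated with the standard QUBO for unweighted maximum clique, which is already quadratic: reward selecting nodes and penalize selecting any non-adjacent pair. This is a generic sketch of that textbook formulation, not the specific instances or chain-strength settings used in the benchmark:

```python
def max_clique_qubo(n, edges, penalty=2.0):
    """QUBO for unweighted maximum clique:
    minimize  -sum_i x_i + penalty * sum_{(i,j) not in E} x_i x_j.
    Returns {(i, j): coeff}; any penalty > 1 makes optima exactly the
    maximum cliques for the unweighted problem.
    """
    edge_set = {frozenset(e) for e in edges}
    Q = {(i, i): -1.0 for i in range(n)}        # linear reward per node
    for i in range(n):
        for j in range(i + 1, n):
            if frozenset((i, j)) not in edge_set:
                Q[(i, j)] = penalty             # penalize non-edges
    return Q

# A triangle is a clique: no non-edge penalties appear at all.
print(max_clique_qubo(3, [(0, 1), (1, 2), (0, 2)]))
```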
In this paper, a Q-learning based hyper-heuristic with clustering strategy (QHH/CS) is proposed for combinatorial optimization problems (COPs). In QHH/CS, a clustering strategy based on low-dimensional mapping method is devised to map initial population to a low-dimensional space, thus obtaining multiple subpopulations accounting for different search directions. To discover more promising search regions around each subpopulation, we propose a parallel Q-learning search mechanism composed of multiple search components, including multi-subpopulation Q-table, state extraction method, contribution-driven reward function, and deep mining local search actions. Relying on these search components, QHH/CS identifies the variations of the objective values of subpopulations to evaluate the solution features of COPs, whereby valuable information can be learned during the search process of the algorithm. To illustrate the effectiveness of QHH/CS, it is applied to solve the permutation flow-shop scheduling problem. We additionally assess QHH/CS through the well-known vehicle routing problem, which confirms the general search ability of the algorithm for COPs. Moreover, the convergence analysis of the QHH/CS algorithm is performed, providing theoretical guidance for the optimization process of the proposed algorithm. Results of experiments demonstrate that QHH/CS can find high-quality solutions to the solved problems.
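At the core of any Q-learning hyper-heuristic sits the tabular Q-learning update that scores low-level heuristics (e.g., local-search moves) by the reward they produce. The following is a minimal generic sketch of that update, not QHH/CS's multi-subpopulation Q-table, state extraction, or contribution-driven reward:

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step. Q maps (state, action) -> value;
    `actions` lists the low-level heuristics selectable in any state.
    Returns the updated Q[(s, a)]."""
    best_next = max((Q.get((s_next, b), 0.0) for b in actions), default=0.0)
    td_error = r + gamma * best_next - Q.get((s, a), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td_error
    return Q[(s, a)]

# First time the "swap" heuristic earns reward 1.0 from state 's0':
Q = {}
print(q_update(Q, "s0", "swap", 1.0, "s1", ["swap", "insert"]))  # 0.1
```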
Large language models (LLMs) have demonstrated remarkable capabilities across various domains, especially in text processing and generative tasks. Recent advancements in the reasoning capabilities of state-of-the-art ...