The single-vehicle cyclic inventory routing problem (SV-CIRP) consists of the repetitive distribution of a product from a single depot to a selected subset of customers. For each customer selected for replenishment, the supplier collects a corresponding fixed reward. The objective is to determine the subset of customers to replenish, the quantity of the product to be delivered to each customer, and the vehicle route, so that the resulting profit (the difference between the total reward and the total logistical cost) is maximised while stockouts are prevented at each of the selected customers. This problem often appears as a subproblem in many logistical problems. In this article, the SV-CIRP is formulated as a mixed-integer program with a nonlinear objective function. After a thorough analysis of the structure of the problem and its features, an exact algorithm for its solution is proposed; this exact algorithm requires only the solution of linear mixed-integer programs. Values obtained by a savings-based heuristic for this problem are compared to the optimal values on a set of test problems. In general, the gap can be as large as 25%, which justifies the effort of continuing to explore and develop exact and approximation algorithms for the SV-CIRP.
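A minimal sketch of the profit structure (notation mine; the paper's actual formulation may differ in its cost terms): with selected customer subset S, cycle time T, rewards r_i, demand rates d_i, holding-cost rates h_i, delivery quantities q_i, and tour cost c(S), a per-time-unit objective of the form

\[
\max_{S,\,T,\,q} \; \sum_{i \in S} r_i \;-\; \frac{c(S)}{T} \;-\; \sum_{i \in S} \frac{h_i\, q_i}{2}
\qquad \text{s.t.} \quad q_i \ge d_i\, T \;\; \forall i \in S
\]

captures the trade-off: the constraints q_i >= d_i T prevent stockouts between consecutive visits, while dividing the tour cost by the cycle time T is one source of the nonlinearity in the objective.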
In Schwerdfeger and Walter (2016), we proposed a subset-sum-based improvement procedure (denoted by LS) for solving the problem of minimizing the normalized sum of squared workload deviations on m identical machines. At its core, the algorithm builds on an exact procedure (denoted by WB3) for the case of m = 3 machines. Although this approach proved superior to existing procedures in terms of solution quality, we identified room for further improvement. Building upon our prior work, in this follow-up paper we propose an enhanced version of WB3 that yields a speed-up of several orders of magnitude, and we considerably improve the performance of LS on difficult instances where the ratio of the number of jobs to the number of machines is small. Moreover, we investigate a simple surrogate balancing measure that our algorithms can also optimize with only a slight modification. Results of a comprehensive computational study on a large set of benchmark as well as random test instances demonstrate the effectiveness of the improved algorithms. (C) 2018 Elsevier Ltd. All rights reserved.
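For concreteness, one natural reading of the balancing criterion (notation mine; see the papers for the exact normalization): given machine workloads L_1, ..., L_m, minimize

\[
\sum_{i=1}^{m} \bigl( L_i - \bar{L} \bigr)^2, \qquad \bar{L} = \frac{1}{m} \sum_{i=1}^{m} L_i .
\]

The subset-sum connection is then intuitive: improving a pair of machines amounts to re-splitting their combined job set into two subsets whose sums come as close as possible to the target loads.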
This paper discusses the problem of assigning streams of requests (clients) to related server machines with the objective of minimizing the sum of worst-case processing times. The completion time of a batch of requests is measured as the sum of the weights of the subset of clients that share a single machine. This problem can be seen as minimizing the sum of the total weights of the blocks of an m-partition, each multiplied by the cardinality of its block. We prove that this problem can be solved in polynomial time for any fixed m and present an efficient backward induction algorithm.
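To illustrate the combinatorial core, here is a sketch under an extra assumption of my own, namely that some optimal partition is consecutive in sorted weight order (the paper's backward induction algorithm is the authoritative method); all names are illustrative:

```python
from itertools import accumulate

def min_block_cost(weights, m):
    """Partition weights into at most m consecutive blocks (after sorting),
    minimizing the sum over blocks of |B| * w(B). Consecutiveness of an
    optimal partition is assumed here for illustration."""
    w = sorted(weights, reverse=True)
    n = len(w)
    pre = [0] + list(accumulate(w))        # prefix sums of sorted weights
    INF = float("inf")
    # dp[i][j]: best cost for the first i items split into exactly j blocks
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            for k in range(j - 1, i):      # last block = items k .. i-1
                cost = (i - k) * (pre[i] - pre[k])
                dp[i][j] = min(dp[i][j], dp[k][j - 1] + cost)
    return min(dp[n][1:m + 1])             # allow at most m blocks

print(min_block_cost([4, 3, 2, 1], 2))     # 20: blocks {4, 3} and {2, 1}
```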
We show that the CNF satisfiability problem can be solved in O*(1.2226^m) time, where m is the number of clauses in the formula, improving the known upper bounds O*(1.234^m) given by Yamamoto 15 years ago and O*(1.239^m) given by Hirsch 22 years ago. By using an amortized technique and careful case analysis, we avoid the bottlenecks of previous algorithms and obtain the improvement. (C) 2021 Elsevier B.V. All rights reserved.
An author's profile on Google Scholar consists of indexed articles and associated data, such as the number of citations and the H-index. The author is allowed to merge articles; this may affect the H-index. We analyze the (parameterized) computational complexity of maximizing the H-index using article merges. Herein, to model realistic manipulation scenarios, we define a compatibility graph whose edges correspond to plausible merges. Moreover, we consider several different measures for computing the citation count of a merged article. For the measure used by Google Scholar, we give an algorithm that maximizes the H-index in linear time if the compatibility graph has constant-size connected components. In contrast, if arbitrary articles may be merged (that is, for compatibility graphs that are cliques), then already increasing the H-index by one is NP-hard. Experiments on Google Scholar profiles of AI researchers show that the H-index can be manipulated substantially only if one merges articles with highly dissimilar titles. (C) 2016 Elsevier B.V. All rights reserved.
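For reference, a minimal sketch of the H-index and of how a merge can change it, assuming the merged article's citation count is simply the sum of its parts (the paper analyzes several measures, including the one used by Google Scholar); the profile numbers are made up:

```python
def h_index(citations):
    # Largest h such that at least h articles have >= h citations each.
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

profile = [5, 5, 5, 3, 2, 2]
print(h_index(profile))        # 3
merged = [5, 5, 5, 3, 2 + 2]   # merge the two 2-citation articles
print(h_index(merged))         # 4: the merge increased the H-index
```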
In the literature on combinatorial optimization problems, it is commonplace to find more than one mathematical model for the same problem. The significance of a model may be measured in terms of the efficiency of the solution algorithms that can be built upon it. The purpose of this article is to present a new network model for a well-known combinatorial optimization problem: the job shop scheduling problem. The new network model has a structure similar to that of the disjunctive graph model, except that it uses permutations of jobs as decision variables instead of the binary decision variables associated with the disjunctive arcs. To assess the significance of the new model, the performances of exact branch-and-bound implementations based on the new model and on the disjunctive graph model are compared. (C) 2008 Elsevier Inc. All rights reserved.
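As an operational sketch of what "permutations of jobs as decision variables" means (my own illustration, not the article's exact model): given one processing order per machine, the induced schedule and makespan can be computed by repeatedly dispatching operations that are next both in their job's route and in their machine's permutation:

```python
def makespan(proc, routes, perms):
    """proc[j][k]/routes[j][k]: time and machine of job j's k-th operation;
    perms[m]: the order in which machine m processes its jobs."""
    n = len(routes)
    job_ready = [0] * n                  # finish time of each job's last op
    job_next = [0] * n                   # next operation index per job
    mach_ready = {m: 0 for m in perms}
    mach_next = {m: 0 for m in perms}    # position in machine's permutation
    remaining = sum(len(r) for r in routes)
    while remaining:
        progressed = False
        for m, order in perms.items():
            if mach_next[m] == len(order):
                continue
            j = order[mach_next[m]]
            k = job_next[j]
            if k < len(routes[j]) and routes[j][k] == m:
                finish = max(job_ready[j], mach_ready[m]) + proc[j][k]
                job_ready[j] = mach_ready[m] = finish
                job_next[j] += 1
                mach_next[m] += 1
                remaining -= 1
                progressed = True
        if not progressed:
            raise ValueError("permutations induce a cycle (infeasible)")
    return max(job_ready)

# Job 0 visits machine 0 then 1; job 1 visits machine 1 then 0.
proc, routes = [[3, 2], [2, 4]], [[0, 1], [1, 0]]
print(makespan(proc, routes, {0: [0, 1], 1: [1, 0]}))   # 7
```

The cycle check is what makes the encoding complete: every acyclic choice of per-machine orders corresponds to exactly one semi-active schedule.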
The problem of finding the densest subgraph in a given graph has several real-world applications, particularly in areas such as social network analysis and protein and gene networks. Depending on the application, finding dense subgraphs can be used to determine regions of high importance, similar characteristics, or enhanced interaction. The densest subgraph extraction problem is fundamentally a nonlinear optimization problem. Nevertheless, it can be solved in polynomial time by an exact algorithm based on iteratively solving a series of max-flow subproblems. Despite this polynomial-time complexity, the computing time required by exact algorithms on very large graphs can be prohibitive. Thus, to approach graphs with millions of vertices and edges, one has to resort to heuristic algorithms. We provide an efficient implementation of a greedy heuristic from the literature that is extremely fast and has some nice theoretical properties. We also introduce a new heuristic algorithm that is built on top of the greedy and the exact methods. An extensive computational study is presented to evaluate the performance of the various algorithms on a benchmark composed of 86 instances taken from the literature and the real world. This analysis shows that the proposed heuristic algorithm is very effective on a large number of test instances, often providing either the optimal solution or a near-optimal solution within short computing times.
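The greedy referenced is plausibly the classic peeling procedure (Charikar, 2000), a 2-approximation when density is measured as |E(S)|/|S|; a compact sketch with my own identifiers:

```python
import heapq
from collections import defaultdict

def greedy_densest(edges):
    # Repeatedly remove a minimum-degree vertex; return the densest
    # intermediate subgraph (density = edges / vertices).
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    verts = list(adj)
    cur_n, cur_m = len(verts), len(edges)
    best_density, best_step = cur_m / cur_n, 0
    heap = [(len(adj[v]), v) for v in verts]
    heapq.heapify(heap)
    peeled, removed = [], set()
    while cur_n > 1:
        d, v = heapq.heappop(heap)
        if v in removed or d != len(adj[v]):
            continue                     # stale heap entry
        removed.add(v)
        peeled.append(v)
        cur_n, cur_m = cur_n - 1, cur_m - len(adj[v])
        for u in adj[v]:
            adj[u].discard(v)
            heapq.heappush(heap, (len(adj[u]), u))
        if cur_m / cur_n > best_density:
            best_density, best_step = cur_m / cur_n, len(peeled)
    cut = set(peeled[:best_step])
    return [v for v in verts if v not in cut], best_density

k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(greedy_densest(k4 + [(3, 4)]))   # ([0, 1, 2, 3], 1.5): K4 wins
```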
Centrality is a powerful concept for detecting influential components of a network, applicable to various areas such as the analysis of social, collaboration, and biological networks. In this study, we employ one of the well-known centrality measures, closeness centrality, to detect a group of pairwise connected members (a highly connected community known as a clique) with the highest accessibility to the entire network. To measure the accessibility of a clique, we use two metrics: the maximum distance and the total distance to the clique from the other members of the network. Hence, we deal with two variants of the most central clique problem, referred to as the maximum-distance-closeness-central clique and total-distance-closeness-central clique problems. We study the computational complexity of these two problems and prove that their decision versions are NP-complete. We also propose new mixed 0-1 integer programming formulations and the first combinatorial branch-and-bound algorithms to solve these problems exactly. We show that our algorithmic approaches offer an at least 83-fold speed-up on over 96% of the benchmark instances in comparison to existing approaches. (C) 2019 Elsevier B.V. All rights reserved.
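In symbols, under the natural reading that a vertex's distance to a clique is the distance to its nearest member (notation mine): with d(v, C) = \min_{u \in C} d(v, u), the maximum-distance variant seeks a clique C minimizing

\[
\max_{v \in V \setminus C} d(v, C),
\]

while the total-distance variant instead minimizes \(\sum_{v \in V \setminus C} d(v, C)\).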
The maximum independent set problem is one of the most important problems in graph algorithms and has been extensively studied in the line of research on the worst-case analysis of exact algorithms for NP-hard problems. In the weighted version, each vertex in the graph is associated with a weight, and the goal is to find an independent set of maximum total vertex weight. Many reduction rules for the unweighted version have been developed that can effectively shrink the input instance without losing optimality. However, reduction rules for the weighted version do not seem to have been systematically studied. In this paper, we design a series of reduction rules for the maximum weighted independent set problem and a fast exact algorithm based on them. Using the measure-and-conquer technique to analyze the algorithm, we show that it runs in O*(1.1443^((0.624x-0.872)n')) time and polynomial space, where n' is the number of vertices of degree at least 2 and x is the average degree of these vertices in the graph. When the average degree is small, our running-time bound beats previous results. For example, on graphs with minimum degree at least 2 and average degree at most 3.68, our bound is better than those of previous polynomial-space algorithms for graphs with maximum degree at most 4.
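To give the flavor of a weighted reduction rule, here is one classic rule (shown for illustration; the paper's series of rules may differ): if w(v) is at least the total weight of v's neighborhood, some maximum-weight independent set contains v, so v can be taken and its closed neighborhood N[v] deleted without losing optimality.

```python
def reduce_heavy_vertices(adj, w):
    # adj: dict vertex -> set of neighbors; w: dict vertex -> weight.
    # Repeatedly take any v with w[v] >= sum of its neighbors' weights
    # and delete N[v] from the graph.
    taken, changed = [], True
    while changed:
        changed = False
        for v in list(adj):
            if v in adj and w[v] >= sum(w[u] for u in adj[v]):
                taken.append(v)
                for u in list(adj[v]) + [v]:
                    for x in adj.pop(u, set()):
                        if x in adj:
                            adj[x].discard(u)
                changed = True
    return taken   # vertices safely forced into an optimal solution

# Path a-b-c with weights 3, 2, 3: both endpoints are "heavy".
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(reduce_heavy_vertices(adj, {'a': 3, 'b': 2, 'c': 3}))   # ['a', 'c']
```

The exchange argument is one sentence: any independent set that uses vertices of N(v) can swap them for v without decreasing the total weight.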
Container loading is a pivotal function for operating supply chains efficiently. Underperformance results in unnecessary costs (e.g. the cost of additional containers to be shipped) and in unsatisfactory customer service (e.g. violation of deadlines agreed to or set by clients). Thus, it is not surprising that container loading problems have been dealt with frequently in the operations research literature. It has been claimed, though, that the proposed approaches are of limited practical value since they do not pay enough attention to constraints encountered in practice. In this paper, a review of the state of the art in the field of container loading will be given. We will identify factors which, from a practical point of view, need to be considered when dealing with container loading problems, and we will analyze whether and how these factors are represented in methods for the solution of such problems. Modeling approaches, as well as exact and heuristic algorithms, will be reviewed. This will allow for assessing the practical relevance of the research which has been carried out in the field. We will also mention several issues which have not been dealt with satisfactorily so far and give an outlook on future research opportunities. (C) 2012 Elsevier B.V. All rights reserved.