The Operations Research EXperiment Framework for Java (OREX-J) is an object-oriented software framework that helps users to design, implement and conduct computational experiments for the analysis of optimization algorithms. As it was designed in a generic way using object-oriented programming and design patterns, it is not limited to a specific class of optimization problems and algorithms. The purpose of the framework is to reduce the amount of manual labor required for conducting and evaluating computational experiments: OREX-J provides a generic, extensible data model for storing detailed data on an experimental design and its results. Those data can include algorithm parameters, test instance generator settings, the instances themselves, run-times, algorithm logs, solution properties, etc. All data are automatically saved in a relational database (MySQL, http://***/) by means of the object-relational mapping library Hibernate (http://***/). This simplifies the task of analyzing computational results, as even complex analyses can be performed using comparatively simple Structured Query Language (SQL) queries. Also, OREX-J simplifies the comparison of algorithms developed by different researchers: Instead of integrating other researchers' algorithms into proprietary test beds, researchers could use OREX-J as a common experiment framework. This paper describes the architecture and features of OREX-J and exemplifies its usage in a case study. OREX-J has already been used for experiments in three different areas: algorithms and reformulations for mixed-integer programming models for dynamic lot-sizing with substitutions, a simulation-based optimization approach for a stochastic multi-location inventory control model, and an optimization model for software supplier selection and product portfolio planning.
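OREX-J itself is a Java framework that persists experiment data to MySQL via Hibernate; the following is only a minimal Python/sqlite3 sketch of the underlying idea, namely storing every run in a relational table and analyzing results with plain SQL instead of ad hoc log parsing. The table and column names are invented for illustration and are not OREX-J's actual schema.

```python
import sqlite3

# Minimal sketch of the relational-storage idea behind OREX-J (not its
# real data model): one row per experiment run, analysis via SQL.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE run (
        algorithm   TEXT,
        instance    TEXT,
        runtime_sec REAL,
        objective   REAL
    )
""")
runs = [
    ("branch-and-bound", "inst-01", 12.4, 100.0),
    ("branch-and-bound", "inst-02", 30.1, 250.0),
    ("greedy",           "inst-01",  0.2, 110.0),
    ("greedy",           "inst-02",  0.5, 270.0),
]
conn.executemany("INSERT INTO run VALUES (?, ?, ?, ?)", runs)

# A typical analysis query: mean runtime per algorithm.
for algo, mean_rt in conn.execute(
        "SELECT algorithm, AVG(runtime_sec) FROM run "
        "GROUP BY algorithm ORDER BY algorithm"):
    print(algo, round(mean_rt, 2))
```

Once results live in a database, comparisons across algorithms, instances, or parameter settings reduce to GROUP BY queries like the one above, which is the labor-saving point the abstract makes.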
In this paper we propose the first experimental study of the fully dynamic single-source shortest-paths problem on directed graphs with positive real edge weights. In particular, we perform an experimental analysis of...
Motivated by the problem in computational biology of reconstructing the series of chromosome inversions by which one organism evolved from another, we consider the problem of computing the shortest series of reversals that transforms one permutation into another. The permutations describe the order of genes on corresponding chromosomes, and a reversal takes an arbitrary substring of elements and reverses their order. For this problem, we develop two algorithms: a greedy approximation algorithm that finds a solution provably close to optimal in O(n^2) time and O(n) space for n-element permutations, and a branch-and-bound exact algorithm that finds an optimal solution in O(mL(n, n)) time and O(n^2) space, where m is the size of the branch-and-bound search tree and L(n, n) is the time to solve a linear program with n variables and n constraints. The greedy algorithm is the first to come within a constant factor of the optimum; it guarantees a solution that uses no more than twice the minimum number of reversals. The lower and upper bounds of the branch-and-bound algorithm are a novel application of maximum-weight matchings, shortest paths, and linear programming. In a series of experiments, we study the performance of an implementation on random permutations, and on permutations generated by random reversals. For permutations differing by k random reversals, we find that the average upper bound on reversal distance estimates k to within one reversal for k < n/2 and n ≤ 100. For the difficult case of random permutations, we find that the average difference between the upper and lower bounds is less than three reversals for n ≤ 50. Due to the tightness of these bounds, we can solve, to optimality, problems on 30 elements in a few minutes of computer time. This approaches the scale of mitochondrial genomes.
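The paper's bounds rest on structural properties of permutations such as breakpoints (adjacent elements that are not consecutive integers): since one reversal can remove at most two breakpoints, ceil(b/2) is a lower bound on reversal distance. The sketch below is not the paper's greedy or branch-and-bound algorithm; it only computes the breakpoint lower bound and, for tiny permutations, the exact reversal distance by brute-force breadth-first search, so the two can be compared.

```python
from collections import deque
from itertools import combinations
from math import ceil

def breakpoints(p):
    """Adjacent pairs that are not consecutive integers, with
    sentinels 0 and n+1 framing the permutation."""
    q = (0,) + tuple(p) + (len(p) + 1,)
    return sum(1 for i in range(len(q) - 1) if abs(q[i + 1] - q[i]) != 1)

def exact_reversal_distance(p):
    """BFS over permutations: feasible only for tiny n, but it gives
    the true minimum number of reversals needed to sort p."""
    start, goal = tuple(p), tuple(sorted(p))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            return dist[cur]
        for i, j in combinations(range(len(cur) + 1), 2):
            nxt = cur[:i] + cur[i:j][::-1] + cur[j:]
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)
    return 0

p = [3, 1, 2]
b = breakpoints(p)               # breakpoints at 0|3, 3|1 and 2|4
lb = ceil(b / 2)                 # each reversal removes at most 2 breakpoints
d = exact_reversal_distance(p)
print(b, lb, d)
```

On this example the lower bound is tight (lb = d = 2), echoing the abstract's observation that tight bounds are what make 30-element instances solvable to optimality.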
Recent research on the CVRP is being slowed down by the lack of a good set of benchmark instances. The existing sets suffer from at least one of the following drawbacks: (i) they have become too easy for current algorithms; (ii) they are too artificial; (iii) they are too homogeneous, not covering the wide range of characteristics found in real applications. We propose a new set of 100 instances ranging from 100 to 1000 customers, designed to provide a more comprehensive and balanced experimental setting. Moreover, the same generating scheme was also used to provide an extended benchmark of 600 instances. In addition to having a greater discriminating ability to identify "which algorithm is better", these new benchmarks should also allow for a deeper statistical analysis of the performance of an algorithm. In particular, they will enable one to investigate how the characteristics of an instance affect performance. We report such an analysis on state-of-the-art exact and heuristic methods. (C) 2016 Elsevier B.V. All rights reserved.
A GRASP (greedy randomized adaptive search procedure) is a multi-start metaheuristic for combinatorial optimization. We study the probability distributions of solution time to a sub-optimal target value in five GRASPs that have appeared in the literature and for which source code is available. The distributions are estimated from 12,000 independent runs of each heuristic. Standard methodology for graphical analysis is used to compare the empirical and theoretical distributions and estimate the parameters of the distributions. We conclude that the solution time to a sub-optimal target value fits a two-parameter exponential distribution. Hence, it is possible to approximately achieve linear speed-up by implementing GRASP in parallel.
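The link between a shifted-exponential time-to-target distribution and near-linear parallel speedup can be checked by simulation: if p independent GRASP processes race to the target, the parallel time is the minimum of p draws, and for a pure exponential the expected minimum is 1/p of the expected single-process time. The shift and rate values below are arbitrary illustration parameters, not fitted to any of the five GRASPs in the study.

```python
import random

random.seed(7)

def time_to_target(shift=0.05, rate=1.0):
    """One draw from a two-parameter (shifted) exponential distribution,
    the model the study fits to GRASP time-to-target data."""
    return shift + random.expovariate(rate)

def parallel_time(p):
    """With p independent GRASP processes, the first to reach the
    target stops the search: parallel time is the minimum draw."""
    return min(time_to_target() for _ in range(p))

samples = 20000
mean_seq = sum(time_to_target() for _ in range(samples)) / samples
mean_par8 = sum(parallel_time(8) for _ in range(samples)) / samples
speedup = mean_seq / mean_par8
print(round(speedup, 2))
```

With 8 processes the estimated speedup comes out close to, but a little under, 8: the nonzero shift is exactly what makes the speedup "approximately" rather than perfectly linear.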
Designing routing schedules is a pivotal aspect of smart delivery systems. The field has therefore been blooming for decades, and numerous algorithms have been proposed for various formulations of rich vehicle routing problems. There is, however, an important gap in the state of the art: the lack of an established and widely adopted approach to the thorough verification and validation of such algorithms in practical scenarios. We tackle this issue and propose a comprehensive validation approach that sheds more light on the functional and non-functional abilities of the solvers. Additionally, we propose novel similarity metrics that measure the distance between routing schedules and can be used to verify the convergence abilities of randomized techniques. To reflect practical aspects of intelligent transportation systems, we introduce an algorithm for elaborating solvable benchmark instances for any vehicle routing formulation, alongside a set of quality metrics that help quantify real-life characteristics of delivery systems, such as their profitability. The experiments prove the flexibility of our approach by applying it to the NP-hard pickup and delivery problem with time windows, and present qualitative, quantitative, and statistical analysis scenarios that help in understanding the capabilities of the investigated techniques. We believe that our efforts are a step toward a more critical and consistent evaluation of emerging vehicle routing (and other) solvers, and will allow the community to compare them more easily, and thus ultimately to focus on the most promising research avenues, determined in a quantifiable and traceable manner.
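The abstract does not specify the paper's similarity metrics, so the following is just one plausible schedule-distance a reader might use for the same purpose: the Jaccard distance between the sets of directed arcs that two schedules traverse. The depot convention and the metric itself are assumptions for illustration, not the paper's definitions.

```python
def arcs(schedule, depot=0):
    """Directed arcs traversed by a routing schedule, where a schedule
    is a list of routes and each route is a customer sequence that
    implicitly starts and ends at the depot."""
    a = set()
    for route in schedule:
        stops = [depot] + list(route) + [depot]
        a.update(zip(stops, stops[1:]))
    return a

def arc_jaccard_distance(s1, s2):
    """1 - |A1 ∩ A2| / |A1 ∪ A2|: 0 for identical schedules, 1 when
    they share no arc. One plausible distance between schedules; the
    paper's own metrics may differ."""
    a1, a2 = arcs(s1), arcs(s2)
    return 1.0 - len(a1 & a2) / len(a1 | a2)

s1 = [[1, 2, 3], [4, 5]]
s2 = [[1, 2, 3], [5, 4]]   # second route visited in reverse order
print(arc_jaccard_distance(s1, s1))  # identical schedules -> 0.0
print(round(arc_jaccard_distance(s1, s2), 3))
```

A metric like this supports the convergence checks the abstract mentions: if a randomized solver's successive schedules drift toward pairwise distance near zero, the search has stabilized.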
This paper presents an experimental comparison of a number of different algorithms for computing the Delaunay triangulation. The algorithms examined are: Dwyer's divide and conquer algorithm, Fortune's sweepline algorithm, several versions of the incremental algorithm (including one by Ohya, Iri and Murota, a new bucketing-based algorithm described in this paper, and Devillers's version of a Delaunay-tree based algorithm that appears in LEDA), an algorithm that incrementally adds a correct Delaunay triangle adjacent to a current triangle in a manner similar to gift wrapping algorithms for convex hulls, and Barber's convex hull based algorithm. Most of the algorithms examined are designed for good performance on uniformly distributed sites. However, we also test implementations of these algorithms on a number of non-uniform distributions. The experiments go beyond measuring total running time, which tends to be machine-dependent. We also analyze the major high-level primitives that the algorithms use and perform an experimental analysis of how often implementations of these algorithms perform each operation. (C) 1997 Elsevier Science B.V.
Data generation for computational testing of optimization algorithms is a key topic in experimental algorithmics. Recently, concern has arisen that many published computational experiments are inadequate with respect to the way test instances are generated. In this paper we suggest a new research direction that might be useful to cope with the possible limitations of data generation. The basic idea is to select a finite set of instances which 'represent' the whole set of instances. We propose a measure of the representativeness of an instance, which we call ε-representativeness: for a minimization problem, an instance x_ε is ε-representative of another instance x if a (1 + ε)-approximate solution to x can be obtained by solving x_ε. Focusing on a strongly NP-hard single machine scheduling problem, we show how to map the infinite set of all instances into a finite set of ε-representative core instances. We propose to use this finite set of ε-representative core instances to test heuristics. (c) 2004 Elsevier B.V. All rights reserved.
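The mapping to core instances can be illustrated with a toy version of the idea on a different, polynomially solvable single machine problem (minimizing total weighted completion time), not the strongly NP-hard problem the paper studies: rounding each processing time up to the nearest power of (1 + ε) yields a core instance x_ε, and an order that is optimal for x_ε is (1 + ε)-approximate for the original x. All numbers below are made-up illustration data.

```python
from itertools import permutations

def cost(order, p, w):
    """Total weighted completion time of a job sequence on one machine."""
    t, total = 0.0, 0.0
    for j in order:
        t += p[j]
        total += w[j] * t
    return total

def best_order(p, w):
    """Brute-force optimal sequence (fine for a handful of jobs)."""
    return min(permutations(range(len(p))), key=lambda o: cost(o, p, w))

def round_up(x, eps):
    """Smallest power of (1 + eps) that is >= x: the 'core' value."""
    k = 0
    while (1 + eps) ** k < x:
        k += 1
    return (1 + eps) ** k

eps = 0.25
p = [3.1, 7.9, 1.2, 5.0]
w = [2.0, 1.0, 5.0, 3.0]

p_core = [round_up(x, eps) for x in p]   # the ε-representative instance x_ε
order_core = best_order(p_core, w)       # solve x_ε to optimality
ratio = cost(order_core, p, w) / cost(best_order(p, w), p, w)
print(ratio <= 1 + eps)
```

The guarantee follows because each rounded time lies in [p_j, (1 + ε)p_j], so every schedule's cost on x_ε is within a (1 + ε) factor of its cost on x; since only finitely many power-of-(1 + ε) profiles exist in a bounded range, the infinite instance space collapses to a finite core set.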
We consider the problem of computing a global alignment between two or more sequences subject to varying mismatch and indel penalties. We prove a tight 3(n/2π)^{2/3} + O(n^{1/3} log n) bound on the worst-case number of distinct optimum alignments for two sequences of length n as the parameters are varied. This refines an O(n^{2/3}) upper bound by Gusfield et al., answering a question posed by Pevzner and Waterman. Our lower bound requires an unbounded alphabet. For strings over a binary alphabet, we prove an Ω(n^{1/2}) lower bound. For the parametric global alignment of k ≥ 2 sequences under sum-of-pairs scoring, we prove a 3((k choose 2)·n/2π)^{2/3} + O(k^{2/3} n^{1/3} log n) upper bound on the number of distinct optimality regions and an Ω(n^{2/3}) lower bound, partially answering a problem of Pevzner. Based on experimental evidence, we conjecture that for two random sequences, the number of optimality regions is approximately √n with high probability. (C) 2002 Elsevier Science B.V. All rights reserved.
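What "distinct optimum alignments as the parameters are varied" means can be made concrete by brute force on tiny strings: every global alignment is summarized by its (matches, mismatches, indels) counts, its cost is linear in the mismatch and indel penalties, and an optimality region is a range of penalties over which one summary stays optimal. This enumeration is only a definition-level illustration with a hypothetical scoring convention (matches free), not the paper's counting technique.

```python
def alignments(s, t):
    """All global alignments of s and t, each summarized as a
    (matches, mismatches, indels) triple; duplicates are kept, so the
    list length equals the total number of alignments."""
    if not s and not t:
        return [(0, 0, 0)]
    out = []
    if s and t:  # first column pairs s[0] with t[0]
        m = 1 if s[0] == t[0] else 0
        out += [(a + m, b + (1 - m), c) for a, b, c in alignments(s[1:], t[1:])]
    if s:        # first column deletes s[0]
        out += [(a, b, c + 1) for a, b, c in alignments(s[1:], t)]
    if t:        # first column inserts t[0]
        out += [(a, b, c + 1) for a, b, c in alignments(s, t[1:])]
    return out

def optimal_summary(summaries, alpha, beta=1.0):
    """Cheapest summary when a mismatch costs alpha and an indel beta
    (matches are free); ties broken arbitrarily."""
    return min(summaries, key=lambda s: alpha * s[1] + beta * s[2])

sums = alignments("ab", "ba")
# Sample the mismatch penalty on either side of its breakpoint:
regions = {optimal_summary(sums, alpha) for alpha in (0.25, 4.0)}
print(len(sums), len(regions))
```

For "ab" vs "ba" there are 13 alignments (a Delannoy number), but only two optimality regions along this penalty axis: cheap mismatches favor the double-substitution alignment, expensive ones favor matching a single letter and paying two indels.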
ISBN: (Print) 9783642131929
Mobile users are roaming in a zone of cells in a cellular network system. The probabilities of each user residing in each cell are known, and all probabilities are independent. The task is to find any one, or all, of the users, by paging the cells in a predetermined number of rounds. In each round, any subset of the cells can be paged. When a cell is paged, the list of users in it is returned. The paging process terminates when the required user(s) are found. The objective is to minimize the expected number of paged cells. Finding any one user is known as the yellow page problem, and finding all users is known as the conference call problem. The conference call problem has been proved NP-hard, and a polynomial time approximation scheme exists. We study both problems in a unified framework. We introduce three methods for computing the paging cost. We give a hierarchical classification of users. For certain classes of users, we either provide polynomial time optimal solutions, or provide relatively efficient exponential time solutions. We design a family of twelve fast greedy heuristics that generate competitive paging strategies. We implement optimal algorithms and non-optimal heuristics. We test the performance of our greedy heuristics on many patterns of input data with different parameters. We select the best heuristics for both problems based on our simulation. We evaluate their performances on randomly generated Zipf and uniform data and on real user data.
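The objective can be made concrete for the single-user (yellow page) case: cells assigned to a round are paged only if every earlier round failed, so the expected cost is a probability-weighted sum of round sizes. The brute-force search below, over all assignments of cells to rounds, is only a definition-level sketch for tiny zones; the paper's optimal algorithms and greedy heuristics are far more refined.

```python
from itertools import product

def expected_cost(probs, assignment, rounds):
    """Expected number of paged cells for one user: cells in round r
    are paged only if all earlier rounds failed to find the user."""
    p_not_found = 1.0
    total = 0.0
    for r in range(rounds):
        cells = [i for i, a in enumerate(assignment) if a == r]
        if not cells:
            continue
        total += p_not_found * len(cells)
        p_not_found -= sum(probs[i] for i in cells)
    return total

def best_strategy(probs, rounds):
    """Brute force over all cell-to-round assignments (tiny inputs only)."""
    best = None
    for assignment in product(range(rounds), repeat=len(probs)):
        c = expected_cost(probs, assignment, rounds)
        if best is None or c < best[0]:
            best = (c, assignment)
    return best

cost, assignment = best_strategy([0.5, 0.3, 0.2], rounds=2)
print(round(cost, 2), assignment)
```

For residence probabilities (0.5, 0.3, 0.2) and two rounds, the optimum pages the most likely cell alone first (expected cost 1 + 0.5·2 = 2.0), illustrating the trade-off between few rounds (low delay) and few paged cells (low cost) that drives both problems.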