We illustrate the use of the scientific method in algorithmics by surveying recent developments in route planning on road networks. Several speed-up techniques proposed in the past decade are intuitive, elegant, and very efficient in practice, but lacked a theoretical explanation of their observed performance. By introducing a formal definition of road networks, recent theoretical work closed this gap. It also predicted that a previously untested algorithm should be even faster, which was confirmed by a subsequent implementation.
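The speed-up techniques surveyed here accelerate exactly the point-to-point queries that a plain Dijkstra search answers; the following minimal sketch of that baseline (the toy graph and names are illustrative, not from the paper) shows the query type being accelerated. The surveyed techniques add preprocessing so that such queries run much faster than this baseline on large road networks.

```python
import heapq

def dijkstra(graph, source, target):
    """Textbook Dijkstra on an adjacency list: node -> [(neighbor, weight)]."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

# Toy "road network": travel times on directed road segments.
roads = {
    "a": [("b", 4), ("c", 2)],
    "c": [("b", 1), ("d", 5)],
    "b": [("d", 1)],
}
print(dijkstra(roads, "a", "d"))  # 4, via a -> c -> b -> d
```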
In the late eighties and early nineties, three major exciting new developments (and some ramifications) in the computation of minimum capacity cuts occurred, and these developments motivated us to evaluate the old and new methods experimentally. We provide a brief overview of the most important algorithms for the minimum capacity cut problem and compare these methods both on problem instances from the literature and on problem instances originating from the solution of the traveling salesman problem by branch-and-cut.
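One well-known representative of the minimum-cut algorithms from that era is the Stoer-Wagner algorithm; whether it is among the developments the abstract alludes to is not stated, so the sketch below is purely illustrative of the problem being benchmarked.

```python
def stoer_wagner(weights):
    """Global minimum cut of an undirected graph (Stoer-Wagner).
    weights: symmetric dict-of-dicts, weights[u][v] = capacity of edge {u, v}."""
    g = {u: dict(nbrs) for u, nbrs in weights.items()}  # mutable working copy
    vertices = list(g)
    best = float("inf")
    while len(vertices) > 1:
        # Maximum-adjacency ordering from an arbitrary start vertex.
        start = vertices[0]
        weight_to = {v: g[start].get(v, 0) for v in vertices[1:]}
        order = [start]
        cut_of_phase = 0
        while weight_to:
            z = max(weight_to, key=weight_to.get)  # most tightly connected
            order.append(z)
            cut_of_phase = weight_to.pop(z)
            for v in weight_to:
                weight_to[v] += g[z].get(v, 0)
        best = min(best, cut_of_phase)  # weight of cut isolating the last vertex
        s, t = order[-2], order[-1]     # merge t into s for the next phase
        for v, w in g[t].items():
            if v != s:
                g[s][v] = g[s].get(v, 0) + w
                g[v][s] = g[v].get(s, 0) + w
                del g[v][t]
        g[s].pop(t, None)
        del g[t]
        vertices.remove(t)
    return best

# Cycle a-b-c-d-a with capacities 3, 1, 3, 1: the cheapest cut severs the two 1-edges.
square = {
    "a": {"b": 3, "d": 1},
    "b": {"a": 3, "c": 1},
    "c": {"b": 1, "d": 3},
    "d": {"c": 3, "a": 1},
}
print(stoer_wagner(square))  # 2
```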
PHAT is an open-source C++ library for the computation of persistent homology by matrix reduction, targeted towards developers of software for topological data analysis. We aim for a simple generic design that decouples algorithms from data structures without sacrificing efficiency or user-friendliness. We provide numerous different reduction strategies as well as data types to store and manipulate the boundary matrix. We compare the different combinations through extensive experimental evaluation and identify optimization techniques that work well in practical situations. We also compare our software with various other publicly available libraries for persistent homology. (C) 2016 Elsevier Ltd. All rights reserved.
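The matrix reduction the library is built around is, in its textbook form, a left-to-right column reduction over Z/2. The sketch below shows that standard algorithm in plain Python; it is not PHAT's C++ interface, and PHAT's reduction strategies are optimized variants of this baseline.

```python
def reduce_boundary_matrix(columns):
    """Standard persistence reduction over Z/2: process columns left to right,
    adding previously reduced columns until each column's lowest nonzero row
    ("low") is unique. columns[j] = set of nonzero row indices of column j."""
    cols = [set(c) for c in columns]
    low_to_col = {}   # lowest row index -> column that owns it
    pairs = []        # (birth, death) persistence pairs
    for j, col in enumerate(cols):
        while col and max(col) in low_to_col:
            col ^= cols[low_to_col[max(col)]]  # mod-2 column addition
        if col:
            low_to_col[max(col)] = j
            pairs.append((max(col), j))
    return pairs

# Boundary matrix of a filled triangle: vertices 0-2, edges 3-5, triangle 6.
triangle = [set(), set(), set(), {0, 1}, {1, 2}, {0, 2}, {3, 4, 5}]
print(reduce_boundary_matrix(triangle))  # [(1, 3), (2, 4), (5, 6)]
```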
Route planning applications designed for electric vehicles have to consider a number of additional constraints. With the limited range and comparatively long charging times, it is of utmost importance to consider energy consumption in routing applications. However, recently published algorithmic approaches for electric vehicle routing focus solely on specific aspects of this problem, such as optimizing energy consumption as single criterion. In this work, we present first steps towards a holistic framework for computing shortest paths for electric vehicles with limited range. This includes the possibility of driving instructions, such as driving speed adjustments to save energy, realistic modeling of battery charging procedures, and the integration of turn costs.
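As a toy illustration of range-constrained routing (not the paper's framework: driving-speed adjustments, realistic charging models, and turn costs are all omitted, and charging is idealized as an instant refill), one can search over (node, charge) labels:

```python
import heapq

def ev_quickest_route(graph, chargers, source, target, capacity):
    """Quickest route that never depletes the battery (toy model).
    graph: node -> [(neighbor, travel_time, energy_needed)], energies >= 0;
    at a charger node the battery refills to full (charging time ignored).
    Labels (time, node, charge) are settled in order of time; a label is
    pruned if an earlier label reached the node with at least as much charge."""
    heap = [(0, source, capacity)]
    best_charge = {}
    while heap:
        t, u, charge = heapq.heappop(heap)
        if u == target:
            return t
        if charge <= best_charge.get(u, -1):
            continue  # dominated by an earlier, better label
        best_charge[u] = charge
        if u in chargers:
            charge = capacity
        for v, dt, de in graph.get(u, []):
            if de <= charge:  # only edges the battery can sustain
                heapq.heappush(heap, (t + dt, v, charge - de))
    return None  # no feasible route

roads = {"A": [("B", 10, 50), ("C", 25, 30)], "B": [("C", 10, 50)]}
print(ev_quickest_route(roads, {"B"}, "A", "C", capacity=80))  # 20, via charger B
print(ev_quickest_route(roads, set(), "A", "C", capacity=80))  # 25, direct
```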
This special issue of Computer Science Review features seven papers on foundations of adaptive networked societies of tiny artefacts. The introduction describes the motivation for the special issue and briefly overviews the contributions. (C) 2010 Elsevier Inc. All rights reserved.
A sunflower in a hypergraph is a set of hyperedges pairwise intersecting in exactly the same vertex set. Sunflowers are a useful tool in polynomial-time data reduction for problems formalizable as d-Hitting Set, the problem of covering all hyperedges (whose cardinality is bounded from above by a constant d) of a hypergraph by at most k vertices. Additionally, in fault diagnosis, sunflowers yield concise explanations for "highly defective structures". We provide a linear-time algorithm that, by finding sunflowers, transforms an instance of d-Hitting Set into an equivalent instance comprising at most O(k^d) hyperedges and vertices. In terms of parameterized complexity, we show a problem kernel with asymptotically optimal size (unless coNP ⊆ NP/poly) and provide experimental results that show the practical applicability of our algorithm. Finally, we show that the number of vertices can be reduced to O(k^(d-1)) with additional processing in O(k^(1.5d)) time, nontrivially combining the sunflower technique with problem kernels due to Abu-Khzam and Moser.
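The core reduction idea can be sketched as follows (a brute-force illustration, not the paper's linear-time algorithm): if some nonempty core has a sunflower with more than k petals, every hitting set of size at most k must hit the core, so all hyperedges containing the core may be replaced by the core itself.

```python
from itertools import combinations

def sunflower_with_core(edges, core):
    """Greedily collect hyperedges containing `core` whose petals (parts
    outside the core) are pairwise disjoint -- a sunflower with that core."""
    core = frozenset(core)
    used, flower = set(), []
    for e in map(frozenset, edges):
        petal = e - core
        if core <= e and used.isdisjoint(petal):
            flower.append(e)
            used |= petal
    return flower

def sunflower_reduce(edges, k):
    """Brute-force sunflower rule: if a nonempty core has k + 1 disjoint
    petals, a hitting set avoiding the core would need k + 1 vertices, so
    every size-<=k hitting set hits the core, and all hyperedges containing
    the core can be replaced by the core itself."""
    edges = [frozenset(e) for e in edges]
    for e in edges:
        for size in range(1, len(e)):
            for core in map(frozenset, combinations(sorted(e), size)):
                if len(sunflower_with_core(edges, core)) > k:
                    reduced = [f for f in edges if not core <= f] + [core]
                    return sunflower_reduce(reduced, k)
    return edges

print(sunflower_reduce([{1, 2}, {1, 3}], k=1))  # [frozenset({1})]
```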
It has been a long-standing open problem to determine the exact randomized competitiveness of the 2-server problem, that is, the minimum competitiveness of any randomized online algorithm for the 2-server problem. For deterministic algorithms the best competitive ratio that can be obtained is 2, and no randomized algorithm is known that improves this ratio for general spaces. For the line, Bartal et al. (1998) [2] give a 155/78-competitive algorithm, but their algorithm is specific to the geometry of the line. We consider here the 2-server problem over the cross polytope space M_{2,4}. We obtain an algorithm with a competitive ratio of 19/12 and show that this ratio is best possible. This algorithm gives the second non-trivial example of a metric space with a better than 2-competitive ratio. The algorithm uses a design technique called the knowledge state technique, a method not specific to M_{2,4}. (C) 2010 Elsevier B.V. All rights reserved.
The energy efficiency of data processing is becoming more and more important for both economic and ecological reasons. In this paper, we take sorting of large data sets as a representative case study for data-intensive applications. Guided by theoretical algorithmic considerations and taking practical limitations into account, we carefully choose the components for building an energy-efficient computer for this task. These decisions are backed up by performance and power measurements of two competing options. Finally, we choose a low-power Intel Atom 330 processor, supported by four solid state disks, which have low power consumption and provide high bandwidth. Our sophisticated implementation of the sorting algorithms not only features great CPU efficiency: by employing overlapping, it loads all available resources in parallel, resulting in a good overall balance between I/O and computation. Using this setup, we beat the former records in the JouleSort category of the well-established Sort Benchmark for inputs from 10 GB to 1 TB of data, by factors of up to 5.1, usually without a penalty in running time. We break another JouleSort record using a standard server machine, which showcases the general energy-efficiency improvements in standard hardware over the years. Furthermore, we present the first-ever result in the 100 TB JouleSort category, on a large compute cluster. The results lead us to conclusions on how to design scalable energy-efficient systems for processing large data sets, such as combining relatively weak computing power with high-bandwidth storage devices. We also speculate on the consequences of future hardware for the Sort Benchmark contest, and identify certain problems, also relating to the monetary cost of energy. (C) 2011 Elsevier Inc. All rights reserved.
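The underlying sorting task is classical external merge sort; the sketch below shows its two phases (run formation and k-way merge) in plain Python, without the overlapping of I/O and computation that the paper's implementation adds.

```python
import heapq
import os
import tempfile

def _spill(sorted_lines):
    """Write one sorted run to a temporary file and return its path."""
    fd, path = tempfile.mkstemp(text=True)
    with os.fdopen(fd, "w") as f:
        f.writelines(sorted_lines)
    return path

def external_sort(lines, chunk_size=100_000):
    """Two-phase external merge sort: sort chunk_size-line runs in RAM,
    spill them to disk, then stream a k-way merge over all runs."""
    runs, chunk = [], []
    for line in lines:
        chunk.append(line)
        if len(chunk) >= chunk_size:
            runs.append(_spill(sorted(chunk)))
            chunk = []
    if chunk:
        runs.append(_spill(sorted(chunk)))
    files = [open(path) for path in runs]
    try:
        yield from heapq.merge(*files)  # each run is sorted, so the merge is sorted
    finally:
        for f in files:
            f.close()
        for path in runs:
            os.remove(path)

data = [f"{n:03d}\n" for n in (5, 3, 9, 1, 7, 2, 8)]
print(list(external_sort(data, chunk_size=3)) == sorted(data))  # True
```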
As the capacity and speed of flash memories in the form of solid state disks grow, they are becoming a practical alternative to standard magnetic drives. Currently, most solid state disks are based on NAND technology and are much faster than magnetic disks in random reads, while in random writes they generally are not. So far, large-scale LTL model checking algorithms have been designed to employ external memory optimized for magnetic disks. We propose algorithms optimized for flash memory access. In contrast to approaches relying on the delayed detection of duplicate states, in this work we design and exploit appropriate hash functions to re-invent immediate duplicate detection. For flash-memory-efficient on-the-fly LTL model checking, which aims at finding any counterexample to the specified LTL property, we study hash functions adapted to the two-level hierarchy of RAM and flash memory. For flash-memory-efficient off-line LTL model checking, which aims at generating a minimal counterexample and scans the entire state space at least once, we analyze the effect of outsourcing a memory-based perfect hash function from RAM to flash memory. Since the characteristics of flash memories differ from those of magnetic hard disks, the existing I/O complexity model is no longer sufficient. Therefore, we provide an extended model for the computation of the I/O complexity, adapted to flash memories, that better fits the observed behavior of our algorithms. (C) 2010 Elsevier B.V. All rights reserved.
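Immediate duplicate detection, the idea re-invented here for flash memory, checks every generated state against a hash table right away instead of buffering states for a later deduplication pass. The sketch below shows it with a plain in-RAM set and a toy transition system; the paper's actual contribution, hash functions adapted to the RAM/flash hierarchy, is not modeled.

```python
from collections import deque

def bfs_counterexample(initial, successors, is_error):
    """On-the-fly search with immediate duplicate detection: every newly
    generated state is looked up in a hash table at once."""
    visited = {initial}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        if is_error(s):
            return s  # counterexample state reached
        for t in successors(s):
            if t not in visited:  # immediate duplicate detection
                visited.add(t)
                queue.append(t)
    return None  # property holds on the reachable state space

# Toy "model": integer states mod 7 with two transitions each; state 5 is the error.
succ = lambda s: [(2 * s) % 7, (s + 3) % 7]
print(bfs_counterexample(1, succ, lambda s: s == 5))  # 5
```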
In this paper we develop an efficient implementation of a k-means clustering algorithm. The algorithm is based on a combination of Lloyd's algorithm with random swapping of centers to avoid local minima, an approach proposed by Mount (30). The novel feature of our algorithm is the use of coresets to speed up the computation. A coreset is a small weighted set of points that approximates the original point set with respect to the considered problem. We use a coreset construction described in (12). Our algorithm first computes a solution on a very small coreset. Then, in each iteration, the previous solution is used as a starting solution on a refined, i.e. larger, coreset. To evaluate the performance of our algorithm we compare it with algorithm KMHybrid (30) on typical 3D data sets for an image compression application and on artificially created instances. Our data sets consist of 300,000 to 4.9 million points. Our algorithm outperforms KMHybrid on most of these input instances. Additionally, the quality of the solutions computed by our algorithm deviates significantly less than that of KMHybrid. We conclude that the use of coresets has two effects: first, it can speed up algorithms significantly; secondly, in variants of Lloyd's algorithm, it reduces the dependency on the starting solution and thus makes the algorithm more stable. Finally, we propose the use of coresets as a heuristic to approximate the average silhouette coefficient of clusterings. The average silhouette coefficient is a measure of the quality of a clustering that is independent of the number of clusters k; hence, it can be used to compare the quality of clusterings for different values of k. To show the applicability of our approach we computed clusterings and approximate average silhouette coefficients for k = 1,...,100 on our input instances and discuss the performance of our algorithm in detail.
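Running Lloyd's algorithm on a small weighted summary instead of the full input is the essence of the coreset idea. The sketch below shows weighted Lloyd in one dimension; the coreset construction of (12) and the random center swapping are omitted, and all names and the toy data are illustrative.

```python
import random

def weighted_lloyd(points, weights, k, iters=20, seed=0):
    """Lloyd's algorithm on a weighted 1-D point set. Feeding it a small
    weighted coreset instead of the full input is what speeds up clustering."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        sums = [0.0] * k
        wsum = [0.0] * k
        for p, w in zip(points, weights):
            # assignment step: nearest center by squared distance
            i = min(range(k), key=lambda j: (p - centers[j]) ** 2)
            sums[i] += w * p
            wsum[i] += w
        # update step: weighted mean of each cluster
        centers = [sums[i] / wsum[i] if wsum[i] else centers[i]
                   for i in range(k)]
    return sorted(centers)

print(weighted_lloyd([0, 1, 2, 10, 11, 12], [1] * 6, k=2))  # [1.0, 11.0]
```

With unit weights this is plain Lloyd; a coreset call would pass far fewer points with larger weights, e.g. `weighted_lloyd([1, 11], [3, 3], k=2)`.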