Details
ISBN (digital): 9783642311253
ISBN (print): 9783642311246; 9783642311253
The schematization of simple polygons into octilinear polygons is a computational problem arising in the context of electronic CAD systems. In particular, it appears when the generation and propagation of electromagnetic noise in real-world multi-layer PCBs must be detected. In this paper we provide a fast and simple heuristic for solving this problem. An experimental evaluation over real-world data shows the efficiency and good performance of the proposed approach.
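The abstract does not spell out the heuristic itself, but the octilinear constraint it enforces is easy to illustrate: every edge direction must be a multiple of 45°. A minimal sketch of that constraint (the function name and the direction-only snapping are illustrative assumptions, not the paper's method):

```python
import math

OCT = math.pi / 4  # octilinear polygons allow only edge directions that are multiples of 45 degrees

def snap_direction(dx, dy):
    """Snap an edge vector to the nearest octilinear direction,
    preserving its length (illustrative; not the paper's heuristic)."""
    ang = round(math.atan2(dy, dx) / OCT) * OCT
    r = math.hypot(dx, dy)
    return (r * math.cos(ang), r * math.sin(ang))
```

Note that snapping each edge independently does not keep the polygon closed or simple; handling those side effects is precisely what a schematization heuristic must address.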
Details
ISBN (print): 9781611977042
Hyperbolicity is a graph parameter related to how much a graph resembles a tree with respect to distances. Its computation is challenging, as the main approaches consist in scanning all quadruples of the graph or using fast matrix multiplication as a building block, neither of which is practical for large graphs. In this paper, we propose and evaluate an approach that uses a hierarchy of distance-k dominating sets to reduce the search space. This technique, compared to the previous best practical algorithms, enables us to compute the hyperbolicity of graphs of unprecedented size (up to a million nodes).
Details
ISBN (print): 9783540779179
It has been a long-standing open problem to determine the exact randomized competitiveness of the 2-server problem, that is, the minimum competitiveness of any randomized online algorithm for the 2-server problem. For deterministic algorithms the best competitive ratio that can be obtained is 2, and no randomized algorithm is known that improves this ratio for general spaces. For the line, Bartal et al. [2] give a 155/78-competitive algorithm, but their algorithm is specific to the geometry of the line. We consider here the 2-server problem over the cross-polytope space M_{2,4}. We obtain an algorithm with a competitive ratio of 19/12 and show that this ratio is best possible. This algorithm gives the second nontrivial example of a metric space with a better than 2 competitive ratio. The algorithm uses a design technique called the knowledge state technique - a method not specific to M_{2,4}.
Details
ISBN (digital): 9783030247669
ISBN (print): 9783030247669; 9783030247652
This paper studies the r-range search problem for curves under the continuous Fréchet distance: given a dataset S of n polygonal curves and a threshold r > 0, construct a data structure that, for any query curve q, efficiently returns all entries in S with distance at most r from q. We propose FRESH, an approximate and randomized approach for r-range search that leverages a locality-sensitive hashing scheme for detecting candidate near neighbors of the query curve, and a subsequent pruning step based on a cascade of curve simplifications. We experimentally compare FRESH to exact and deterministic solutions, and we show that high performance can be reached by suitably relaxing precision and recall.
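As a reference point, the exact baseline that FRESH is compared against is a linear scan of Fréchet computations over the dataset. A sketch using the discrete Fréchet distance, a standard simplification of the continuous distance (the dataset and function names here are illustrative):

```python
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polygonal curves P and Q,
    given as lists of 2D points (a stand-in for the continuous distance)."""
    def d(i, j):
        return ((P[i][0] - Q[j][0]) ** 2 + (P[i][1] - Q[j][1]) ** 2) ** 0.5

    @lru_cache(maxsize=None)
    def c(i, j):  # coupling distance of prefixes P[:i+1], Q[:j+1]
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(P) - 1, len(Q) - 1)

def range_search(S, q, r):
    """Exact r-range search by linear scan: all curves within distance r of q."""
    return [i for i, P in enumerate(S) if discrete_frechet(P, q) <= r]
```

FRESH avoids exactly this all-pairs scan by hashing curves so that only a small candidate set reaches the (expensive) distance computation.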
Details
ISBN (print): 3540345973
The increasing amount of data to be processed by computers has led to the need for highly efficient algorithms for various computational problems. Moreover, the algorithms should be as simple as possible to be practically applicable. In this paper we propose a very simple approximation algorithm for finding the diameter and the radius of an undirected graph. The algorithm runs in O(m√n) time and gives an additive error of O(√n) for a graph with n vertices and m edges. Practical experiments show that the results of our algorithm are close to the optimum and compare favorably to the 2/3-approximation algorithm for the diameter problem by Aingworth et al. [1].
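The flavor of such an algorithm, running BFS from roughly √n vertices and reporting the largest eccentricity seen, can be sketched as follows. The uniform random sampling and the constants here are illustrative simplifications, not the paper's exact procedure:

```python
import random
from collections import deque

def eccentricity(adj, src):
    """Largest BFS distance from src in an unweighted connected graph."""
    dist = {src: 0}
    q = deque([src])
    far = 0
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                far = max(far, dist[v])
                q.append(v)
    return far

def approx_diameter(adj, seed=0):
    """Lower-bound the diameter with BFS from ~sqrt(n) sampled vertices,
    for O(m*sqrt(n)) total work (illustrative sketch of the sampling idea)."""
    rng = random.Random(seed)
    nodes = list(adj)
    k = max(1, int(len(nodes) ** 0.5))
    sample = rng.sample(nodes, k)
    return max(eccentricity(adj, u) for u in sample)
```

Each BFS costs O(m), so √n of them give the stated O(m√n) bound; any eccentricity is a valid lower bound on the diameter, which is why the estimate never overshoots.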
Details
ISBN (print): 9781450369091
Determining travel demand within a region of interest takes a considerable calibration effort, requiring transportation surveys, traffic counts, and empirical trip volumes. However, there is a need for demand calculation without substantial calibration, for example to generate large-scale benchmark data for evaluating transportation algorithms. In this work, we present several approaches for demand calculation that take as input only publicly available data, such as population and POI densities. Our algorithms build upon the recently proposed radiation model, which is inspired by job search models in economics. We show that a straightforward implementation of the radiation model does not scale to continental road networks, taking months even on a modern 16-core server. Therefore, we introduce more scalable implementations, substantially decreasing the running time by five orders of magnitude from months to seconds. An extensive experimental evaluation shows that the output of our algorithms is in accordance with demand data used in production systems. Compared to simple approaches previously used in algorithmic publications to generate benchmark data, our algorithms output demand data of better quality, take less time, and have similar implementation complexity.
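The radiation model has a closed form: with source population m_i, destination population n_j, s_ij the population within distance d(i,j) of i (excluding i and j themselves), and T_i the trips leaving i, the expected trips from i to j are T_ij = T_i · m_i·n_j / ((m_i + s_ij)(m_i + n_j + s_ij)). A straightforward quadratic implementation, the kind the paper shows does not scale to continental networks, assuming explicit population and distance arrays:

```python
def radiation_flux(m_i, n_j, s_ij, T_i):
    """Expected trips i->j under the radiation model."""
    return T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij))

def demand_matrix(pop, dist, out_trips):
    """Naive radiation-model demand: for each source, sort destinations by
    distance and prefix-sum their populations to obtain s_ij.
    O(n^2 log n) overall -- the straightforward, non-scalable variant."""
    n = len(pop)
    T = [[0.0] * n for _ in range(n)]
    for i in range(n):
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: dist[i][j])
        s = 0.0  # population strictly closer to i than the current destination
        for j in order:
            T[i][j] = radiation_flux(pop[i], pop[j], s, out_trips[i])
            s += pop[j]
    return T
```

The per-source sort plus prefix sum is what makes this quadratic in the number of zones; the scalable implementations in the paper avoid materializing all n^2 pairs.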
Details
ISBN (print): 9798400701207
Current business activity scheduling systems are designed to maximise efficiency in terms of business metrics, but rarely take personal preferences into account. The goal of this work is to provide a formalisation and an experimental test of a SCRUM-like personalised scheduling system based on many-objective optimisation. We propose a many-objective optimisation framework in which the a priori preferences of the human decision makers (DMs) target different regions of the search space, from which the optimisation retrieves alternative solutions that allow a posteriori decision making. The implementation of the proposed approach is based on a combination of NSGA-II and two custom operators aimed at proximity-enhanced mutation and search-space mapping. Our results show that the proposed method is able to generate diverse schedules. However, the aspiration values (a priori preferences) used in different system configurations notably influence the result characteristics, calling for a detailed analysis of the inter-relationships between preferences and the shape of the resulting Pareto front. The modelling approach is considered a successful proof of principle for the intended use case at an abstract level.
Details
ISBN (print): 1595933409
In this paper we develop an efficient implementation of a k-means clustering algorithm. The algorithm is based on a combination of Lloyd's algorithm with random swapping of centers to avoid local minima, an approach proposed by Mount (30). The novel feature of our algorithm is the use of coresets to speed up the computation. A coreset is a small weighted set of points that approximates the original point set with respect to the considered problem; we use the coreset construction described in (12). Our algorithm first computes a solution on a very small coreset. Then, in each iteration, the previous solution is used as a starting solution on a refined, i.e. larger, coreset. To evaluate the performance of our algorithm we compare it with the algorithm KMHybrid (30) on typical 3D data sets for an image compression application and on artificially created instances. Our data sets consist of 300,000 to 4.9 million points. Our algorithm outperforms KMHybrid on most of these input instances. Additionally, the quality of the solutions computed by our algorithm varies significantly less than that of KMHybrid. We conclude that the use of coresets has two effects. First, it can speed up algorithms significantly. Second, in variants of Lloyd's algorithm, it reduces the dependency on the starting solution and thus makes the algorithm more stable. Finally, we propose the use of coresets as a heuristic to approximate the average silhouette coefficient of clusterings. The average silhouette coefficient is a measure of clustering quality that is independent of the number of clusters k; hence, it can be used to compare clusterings for different values of k. To show the applicability of our approach we computed clusterings and approximate average silhouette coefficients for k = 1,...,100 on our input instances and discuss the performance of our algorithm in detail.
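The core mechanism, running Lloyd's iterations on a small weighted point set instead of the full input, can be sketched as follows. This is a minimal weighted Lloyd's loop only; the coreset construction of (12), the random center swaps, and the coreset-refinement schedule are deliberately omitted:

```python
import random

def lloyd_weighted(points, weights, k, iters=20, seed=0):
    """Lloyd's k-means on a weighted point set, as one would run it on a
    coreset: each point contributes with its weight to the centroid update."""
    rng = random.Random(seed)
    dim = len(points[0])
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        clusters = [[] for _ in range(k)]
        for p, w in zip(points, weights):
            c = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centers[i])))
            clusters[c].append((p, w))
        # update step: weighted centroid of each non-empty cluster
        for i, cl in enumerate(clusters):
            if cl:
                tw = sum(w for _, w in cl)
                centers[i] = tuple(sum(w * p[d] for p, w in cl) / tw
                                   for d in range(dim))
    return centers
```

Because a coreset has far fewer points than the input, each assignment step is proportionally cheaper, which is where the claimed speedup comes from.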
Details
ISBN (print): 9781614995890; 9781614995883
Given a simple graph G = (V, E), the problem dealt with in this paper asks to transform G into a disjoint union of cliques by removing only a minimum number of edges. This optimization problem is known to be NP-hard and is referred to as the Cluster Deletion (CD) problem. This observation has motivated us to improve the chances of finding a disjoint union of cliques within a limited amount of search. To this end, we propose an encoding of CD as a Weighted Constraint Satisfaction Problem (WCSP), a framework which has been widely used for solving hard combinatorial problems. We compare our approach with a fixed-parameter tractable (FPT) algorithm, one of the most widely used algorithms for solving the Cluster Deletion problem, and experimentally show that the best results are obtained with the new encoding. We report a comparison of the quality and running times of the WCSP and FPT algorithms on both random graphs and protein similarity graphs derived from the COG dataset.
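The target structure is easy to characterize: a graph is a disjoint union of cliques iff it has no induced path on three vertices (P3). That gives a direct brute-force formulation of CD for tiny instances, useful as a correctness oracle for either solver (function names here are illustrative; this exponential search is obviously not the WCSP or FPT approach):

```python
from itertools import combinations

def is_cluster_graph(n, edges):
    """True iff the graph on vertices 0..n-1 is a disjoint union of cliques,
    i.e. contains no induced P3 (exactly 2 edges among some vertex triple)."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for u, v, w in combinations(range(n), 3):
        if (v in adj[u]) + (w in adj[u]) + (w in adj[v]) == 2:
            return False
    return True

def cluster_deletion_bruteforce(n, edges):
    """Minimum number of edge deletions turning the graph into a cluster
    graph, by trying all deletion sets in increasing size (tiny n only)."""
    for k in range(len(edges) + 1):
        for removed in combinations(edges, k):
            kept = [e for e in edges if e not in removed]
            if is_cluster_graph(n, kept):
                return k
    return len(edges)
```

The induced-P3 characterization is also what FPT branching algorithms for CD exploit: each P3 found forces the deletion of one of its two edges.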
We study the computation of good alternatives to the shortest path in road networks. Our approach is based on single via-node routing on top of contraction hierarchies and achieves superior quality and efficiency comp...
Details