Many NP-complete problems on planar graphs are "fixed-parameter tractable": recent theoretical work provided tree decomposition-based fixed-parameter algorithms exactly solving various parameterized problems on planar graphs, among others VERTEX COVER, in time O(c^√k · n). Here, c is some constant depending on the graph problem to be solved, n is the number of graph vertices, and k is the problem parameter (for VERTEX COVER this is the size of the vertex cover). In this paper, we present an experimental study of such tree decomposition-based algorithms, focusing on VERTEX COVER. We demonstrate that the tree decomposition-based approach provides a valuable way of exactly solving VERTEX COVER on planar graphs. In doing so, we also demonstrate the impressive power of the so-called Nemhauser/Trotter theorem, which provides a VERTEX COVER-specific, extremely useful data reduction through polynomial-time preprocessing. Altogether, this underpins the practical importance of the underlying theory. (C) 2004 Elsevier B.V. All rights reserved.
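For intuition about what "fixed-parameter tractable in k" means for VERTEX COVER, the textbook O(2^k · n) bounded search tree (an illustrative sketch, distinct from the tree decomposition-based algorithms and the Nemhauser/Trotter preprocessing the paper actually studies) fits in a few lines of Python:

```python
def vertex_cover_branch(edges, k):
    """Decide whether the graph has a vertex cover of size <= k.

    Classic bounded search tree: pick any uncovered edge (u, v); at
    least one endpoint must be in every cover, so branch on both.
    The recursion depth is at most k, giving an O(2^k)-size tree.
    """
    if not edges:
        return True          # every edge is covered
    if k == 0:
        return False         # edges remain but no budget left
    u, v = next(iter(edges))
    # Branch 1: put u in the cover and drop all edges incident to u.
    rest_u = {e for e in edges if u not in e}
    # Branch 2: put v in the cover instead.
    rest_v = {e for e in edges if v not in e}
    return vertex_cover_branch(rest_u, k - 1) or vertex_cover_branch(rest_v, k - 1)
```

For example, a triangle has no vertex cover of size 1 but does have one of size 2.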
We present a framework for the automated generation of exact search tree algorithms for NP-hard problems. The purpose of our approach is twofold: rapid development and improved upper bounds. Many search tree algorithms for various problems in the literature are based on complicated case distinctions. Our approach may lead to a much simpler process of developing and analyzing these algorithms. Moreover, using the sheer computing power of machines, it may also lead to improved upper bounds on search tree sizes (i.e., faster exact solving algorithms) in comparison with previously developed "hand-made" search trees. One such example is the NP-complete Cluster Editing problem (also known as Correlation Clustering on complete unweighted graphs), which asks for the minimum number of edge additions and deletions needed to turn a graph into a disjoint union of cliques. The hand-made search tree for Cluster Editing had worst-case size O(2.27^k), which our new method improves to O(1.92^k). (Herein, k denotes the number of edge modifications allowed.)
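The baseline that such automated case analysis refines is the trivial O(3^k) search tree: a graph is a disjoint union of cliques exactly when it contains no "conflict triple" (u, v, w) with edges uv and uw but no edge vw, and each conflict can be resolved by one of three edits. A minimal Python sketch of that baseline (illustrative, not the paper's generated algorithm):

```python
def cluster_editing(n, edges, k):
    """Decide whether <= k edge edits turn the graph on vertices
    0..n-1 into a disjoint union of cliques.

    Naive O(3^k) search tree: find a conflict triple (u, v, w) with
    uv, uw present but vw absent, and branch on the three edits that
    resolve it (delete uv, delete uw, or insert vw).
    """
    E = {frozenset(e) for e in edges}

    def find_conflict():
        for u in range(n):
            nbrs = [v for v in range(n) if v != u and frozenset((u, v)) in E]
            for i in range(len(nbrs)):
                for j in range(i + 1, len(nbrs)):
                    v, w = nbrs[i], nbrs[j]
                    if frozenset((v, w)) not in E:
                        return u, v, w
        return None

    def solve(budget):
        if budget < 0:
            return False
        triple = find_conflict()
        if triple is None:
            return True  # already a disjoint union of cliques
        u, v, w = triple
        for edit in (frozenset((u, v)), frozenset((u, w)), frozenset((v, w))):
            E.symmetric_difference_update({edit})  # toggle the edge
            if solve(budget - 1):
                E.symmetric_difference_update({edit})  # undo before returning
                return True
            E.symmetric_difference_update({edit})      # undo and try next branch
        return False

    return solve(k)
```

On the path 0-1-2, one edit suffices (delete either edge, or add the chord 0-2), while zero edits do not.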
The goal of this paper is to prove the applicability of algorithm engineering and software design concepts to geometric computing through a vertical case study on the implementation of planar point location algorithms. The work is presented within the framework of the GEOMLIB project, aimed at developing an easy-to-use, reliable, and flexible library of robust and efficient geometric algorithms. We present the criteria that have inspired the preliminary design of GEOMLIB and discuss the guidelines that we have followed in the initial implementation.
A wide range of practical applications would benefit from automatically generated graphical representations of database schemas, in which tables are represented by boxes, and table attributes correspond to distinct stripes inside each table. Links, connecting attributes of two different tables, represent referential constraints or join relationships, and may attach arbitrarily to the left- or to the right-hand side of the stripes representing the attributes. To our knowledge no drawing technique is available to automatically produce diagrams in such a strongly constrained drawing convention. In this paper we provide a polynomial-time algorithm for solving this problem, and test its efficiency and effectiveness against a large test suite. We also describe an implementation of a system that uses this algorithm, and we study the main methodological problems we faced in developing such a technology. Copyright (C) 2002 John Wiley & Sons, Ltd.
An algorithm component is an implementation of an algorithm which is not intended to be a stand-alone module, but to perform a specific task within a large software package or even within several distinct software packages. Therefore, the design of algorithm components must also incorporate software-engineering aspects. A key design goal is adaptability. This goal is important for maintenance throughout a project, prototypical development, and reuse in new, unforeseen contexts. From a theoretical viewpoint most algorithms apply to a range of possible use scenarios. Ideally, each algorithm is implemented by one algorithm component, which is easily, safely, and efficiently adaptable to all of these contexts. Various techniques have been developed for the design and implementation of algorithm components. However, a common basis for systematic, detailed evaluations and comparisons in view of the real practical needs is still missing. Basically, this means a set of concrete criteria, which specify what sort of adaptability is really required in practice, and which are well-justified by convincing, representative use scenarios. This paper is intended to be a first "milestone" on the way towards such a system of criteria. We will present a set of concrete goals, which are general and problem-independent and might appear ubiquitously in the algorithmic realm. These goals are illustrated, motivated, and justified by an extensive requirements analysis for a particular algorithm from a particular algorithmic domain: Dijkstra's algorithm for shortest paths in networks. Clearly, the field of algorithmics might be too versatile to allow a comprehensive, yet concise set of precise, justified criteria. Even a domain as restricted as graph and network algorithms includes aspects that are not fully understood. The analysis will include a discussion of the limits of the case study and the scope of the goals. The case study was chosen because it seems to be close to the "borderline" b
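A minimal sketch of the kind of adaptable shortest-path component the requirements analysis argues for: the graph is supplied behaviorally (a successor function and a weight function) rather than as one fixed data structure, so the same component serves adjacency lists, grids, or implicit graphs. The interface names here are illustrative assumptions, not taken from the paper:

```python
import heapq

def dijkstra(source, neighbors, weight):
    """Shortest-path distances from `source` under non-negative weights.

    `neighbors(u)` yields the successors of u; `weight(u, v)` returns
    the length of edge (u, v). Supplying the graph through callables
    lets one component adapt to many use scenarios without change.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already improved
        for v in neighbors(u):
            nd = d + weight(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Adapting it to an ordinary adjacency-dict graph then takes two lambdas rather than a reimplementation.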
In the late eighties and early nineties, three major exciting new developments (and some ramifications) in the computation of minimum capacity cuts occurred, and these developments motivated us to evaluate the old and new methods experimentally. We provide a brief overview of the most important algorithms for the minimum capacity cut problem and compare these methods on both problem instances from the literature and problem instances originating from the solution of the traveling salesman problem by branch-and-cut.
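One representative minimum capacity cut algorithm from that period, the maximum-adjacency-search approach in its Stoer/Wagner form, admits a compact sketch (illustrative only; the survey compares several methods, and this is not claimed to be one of the implementations it benchmarks):

```python
import math

def stoer_wagner(weights):
    """Global minimum cut value of a connected undirected weighted graph.

    `weights` is a symmetric dict-of-dicts: weights[u][v] = capacity.
    Each phase orders the vertices by maximum adjacency to the growing
    set; the last vertex's attachment weight is a cut, and merging the
    last two vertices preserves the global minimum.
    """
    graph = {u: dict(nbrs) for u, nbrs in weights.items()}
    best = math.inf
    while len(graph) > 1:
        w_to_A = {v: 0 for v in graph}
        in_A, order = set(), []
        for _ in range(len(graph)):
            u = max((v for v in graph if v not in in_A), key=w_to_A.get)
            in_A.add(u)
            order.append(u)
            for v, w in graph[u].items():
                if v not in in_A:
                    w_to_A[v] += w
        s, t = order[-2], order[-1]
        best = min(best, w_to_A[t])  # "cut of the phase"
        # Merge t into s, summing parallel edge capacities.
        for v, w in graph[t].items():
            if v != s:
                graph[s][v] = graph[s].get(v, 0) + w
                graph[v][s] = graph[v].get(s, 0) + w
            graph[v].pop(t, None)
        graph[s].pop(t, None)
        del graph[t]
    return best
```

On a 4-cycle with unit capacities the minimum cut has value 2 (any two opposite edges).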
This paper presents an application of a customized linear programming (LP) based model predictive control strategy to the paper machine cross direction (CD) control problem. The objective of CD control is to maintain flat profiles of variables of interest by minimizing worst-case deviations from setpoints (defects). These control problems can have as many as 200 actuators (inputs) and 400 sensor measurements (outputs). This large size, coupled with the stringent real-time requirement of computing a control move in a few seconds, poses a very challenging control problem. Computational results that demonstrate the effectiveness of this strategy are presented. For typical disturbances this algorithm can compute provably optimal control moves for a 400-input × 400-output control problem in approximately 5 s, versus approximately 90 s for a generic LP algorithm on an HP 9000/770 workstation. (C) 1999 Elsevier Science Ltd. All rights reserved.
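The worst-case deviation objective reduces to a plain LP through the standard epigraph reformulation: minimize t subject to -t ≤ d + G·u ≤ t, where u is the control move, d the predicted deviation, and G the plant response. A toy sketch with made-up data, using a generic solver (scipy's linprog) rather than the paper's customized one:

```python
import numpy as np
from scipy.optimize import linprog

def minmax_control_move(G, d):
    """Minimize max_i |d_i + (G u)_i| over the control move u.

    Epigraph trick: introduce a scalar t and require
    G u - t <= -d  and  -G u - t <= d, then minimize t.
    Returns (optimal u, optimal worst-case deviation t).
    """
    m, p = G.shape
    c = np.r_[np.zeros(p), 1.0]            # variables: u (p of them), then t
    A_ub = np.block([[G, -np.ones((m, 1))],
                     [-G, -np.ones((m, 1))]])
    b_ub = np.r_[-d, d]
    bounds = [(None, None)] * p + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:p], res.x[-1]
```

With G = [[1], [1]] and d = [1, -1], no single move can shrink both deviations, so the optimum is t = 1 at u = 0.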
Megson and Comish (1994) have defined techniques using a systematic approach to derive a parallel algorithm for Riccati type equations. In this paper these techniques will be used to derive a full three-dimensional Hierarchical Signal Flow Graph (HSFG) for the square-root covariance Kalman filter (SRCKF) algorithm. This novel flow graph involves matrix-matrix multiplication, Givens rotations and the Schur complement algorithms. Simple projection techniques detailed in (Megson, 1992; S.V. Rajopadhye and R.M. Fujimoto, 1990) are used to form two-dimensional descriptions of the algorithm. From these, parallel algorithms for the SRCKF can be generated using algorithm engineering techniques. As an extended example of the algorithm engineering techniques given by Megson (1992, 1994), the parallel algorithms in the form of systolic architectures proposed by Gaston et al. (1990) and by Brown and Gaston (1994) are derived in detail.
Text searching programs such as the UNIX system tools grep and egrep require more than just good algorithms; they need to make efficient use of system resources such as I/O. I describe improving the I/O management in grep and egrep by using a new fast I/O library fio to replace the normal I/O library stdio. I also describe incorporating the Boyer-Moore algorithm into egrep; egrep is now typically 8–10 (for some common patterns 30–40) times faster than grep.
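The Boyer-Moore idea, shown here in its Horspool simplification rather than whichever variant egrep adopted, is what makes long common patterns fast: the pattern is compared right to left, and a mismatch lets the search skip ahead by more than one position. A small sketch:

```python
def horspool_search(pattern, text):
    """Return the index of the first occurrence of `pattern` in `text`,
    or -1 if absent (Boyer-Moore-Horspool).

    On a mismatch, shift by the bad-character distance of the text
    character aligned with the pattern's last position; characters not
    in the pattern allow a full length-m skip, which is why common
    patterns scan far less than the whole text.
    """
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # Shift table: distance from each character's last occurrence
    # (excluding the final position) to the end of the pattern.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    i = 0
    while i <= n - m:
        j = m - 1
        while j >= 0 and text[i + j] == pattern[j]:
            j -= 1
        if j < 0:
            return i
        i += shift.get(text[i + m - 1], m)
    return -1
```

The longer the pattern, the larger the typical skip, matching the report that some common patterns see the biggest speedups.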