ISBN (print): 9781450380539
We prove several tight results on the fine-grained complexity of approximating the diameter of a graph. First, we prove that, for any ε > 0, assuming the Strong Exponential Time Hypothesis (SETH), there are no near-linear time (2 − ε)-approximation algorithms for the Diameter of a sparse directed graph, even in unweighted graphs. This result shows that a simple near-linear time 2-approximation algorithm for Diameter is optimal under SETH, answering a question from a survey of Rubinstein and Vassilevska-Williams (SIGACT '19) for the case of directed graphs. In the same survey, Rubinstein and Vassilevska-Williams also asked if it is possible to show that there are no (2 − ε)-approximation algorithms for Diameter in a directed graph running in O(n^{1.499}) time. We show that, assuming a hypothesis called NSETH, one cannot use a deterministic SETH-based reduction to rule out the existence of such algorithms. Extending the techniques in these two results, we characterize, for every δ ∈ (0, 1) and essentially every ε ∈ (0, 1), whether a (2 − ε)-approximation algorithm running in time O(n^{1+δ}) for the Diameter of a sparse directed unweighted graph can be ruled out by a deterministic SETH-based reduction, assuming NSETH. This settles the SETH-hardness of approximating the diameter of sparse directed unweighted graphs for deterministic reductions, up to NSETH. We make the same characterization for randomized SETH-based reductions, assuming another hypothesis called NUNSETH. We prove additional hardness and non-reducibility results for undirected graphs.
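The near-linear time 2-approximation referred to above is, for directed unweighted graphs, a single forward and backward BFS from an arbitrary vertex. Below is a minimal sketch (plain Python, assuming adjacency lists for the graph and its reverse and a strongly connected digraph), not code from the paper:

```python
from collections import deque

def bfs_ecc(adj, src):
    """Eccentricity of src in an unweighted graph given by adjacency lists."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def diameter_2_approx(adj, radj, v):
    """Return an estimate E with E <= D <= 2 * E for the diameter D.

    adj: out-neighbour lists; radj: in-neighbour lists (the reversed graph).
    For any pair (a, b): d(a, b) <= d(a, v) + d(v, b) <= ecc_in(v) + ecc_out(v),
    so D <= ecc_in + ecc_out <= 2 * max(ecc_in, ecc_out), while each eccentricity
    is itself a shortest-path distance and hence at most D.
    """
    ecc_out = bfs_ecc(adj, v)    # max over u of d(v, u)
    ecc_in = bfs_ecc(radj, v)    # max over u of d(u, v)
    return max(ecc_out, ecc_in)
```

Under SETH, the result stated above says that no near-linear time algorithm can improve on this factor of 2 for sparse directed graphs.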
ISBN (print): 9781509057078
For standard algorithms verifying positive definiteness of a matrix A ∈ M_n(ℝ) based on Sylvester's criterion, the computationally pessimistic case is the one in which A is positive definite. We present an algorithm realizing the same task for A ∈ M_n(ℤ), for which the case when A is positive definite is the optimistic one. The algorithm relies on performing certain edge transformations, called inflations, on the signed graph (bigraph) Δ = Δ(A) associated with A. We provide a few variants of the algorithm, including Las Vegas type randomized ones (with a precisely described maximal number of steps). The algorithms work very well in practice, in many cases faster than the standard tests. On the other hand, our results provide an interesting example of an application of symbolic computing methods originally developed for different purposes, with considerable potential for further generalizations in matrix problems.
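For reference, the "standard algorithm" being contrasted is a Sylvester-criterion test. A minimal exact-arithmetic sketch follows (this is not the inflation/bigraph algorithm of the paper); note that a positive definite input forces all n pivots to be processed, which is why that case is the pessimistic one for the standard approach:

```python
from fractions import Fraction

def is_positive_definite(A):
    """Sylvester-style test for a symmetric integer matrix A (list of lists).

    Exact Gaussian elimination without row exchanges: A is positive definite
    iff every pivot is positive, since the k-th leading principal minor equals
    the product of the first k pivots.
    """
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    for k in range(n):
        pivot = M[k][k]
        if pivot <= 0:
            return False
        for i in range(k + 1, n):
            factor = M[i][k] / pivot
            for j in range(k, n):
                M[i][j] -= factor * M[k][j]
    return True

print(is_positive_definite([[2, -1], [-1, 2]]))   # True
print(is_positive_definite([[1, 2], [2, 1]]))     # False
```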
Continuous condition monitoring and inspection of under-construction highway retaining walls is essential to ensure that construction performance criteria are met. The use of LIDAR systems by the construction industry...
The study proposes an automated procedure for linking an identification algorithm implemented in a general-purpose environment (MatLab) with a commercial finite-element code for magnetic field analysis (VF-Opera). The method is used to identify the B-H curves of anisotropic magnetic laminations in the direction normal to the sheet surface, with the aid of a multiprocessor computer to benefit from parallel computing. The proposed approach, based on a non-deterministic algorithm, makes it possible to determine the entire B-H curve and is therefore an advance on the commonly used assumption of constant permeability.
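As a rough illustration of such a coupling, the sketch below runs a non-deterministic (random-search) identification loop against a stand-in for the field computation; toy_field_solution, the grid of H values, and all constants are hypothetical placeholders, not the paper's MatLab/VF-Opera interface:

```python
import random

def toy_field_solution(bh_points):
    """Stand-in for the external FE computation (VF-Opera in the paper):
    here just a toy scalar derived from the candidate curve, so the loop
    runs end-to-end without an external solver."""
    return sum(b for _, b in bh_points) / len(bh_points)

def identify_bh_curve(measured_value, h_grid, iterations=200, step=0.05, seed=None):
    """Random-search identification of B values on a fixed grid of H samples.

    A non-deterministic scheme in the spirit of the abstract: perturb the
    current candidate curve and keep the perturbation whenever it reduces the
    mismatch between the computed and measured quantity.
    """
    rng = random.Random(seed)
    b = [1.0] * len(h_grid)   # initial guess for B at each H sample
    best_err = abs(toy_field_solution(list(zip(h_grid, b))) - measured_value)
    for _ in range(iterations):
        trial = sorted(max(0.0, bi + rng.uniform(-step, step)) for bi in b)  # keep B monotone
        err = abs(toy_field_solution(list(zip(h_grid, trial))) - measured_value)
        if err < best_err:
            b, best_err = trial, err
    return list(zip(h_grid, b)), best_err

curve, err = identify_bh_curve(measured_value=1.6, h_grid=[100, 500, 1000, 5000])
print(err)   # mismatch shrinks as the candidate curve is refined
```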
Carrier-sense Medium Access Control (MAC) protocols such as the IEEE 802.11 Distributed Coordination Function (DCF) avoid collisions by holding up pending packet transmission requests when a carrier signal is observed above a certain threshold. However, this often results in unnecessarily conservative communication, making it difficult to maximize the utilization of the spatial spectral resource. This paper shows that a higher aggregate throughput can be achieved by allowing more concurrent communications and adjusting the communication distance on the fly, which requires addressing two issues. First, carrier-sense-based MAC protocols do not allow aggressive communication attempts within the carrier-sensing range. Second, the communication distance is generally neither short nor adjustable, because multihop routing protocols strive to provide minimum-hop paths. This paper proposes a new MAC algorithm, called Multiple Access with Salvation Army (MASA), which adopts less sensitive carrier sensing to promote more concurrent communications and adjusts the communication distance adaptively via "packet salvaging" at the MAC layer. Extensive simulation based on ns-2 shows that MASA outperforms the DCF, particularly in terms of packet delay. We also discuss the implementation of MASA based on the DCF specification.
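The core tradeoff, as stated above: less sensitive carrier sensing (a higher sensing threshold) permits more concurrent transmissions, provided the communication distance is shortened so the intended receiver tolerates the added interference. The toy sketch below illustrates that decision rule with a log-distance path-loss model and made-up constants; it is not MASA's actual mechanism or parameter set:

```python
import math

def received_power(tx_power_dbm, distance_m, path_loss_exp=3.0, ref_loss_db=40.0):
    """Toy log-distance path-loss model (constants are illustrative only)."""
    return tx_power_dbm - ref_loss_db - 10 * path_loss_exp * math.log10(max(distance_m, 1.0))

def may_transmit(sensed_powers_dbm, cs_threshold_dbm):
    """Carrier-sense rule: defer if any sensed signal exceeds the threshold.

    A less sensitive setting (higher cs_threshold_dbm) lets more nearby
    transmissions proceed concurrently, which is the effect exploited here,
    as long as the receiver is close enough to tolerate the extra interference
    (hence the adaptively shortened communication distance).
    """
    return all(p < cs_threshold_dbm for p in sensed_powers_dbm)

# Two ongoing transmitters, 100 m and 300 m away, each at 20 dBm:
sensed = [received_power(20, 100), received_power(20, 300)]   # ~ -80 and ~ -94 dBm
print(may_transmit(sensed, cs_threshold_dbm=-82))   # False: conservative (DCF-like) setting defers
print(may_transmit(sensed, cs_threshold_dbm=-75))   # True: less sensitive setting transmits concurrently
```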
This paper considers conditions for generating optimal two-level AND-EXOR representations using rewrite rules. Four results are presented. First, a necessary condition for obtaining minimality is a temporary increase in the size of expressions during minimization. Second, a sufficient condition for obtaining minimality consists of adding two specific rules to rule sets proposed in the literature. Third, we define transformations that allow the minimization of an expression to proceed by minimizing a transformed expression instead. Fourth, we determine experimentally that the above three theoretical results lead to better benchmark results as well.
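To make the rewrite-rule setting concrete, the sketch below manipulates an EXOR sum of product terms (cubes) using two elementary identities, t ⊕ t = 0 and x·t ⊕ x̄·t = t; the reverse of the second (splitting one term into two) is the kind of step that temporarily enlarges an expression. The representation and rules here are illustrative, not the paper's rule set:

```python
from itertools import combinations

# A cube is a frozenset of literals (variable, polarity); an expression (ESOP)
# is a list of cubes whose values are combined with XOR.

def cancel_duplicates(esop):
    """t XOR t = 0: drop cube pairs that are identical (mod-2 counting)."""
    counts = {}
    for cube in esop:
        counts[cube] = counts.get(cube, 0) ^ 1
    return [cube for cube, bit in counts.items() if bit]

def merge_once(esop):
    """x*t XOR x'*t = t: merge one pair of cubes differing only in one polarity."""
    for i, j in combinations(range(len(esop)), 2):
        a, b = esop[i], esop[j]
        diff = a ^ b                     # symmetric difference of literal sets
        if len(diff) == 2:
            (v1, p1), (v2, p2) = diff
            if v1 == v2 and p1 != p2:    # same variable, opposite polarity
                merged = a & b
                rest = [esop[k] for k in range(len(esop)) if k not in (i, j)]
                return rest + [merged]
    return esop

# Example: x*y XOR x*y'  ->  x   (two terms become one)
x_y  = frozenset({("x", True), ("y", True)})
x_ny = frozenset({("x", True), ("y", False)})
print(merge_once(cancel_duplicates([x_y, x_ny])))   # [frozenset({('x', True)})]
```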
A recursive version of the Turing machine model is used to analyze the time and storage complexity of recursive algorithms. Hierarchy theorems are proven for time and for width of recursion (the amount of storage used at a level). A particular language is shown to be the “hardest” language to recognize without recursion. Previous results relating recursive and non-recursive time bounded computations are sharpened.
GEDANKEN is an experimental programming language with the following characteristics. (1) Any value which is permitted in some context of the language is permissible in any other meaningful context. In particular, functions and labels are permissible results of functions and values of variables. (2) Assignment and indirect addressing are formalized by introducing values, called references, which in turn possess other values. The assignment operation always affects the relation between some reference and its value. (3) All compound data structures are treated as functions. (4) Type declarations are not permitted. The functional approach to data structures and the use of references insure that any process which accepts some data structure will accept any logically equivalent structure, regardless of its internal representation. More generally, any data structure may be implicit; i.e. it may be specified by giving an arbitrary algorithm for computing or accessing its components. The existence of label variables permits the construction of co-routines, quasi-parallel processes, and other unorthodox control mechanisms. A variety of programming examples illustrates the generality of the language. Limitations and possible extensions are discussed briefly.
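Characteristics (2) and (3) can be mimicked in a modern language. The sketch below is an illustration in Python, not GEDANKEN syntax: a Ref object models references and assignment, and a "vector" is simply a function from index to component, so an implicit, algorithmically defined structure is interchangeable with a stored one:

```python
class Ref:
    """A reference in the GEDANKEN sense: a value that possesses another value.
    Assignment always changes the relation between a reference and its value."""
    def __init__(self, value):
        self._value = value
    def get(self):
        return self._value
    def set(self, value):
        self._value = value

def explicit_vector(items):
    """A 'vector' is just a function from index to component."""
    return lambda i: items[i]

def implicit_squares():
    """A logically equivalent structure given by an algorithm, not stored data."""
    return lambda i: i * i

def total(vector, n):
    """Works with any vector, regardless of its internal representation."""
    return sum(vector(i) for i in range(n))

r = Ref(10)
r.set(r.get() + 1)                                # assignment through a reference
print(r.get())                                    # 11
print(total(explicit_vector([0, 1, 4, 9]), 4))    # 14
print(total(implicit_squares(), 4))               # 14, same result from an implicit structure
```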