A stably bounded hypergraph H is a hypergraph together with four color-bound functions s, t, a and b, each assigning positive integers to the edges. A vertex coloring of H is considered proper if each edge E has at least s(E) and at most t(E) different colors assigned to its vertices; moreover, each color occurs on at most b(E) vertices of E, and there exists a color which is repeated at least a(E) times inside E. The lower and the upper chromatic number of H are the minimum and the maximum possible number of colors, respectively, over all proper colorings. An interval hypergraph is a hypergraph whose vertex set admits a linear ordering such that each edge is a set of consecutive vertices in this order. We study the time complexity of testing colorability and of determining the lower and upper chromatic numbers. A complete solution is presented for interval hypergraphs without overlapping edges. The complexity depends both on the problem type and on the combination of color-bound functions applied, except that all three coloring problems are NP-hard for the function pair a, b and its extensions. For the tractable classes, linear-time algorithms are designed. It also depends on the problem type and function set whether the complexity jumps from polynomial to NP-hard when the instance is allowed to contain overlapping intervals. Comparison is facilitated by three handy tables, which also cover further structure classes. (C) 2012 Elsevier B.V. All rights reserved.
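As a concrete reading of the four color-bound constraints, the following sketch checks whether a coloring is proper on a single edge. The toy instance, vertex labels, and the helper name edge_is_proper are hypothetical, introduced only for illustration.

```python
from collections import Counter

def edge_is_proper(edge, coloring, s, t, a, b):
    """Check the color bounds for one edge E of a stably bounded hypergraph:
    at least s and at most t distinct colors on E, no color on more than
    b vertices of E, and some color repeated at least a times inside E."""
    counts = Counter(coloring[v] for v in edge)
    distinct = len(counts)
    max_multiplicity = max(counts.values())
    return s <= distinct <= t and a <= max_multiplicity <= b

# Hypothetical toy edge on vertices 0..3 with bounds s=2, t=3, a=2, b=2.
edge = [0, 1, 2, 3]
coloring = {0: 'x', 1: 'x', 2: 'y', 3: 'y'}
print(edge_is_proper(edge, coloring, s=2, t=3, a=2, b=2))  # True
```

A coloring of the whole hypergraph is proper when this test succeeds for every edge with that edge's own bounds.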
Complexity measures are used in a number of applications, including the extraction of information from data such as ecological time series, the detection of non-random structure in biomedical signals, the testing of random number generators, language recognition, and authorship attribution. Different complexity measures proposed in the literature, such as Shannon entropy, relative entropy, Lempel-Ziv, Kolmogorov and algorithmic complexity, are mostly ineffective in analyzing short sequences that are further corrupted with noise. To address this problem, we propose a new complexity measure, ETC, defined as the "Effort To Compress" the input sequence by a lossless compression algorithm. Here, we employ the lossless compression algorithm known as Non-Sequential Recursive Pair Substitution (NSRPS) and define ETC as the number of iterations needed for NSRPS to transform the input sequence into a constant sequence. We demonstrate the utility of ETC in two applications. ETC is shown to have better correlation with the Lyapunov exponent than Shannon entropy, even for relatively short and noisy time series. The measure also has a greater rate of success in the automatic identification and classification of short noisy sequences, compared to entropy and a popular measure based on Lempel-Ziv compression (implemented by Gzip).
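The ETC definition can be made concrete with a short sketch. The function below counts NSRPS substitution steps until the sequence becomes constant; it is a simplified illustration (pair frequencies are counted with overlaps and ties are broken arbitrarily), not the authors' reference implementation.

```python
from collections import Counter

def etc(sequence):
    """Effort To Compress: number of NSRPS iterations needed to turn the
    input into a constant sequence.  Each iteration replaces the most
    frequent adjacent pair (non-overlapping, left to right) by a fresh symbol."""
    seq = list(sequence)
    steps = 0
    fresh = 0
    while len(seq) > 1 and len(set(seq)) > 1:
        target = Counter(zip(seq, seq[1:])).most_common(1)[0][0]
        new_symbol = ('pair', fresh)      # symbol guaranteed not to collide
        fresh += 1
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == target:
                out.append(new_symbol)
                i += 2                    # consume the whole pair
            else:
                out.append(seq[i])
                i += 1
        seq = out
        steps += 1
    return steps

print(etc("0111"))  # 3 substitution steps with this sketch
```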
ISBN (Print): 9781479928941
A mixed radix algorithm for the in-place fast Fourier transform (FFT), which is broadly used in most embedded signal processing fields, can be explicitly expressed by an iterative equation based on the Cooley-Tukey algorithm. The expression can be applied to either decimation-in-time (DIT) or decimation-in-frequency (DIF) FFTs with ordered inputs. For many newly emerging low-power portable computing applications, such as mobile high-definition video compression and mobile fast and accurate satellite location, the existing methods are either resource-consuming or inflexible. In this paper, we propose a new addressing scheme for efficiently implementing mixed radix FFTs. In this scheme, we carefully design an accumulator that generates the access addresses for the operands as well as for the twiddle factors. The analytical results show that the proposed scheme reduces the algorithmic complexity while helping the designer efficiently realize an arbitrary FFT in an in-place architecture.
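The accumulator-based address generator itself is not detailed in the abstract, so the sketch below only shows the textbook mixed-radix digit-reversal mapping on which in-place Cooley-Tukey operand ordering is based; it is background, not the scheme proposed in the paper, and the names are illustrative.

```python
def mixed_radix_digit_reverse(index, radices):
    """Map an index to its mixed-radix digit-reversed counterpart for
    N = r0 * r1 * ... * r(m-1); in-place Cooley-Tukey FFTs use such a
    permutation between natural and scrambled orderings."""
    digits = []
    for r in radices:                  # extract least-significant digit first
        index, d = divmod(index, r)
        digits.append(d)
    rev = 0
    for d, r in zip(digits, radices):  # Horner evaluation of the reversed number
        rev = rev * r + d
    return rev

# Example: N = 12 decomposed with radices (3, 2, 2).
print([mixed_radix_digit_reverse(i, (3, 2, 2)) for i in range(12)])
# [0, 4, 8, 2, 6, 10, 1, 5, 9, 3, 7, 11]
```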
We solve a long-standing open problem concerning a discrete mathematical model which has various applications in computer science and several other fields, including frequency assignment and many other problems on resource allocation. A mixed hypergraph is a triple H = (X, C, D), where X is the set of vertices, and C and D are two set systems over X, the families of so-called C-edges and D-edges, respectively. A vertex coloring of a mixed hypergraph is proper if every C-edge has two vertices with a common color and every D-edge has two vertices with different colors. A mixed hypergraph is colorable if it has at least one proper coloring; otherwise it is uncolorable. The chromatic inversion of a mixed hypergraph H = (X, C, D) is defined as H^c = (X, D, C), obtained by exchanging the roles of the C-edges and D-edges. Since 1995 it has been an open problem whether there is a correlation between the colorability properties of a hypergraph and its chromatic inversion. In this paper we answer this question in the negative, proving that (provided P ≠ NP) there exists no polynomial-time algorithm to decide whether both H and H^c are colorable, or both are uncolorable. This theorem already holds for the restricted class of 3-uniform mixed hypergraphs (i.e., where every edge has exactly three vertices). The proof is based on a new polynomial-time algorithm for coloring a special subclass of 3-uniform mixed hypergraphs. An implementation in the C++ programming language has been tested. Further related decision problems are investigated, too.
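A minimal sketch of the proper-coloring test for a mixed hypergraph H = (X, C, D), following the definition above; the toy 3-uniform instance is hypothetical.

```python
def is_proper(coloring, c_edges, d_edges):
    """Proper coloring of a mixed hypergraph: every C-edge must contain two
    vertices sharing a color, every D-edge two vertices with different colors."""
    for E in c_edges:
        colors = [coloring[v] for v in E]
        if len(set(colors)) == len(colors):     # rainbow C-edge: violated
            return False
    for E in d_edges:
        if len({coloring[v] for v in E}) == 1:  # monochromatic D-edge: violated
            return False
    return True

# Hypothetical 3-uniform instance on vertices 1..4.
c_edges = [(1, 2, 3)]
d_edges = [(2, 3, 4)]
print(is_proper({1: 'a', 2: 'a', 3: 'b', 4: 'b'}, c_edges, d_edges))  # True
```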
In this paper we have studied the performance of chaotically encrypted rate-1/n convolutional encoders. The design goal is to achieve a crypto-coding system that possesses both error-correction and encryption features. The proposed system is able to achieve reasonable error-correction performance for the proposed configuration with a given encoder rate and constraint length. The performance of each system was determined by calculating the algorithmic complexity of the system outputs, and the complexity of each decoding algorithm is compared to the results of the alternative algorithm. Numerical evidence indicates that the algorithmic complexity associated with particular rate-1/n convolutional encoders increases as the constraint length increases, while the error-correcting capacity of the decoder expands. Both the code design and the constraint length are architectural control parameters that have a significant effect on system performance.
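For reference, a plain rate-1/2 convolutional encoder with constraint length K = 3 can be written as below. The generator taps (octal 7, 5) are a common textbook choice, and the chaotic encryption layer studied in the paper is not modeled here.

```python
def conv_encode(bits, generators=(0b111, 0b101), constraint_length=3):
    """Rate-1/n convolutional encoder, n = len(generators).  The shift
    register keeps the current bit and the K-1 previous ones; each output
    bit is the parity of the register bits selected by one generator."""
    state, out = 0, []
    mask = (1 << constraint_length) - 1
    for bit in bits:
        state = ((state << 1) | bit) & mask
        for g in generators:
            out.append(bin(state & g).count("1") % 2)   # parity of tapped bits
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```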
ISBN (Print): 9781479900497
The general task of network reliability analysis is this: given some probabilistic information about the possible failures of network components, we want to compute a global reliability metric for the network. This general task can take many different forms, depending on the specific reliability metric. Numerous methods are known to accomplish it in various situations, precisely or approximately, but usually at significant algorithmic complexity. The issue we address is what happens if the input data is unreliable, i.e., known only with limited accuracy. We propose a mathematical approach that can estimate and bound the resulting error in terms of the input inaccuracy. It is particularly interesting that the method applies to a very broad class of models, independently of the actual reliability model chosen from the class. This feature allows wide applicability of our method and also makes it possible to handle uncertainties in the considered model itself, not only in the input data.
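The underlying sensitivity issue can be shown on a toy example: a small inaccuracy in one link probability shifts the computed reliability metric. The network, probabilities, and function below are made up for illustration and do not represent the error-bounding method proposed in the paper.

```python
def two_terminal_reliability(p):
    """Reliability of a toy network with two independent s-t paths
    (s-a-t and s-b-t); p maps link names to up-probabilities."""
    path1 = p["sa"] * p["at"]
    path2 = p["sb"] * p["bt"]
    return 1 - (1 - path1) * (1 - path2)    # at least one path operational

nominal = {"sa": 0.9, "at": 0.9, "sb": 0.8, "bt": 0.8}
perturbed = dict(nominal, sa=0.85)          # 0.05 inaccuracy in one input
print(two_terminal_reliability(nominal),    # about 0.932
      two_terminal_reliability(perturbed))  # about 0.915
```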
ISBN (Print): 9781479929023
The design and the debugging of large distributed AI systems require abstraction tools to build tractable macroscopic descriptions. Data aggregation provides such tools by partitioning the system dimensions into aggregated pieces of information. Since this process leads to information losses, the partitions should be chosen with the greatest caution. While the number of possible partitions grows exponentially with the size of the system, this paper proposes an algorithm that exploits exogenous constraints on the system semantics in order to find the best partitions in linear or polynomial time. Two constrained sets of partitions (hierarchical and ordered) are detailed and applied to the spatial and temporal aggregation of an agent-based model of international relations. The algorithm succeeds in providing meaningful high-level abstractions for the system analysis.
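One standard way to search ordered partitions of a sequence in polynomial time is a dynamic program over contiguous blocks, sketched below with a generic user-supplied cost function; the paper's actual information-loss measure and constraint handling are not reproduced here.

```python
def best_ordered_partition(cost, n, k):
    """Split positions 0..n-1 into at most k contiguous blocks minimizing
    the summed block cost; cost(i, j) is the loss of aggregating slice [i, j)."""
    INF = float("inf")
    # best[b][j]: minimal cost of covering the first j items with b blocks
    best = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[None] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0.0
    for b in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(j):
                c = best[b - 1][i] + cost(i, j)
                if c < best[b][j]:
                    best[b][j], cut[b][j] = c, i
    b = min(range(1, k + 1), key=lambda m: best[m][n])    # best block count
    blocks, j = [], n
    while b > 0:                                          # walk the cut points back
        i = cut[b][j]
        blocks.append((i, j))
        j, b = i, b - 1
    return list(reversed(blocks))

# Example: aggregate a series into at most 3 blocks, cost = within-block variance.
data = [1, 1, 1, 5, 5, 9, 9, 9]
def var_cost(i, j):
    xs = data[i:j]
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)
print(best_ordered_partition(var_cost, len(data), 3))  # [(0, 3), (3, 5), (5, 8)]
```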
The 75th anniversary of Turing's seminal paper and his centennial occur in 2011 and 2012, respectively. It is natural to review and assess Turing's contributions in diverse fields in the light of new developments that his thought has triggered in many scientific communities. Here, the main idea is to discuss how the work of Turing allows us to change our views on the foundations of Mathematics, much as quantum mechanics changed our conception of the world of Physics. Basic notions like computability and universality are discussed in a broad context, placing special emphasis on how the notion of complexity can be given a precise meaning after Turing, i.e., not just qualitatively but also quantitatively. Turing's work is given some historical perspective with respect to some of his precursors, contemporaries, and the mathematicians who took his ideas further.
Given two comparative maps, that is, two sequences of markers each representing a genome, the Maximal Strip Recovery problem (MSR) asks to extract a largest sequence of markers from each map such that the two extracted sequences are decomposable into non-intersecting strips (or synteny blocks). This aims at defining a robust set of synteny blocks between different species, which is key to understanding the evolutionary process since their last common ancestor. In this paper, we add a fundamental constraint to the initial problem, which expresses the biologically motivated need to bound the number of intermediate (non-selected) markers between two consecutive markers in a strip. We therefore introduce the problem delta-gap-MSR, where delta is a (usually small) non-negative integer that upper-bounds the number of non-selected markers between two consecutive markers in a strip. We show that, if we restrict ourselves to comparative maps without duplicates, the problem is polynomial for delta = 0, NP-complete for delta = 1, and APX-hard for delta >= 2. For comparative maps with duplicates, the problem is APX-hard for all delta >= 0. (C) 2012 Elsevier B.V. All rights reserved.
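The delta-gap condition can be illustrated with a small checker for a single map: it verifies that a candidate strip's markers appear in order (or reversed) with at most delta intermediate markers between consecutive ones. This is a simplification that treats every intermediate marker as non-selected, assumes maps without duplicates, and uses illustrative names.

```python
def strip_respects_gap(genome_map, strip, delta):
    """Check, in one map, that the markers of `strip` occur in increasing or
    decreasing position order with at most `delta` markers between any two
    consecutive strip markers."""
    pos = {marker: i for i, marker in enumerate(genome_map)}
    if any(m not in pos for m in strip):
        return False
    idx = [pos[m] for m in strip]
    if idx != sorted(idx):
        idx = list(reversed(idx))            # strips may also appear reversed
        if idx != sorted(idx):
            return False
    return all(b - a - 1 <= delta for a, b in zip(idx, idx[1:]))

# Strip (2, 4) skips marker 3 in this map: valid for delta >= 1, not for delta = 0.
print(strip_respects_gap([1, 2, 3, 4, 5, 6], (2, 4), delta=0))  # False
print(strip_respects_gap([1, 2, 3, 4, 5, 6], (2, 4), delta=1))  # True
```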
We present a lattice algorithm specifically designed for some classical applications of lattice reduction. The applications are for lattice bases with a generalized knapsack-type structure, where the target vectors have bounded depth. For such applications, the complexity of the algorithm improves on traditional lattice reduction by replacing some dependence on the bit-length of the input vectors with a dependence on the bound for the output vectors. If the bit-length of the target vectors is unrelated to the bit-length of the input, then our algorithm is only linear in the bit-length of the input entries, which is an improvement over the quadratic complexity of floating-point LLL algorithms. To illustrate the usefulness of this algorithm, we show that a direct application to factoring univariate polynomials over the integers leads to the first complexity bound improvement since 1984. A second application is algebraic number reconstruction, where a new complexity bound is obtained as well.
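As background for the knapsack-type structure mentioned above, the sketch below builds the classical subset-sum lattice basis whose reduction (for example by LLL) can expose a short vector encoding a solution; it is the standard textbook construction, not the specialized algorithm of the paper.

```python
def knapsack_lattice_basis(weights, target):
    """Rows are (e_i | N*a_i) for each weight a_i, plus a last row
    (0, ..., 0 | N*target).  Summing the rows of a subset that hits the
    target and subtracting the last row yields a short lattice vector
    whose first n coordinates form the 0/1 solution."""
    n = len(weights)
    N = n + 1                        # scaling factor on the last column
    basis = []
    for i, a in enumerate(weights):
        row = [0] * n + [N * a]
        row[i] = 1
        basis.append(row)
    basis.append([0] * n + [N * target])
    return basis

# Subset-sum instance: 3 + 5 + 7 = 15.
for row in knapsack_lattice_basis([3, 5, 7, 11], target=15):
    print(row)
```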