Harmful Internet hijacking incidents show how fragile interdomain routing is. In particular, the Border Gateway Protocol (BGP), which is used to exchange routing information between Internet entities, called Autonomous Systems (ASes), has proved to be prone to attacks launched by a single malicious AS. Recent research contributions have pointed out that even S-BGP, the secure variant of BGP that is being deployed, is not fully able to blunt traffic attraction attacks. Given a traffic flow between two ASes, we study how difficult it is for a malicious AS to devise a strategy for hijacking or intercepting that flow. The goal of the attack is to attract the traffic flow towards the malicious AS. While in the hijacking attack connectivity between the endpoints of the flow may be disrupted, in the interception attack connectivity must be maintained. We show that this problem marks a sharp difference between BGP and S-BGP: while it is solvable, under reasonable assumptions, in polynomial time for the type of attacks that are usually performed in BGP, it is NP-hard for S-BGP. Our study has several by-products. For example, we solve a problem left open in the literature, stating when performing a hijacking in S-BGP is equivalent to performing an interception. (C) 2015 Elsevier B.V. All rights reserved.
In this paper, the computational complexity of several variants of the problem of isothermic DNA sequencing by hybridization is analyzed. Isothermic sequencing is a recent method in which isothermic oligonucleotide libraries are used during hybridization with an unknown DNA fragment. The variants of the isothermic DNA sequencing problem with errors in the hybridization data, either negative or positive, are proved to be strongly NP-hard. On the other hand, a polynomial-time algorithm is proposed for the ideal case with no errors. (c) 2005 Elsevier B.V. All rights reserved.
Hilbert's Irreducibility Theorem is applied to find upper bounds on the time complexities of various decision problems for arithmetical sentences, and the following results are proved: 1. The decision problem of ∀∃ sentences over an algebraic number field is in P. 2. The decision problem of ∀∃ sentences over the collection of all fields with characteristic 0 is in P. 3. The decision problem of ∀∃ sentences over a function field with characteristic p is polynomial-time reducible to the factorization of polynomials over Z_p. 4. The decision problem of ∀∃ sentences over the collection of all fields with characteristic p is polynomial-time reducible to the factorization of polynomials over Z_p. 5. The decision problem of ∀∃ sentences over the collection of all fields is polynomial-time reducible to the factorization of integers over Z and the factorization of polynomials over finite fields. (c) 2008 Published by Elsevier Inc.
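For concreteness, a ∀∃ sentence has the general shape sketched below; the particular polynomials and the restriction to equations alone are illustrative assumptions, not the paper's exact sentence class:

```latex
\forall x_1 \cdots \forall x_m \, \exists y_1 \cdots \exists y_n \;
  \bigwedge_{i} f_i(x_1,\dots,x_m,\,y_1,\dots,y_n) = 0
```

Deciding such a sentence over a field asks whether, for every choice of values for the universally quantified variables, there exist values for the existentially quantified variables satisfying all the polynomial conditions.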
This paper presents complexity results about the satisfiability of modal Horn clauses for several modal propositional logics. Almost all these results are negative, in the sense that restricting the input formula to modal Horn clauses does not decrease the inherent complexity of the satisfiability problem. We first show that, when restricted to modal Horn clauses, the satisfiability problem for any modal logic between K and S4 or between K and B is PSPACE-hard. As a result, the satisfiability of modal Horn clauses, as well as the satisfiability of unrestricted formulas, for any of K, T, B and S4 is PSPACE-complete. This result refutes the expectation (Farinas del Cerro and Penttonen 1987) of getting a polynomial-time algorithm for the satisfiability of modal Horn clauses for these logics, as long as P ≠ PSPACE. Next, we consider S4.3 and extensions of K5, including K5, KD5, K45, KD45 and S5, the satisfiability problem for each of which is known to be NP-complete in general. We show that for each extension of K5 a polynomial-time algorithm for the satisfiability of modal Horn clauses can be obtained; but for S4.3, together with some linear tense logics closely related to S4.3, such as CL, SL and PL, the satisfiability of modal Horn clauses remains NP-complete.
An Ordered Binary Decision Diagram (BDD) is a graph representation of a Boolean function. Owing to its good properties, BDDs are widely used in various applications. In this paper, we investigate the computational complexity of basic operations on BDDs. We consider two important operations: reduction of a BDD and binary Boolean operations based on BDDs. This paper shows that both the reduction of a BDD and the binary Boolean operations based on BDDs are NC1-reducible to REACHABILITY; that is, both problems belong to NC2. In order to extend the results to BDDs with output inverters, we also consider the transformations between BDDs and BDDs with output inverters. We show that both of these transformations are also NC1-reducible to REACHABILITY.
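As a concrete illustration of the first operation, here is a minimal sequential sketch of BDD reduction: eliminate nodes whose two branches coincide and merge nodes with identical (variable, low, high) triples. The paper's contribution concerns the parallel (NC) complexity of this operation, not this algorithm; the node encoding `id -> (var, low, high)` with terminals 0 and 1, and the function name, are our own assumptions.

```python
def reduce_bdd(nodes, root):
    """Reduce an ordered BDD.

    `nodes` maps a node id to a triple (var, low, high); the ids 0 and 1
    are the terminal nodes. Returns (reduced_nodes, new_root).
    """
    unique = {}            # (var, low, high) -> canonical node id
    remap = {0: 0, 1: 1}   # old id -> canonical id (terminals map to themselves)

    def visit(u):
        if u in remap:
            return remap[u]
        var, low, high = nodes[u]
        lo, hi = visit(low), visit(high)
        if lo == hi:
            # Redundant test: both branches lead to the same node.
            remap[u] = lo
        else:
            key = (var, lo, hi)
            if key not in unique:
                unique[key] = u        # first node with this triple is canonical
            remap[u] = unique[key]     # duplicates are merged into it
        return remap[u]

    new_root = visit(root)
    reduced = {cid: key for key, cid in unique.items()}
    return reduced, new_root
```

For example, a diagram in which an x1-node has two structurally identical x2-children collapses to the single x2-node, since after merging the duplicates the x1-test becomes redundant.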
Experimental results based on offline processing reported at optical conferences increasingly rely on neural network-based equalizers for accurate data recovery. However, achieving low-complexity implementations that are efficient for real-time digital signal processing remains a challenge. This paper addresses this critical need by proposing a systematic approach to designing and evaluating low-complexity neural network equalizers. Our approach focuses on three key phases: training, inference, and hardware synthesis. We provide a comprehensive review of existing methods for reducing complexity in each phase, enabling informed choices during design. For the training and inference phases, we introduce a novel methodology for quantifying complexity. This includes new metrics that bridge software-to-hardware considerations, revealing the relationship between complexity and specific neural network architectures and hyperparameters. We guide the calculation of these metrics for both feed-forward and recurrent layers, highlighting the appropriate choice depending on the application's focus (software or hardware). Finally, to demonstrate the practical benefits of our approach, we showcase how the computational complexity of neural network equalizers can be significantly reduced and measured for both teacher (biLSTM+CNN) and student (1D-CNN) architectures in different scenarios. This work aims to standardize the estimation and optimization of computational complexity for neural networks applied to real-time digital signal processing, paving the way for more efficient and deployable optical communication systems.
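As a toy illustration of a software-oriented complexity metric of the kind discussed, the following counts real multiplications for a dense (feed-forward) layer and a 1D convolutional layer. The function names and the simplifications (stride 1, 'same' padding, no activation cost, no recurrent-gate terms as a biLSTM would require) are our assumptions, not the paper's metric definitions.

```python
def dense_mults(n_in, n_out):
    # One multiplication per weight for each input vector processed.
    return n_in * n_out

def conv1d_mults(seq_len, ch_in, ch_out, kernel_size):
    # With 'same' padding and stride 1 there is one output position per
    # input sample; each output value costs ch_in * kernel_size multiplies.
    return seq_len * ch_out * ch_in * kernel_size
```

Dividing such counts by the number of recovered symbols gives a per-symbol figure that can be compared across architectures before any hardware synthesis.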
The theory of computational complexity has some interesting links to physics, in particular to quantum computing and statistical mechanics. This article contains an informal introduction to this theory and its links to physics.
Spreading processes on networks are often analyzed to understand how the outcome of the process (e.g. the number of affected nodes) depends on structural properties of the underlying network. Most available results are ensemble averages over certain interesting graph classes, such as random graphs or graphs with a particular degree distribution. In this paper, we focus instead on determining the expected spreading size and the probability of large spreadings for a single (but arbitrary) given network, and study the computational complexity of these problems using reductions from well-known network reliability problems. We show that computing both quantities exactly is intractable, but that the expected spreading size can be efficiently approximated with Monte Carlo sampling. When nodes are weighted to reflect their importance, the problem becomes as hard as the s-t reliability problem, for which no efficient randomized approximation scheme is known to date. Finally, we give a formal complexity-theoretic argument for why there is most likely no randomized constant-factor approximation for the probability of large spreadings, even in the unweighted case. A hybrid Monte Carlo sampling algorithm is proposed that resorts to specialized s-t reliability algorithms for accurately estimating the infection probability of those nodes that are rarely affected by the spreading process.
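A minimal sketch of the Monte Carlo estimator for the expected spreading size, assuming an independent-cascade-style process in which each edge transmits the infection with probability p. The graph encoding (adjacency dict), the specific spreading model, and the function name are illustrative assumptions, not the paper's exact setting.

```python
import random

def expected_spread(adj, seed, p, trials=10000, rng=None):
    """Estimate the expected number of affected nodes by simulation.

    adj: dict mapping each node to a list of its neighbors.
    seed: the initially affected node.
    p: per-edge transmission probability.
    """
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        active = {seed}
        frontier = [seed]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj.get(u, []):
                    # Each edge gets one independent transmission attempt.
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials
```

Since the spreading size is bounded by the number of nodes, standard concentration bounds make the number of trials needed for a given additive accuracy polynomial, which is the intuition behind the efficient-approximation claim.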
Data mining aims to analyze all the data in a huge database and to obtain useful information for database users. One of the well-studied problems in data mining is the search for meaningful association rules in a market basket database, which contains massive amounts of sales transactions. The problem of mining meaningful association rules is to find all the large itemsets first, and then to construct meaningful association rules from the large itemsets. In our previous work, we have shown that it is NP-complete to decide whether there exists a large itemset with a given size. Also, we have proposed a subclass of databases, called k-sparse databases, for which we can efficiently find all the large itemsets. Intuitively, k-sparsity of a database means that the supports of itemsets of size k or more are sufficiently low in the database. In this paper, we introduce the notion of (k, c)-sparsity, which is strictly weaker than the k-sparsity of our previous work. The value of c represents a degree of sparsity. Using (k, c)-sparsity, we propose a larger subclass of databases for which we can still efficiently find all the large itemsets. Next, we propose alternative measures to the support. For each measure, an itemset is called highly co-occurrent if the value indicating the correlation among the items exceeds a given threshold. In this paper, we define the highly co-occurrent itemset problem formally as deciding whether there exists a highly co-occurrent itemset with a given size, and show that the problem is NP-complete under any of these measures. Furthermore, based on the notion of (k, c)-sparsity, we propose subclasses of databases for which we can efficiently find all the highly co-occurrent itemsets.
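For reference, "support" and "large itemset" can be sketched as follows. This brute-force enumeration is exponential in the itemset size, consistent with the NP-completeness result for the decision version; the encoding (transactions as lists of items) and the names are our own, and the sparsity-based efficient algorithms of the paper are not reproduced here.

```python
from itertools import combinations

def large_itemsets(transactions, size, min_support):
    """Return all itemsets of the given size whose support (the fraction of
    transactions containing every item of the set) meets min_support."""
    items = sorted({i for t in transactions for i in t})
    n = len(transactions)
    result = []
    for cand in combinations(items, size):
        s = set(cand)
        support = sum(1 for t in transactions if s <= set(t)) / n
        if support >= min_support:
            result.append(cand)
    return result
```

A highly co-occurrent itemset is defined analogously, with the support replaced by an alternative correlation measure compared against the threshold.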
This paper is concerned with a matrix inequality problem which arises in fixed-order output feedback control design. This problem involves finding two symmetric positive definite matrices X and Y such that each satisfies a linear matrix inequality and XY = I. It is well known that many control problems, such as fixed-order output feedback stabilization, H-infinity control, guaranteed H-2 control, and mixed H-2/H-infinity control, can all be converted into the matrix inequality problem above, including static output feedback problems as a special case. We show, however, that this matrix inequality problem is NP-hard. (C) 1997 Elsevier Science B.V.