In physics, systems having three parts are typically much more difficult to analyze than those having just two. Even in classical mechanics, predicting the motion of three interacting celestial bodies remains an insurmountable challenge, while the analogous two-body problem has an elementary solution. It is as if, just by adding a third party, a fundamental change occurs in the structure of the problem that renders it unsolvable. In this thesis, we demonstrate how such an effect is likewise present in the theory of quantum entanglement. In fact, the complexity differences between two-party and three-party entanglement become quite conspicuous when one compares the difficulty of deciding which state transformations are possible for these systems when no additional entanglement is consumed in the transformation process. We examine this entanglement transformation question and its variants in the language of computational complexity theory, a powerful subject that formalizes the concept of problem difficulty. Deciding the feasibility of a specified bipartite transformation is relatively easy: the task belongs to the complexity class P. For tripartite systems, on the other hand, we find the problem to be NP-hard, meaning that its solution is at least as hard as the solution to some of the most difficult problems humans have encountered. One can then rigorously defend the assertion that a fundamental complexity difference exists between bipartite and tripartite entanglement, since, unlike the former, the full range of forms realizable by the latter is incalculable (assuming P ≠ NP). However, similar to the three-body celestial problem, when one examines a special subclass of the problem (invertible transformations on systems having at least one qubit subsystem), we prove that the problem can be solved efficiently. As a hybrid of the two questions, we find that the question of tripartite-to-bipartite transformations can be solved by an efficient randomized algorithm. Our results are obtain
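For context on why the bipartite case is tractable: by Nielsen's theorem, a bipartite pure state can be transformed into another by LOCC exactly when its Schmidt coefficient vector is majorized by that of the target, a condition checkable by sorting and comparing prefix sums. A minimal sketch of this polynomial-time test (Python; the function name and toy probability vectors are illustrative, not from the thesis):

```python
import numpy as np

def locc_feasible(p, q, tol=1e-12):
    """Check whether a bipartite pure state with squared Schmidt
    coefficients p can reach one with coefficients q under LOCC,
    via Nielsen's majorization criterion: p is majorized by q."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]  # descending order
    q = np.sort(np.asarray(q, dtype=float))[::-1]
    k = max(len(p), len(q))
    p = np.pad(p, (0, k - len(p)))  # pad shorter vector with zeros
    q = np.pad(q, (0, k - len(q)))
    # p is majorized by q iff every prefix sum of p is <= that of q
    return bool(np.all(np.cumsum(p) <= np.cumsum(q) + tol))

# A maximally entangled two-qutrit state converts to a less
# entangled one, but not vice versa:
print(locc_feasible([1/3, 1/3, 1/3], [0.5, 0.3, 0.2]))  # True
print(locc_feasible([0.5, 0.3, 0.2], [1/3, 1/3, 1/3]))  # False
```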
In this dissertation, we study the dirty-data evaluation and repairing problem in relational databases. Dirty data is usually inconsistent, inaccurate, incomplete, and stale. Existing methods and theories describe consistency using integrity constraints, such as data dependencies. However, integrity constraints are good at detecting inconsistency but not at evaluating its degree, and they cannot guide data repair. This dissertation first studies the computational complexity of, and algorithms for, database inconsistency evaluation. We define and use the minimum tuple deletion to evaluate database inconsistency. For this minimum tuple deletion problem, we study the relationship between the size of the rule set and the computational complexity. We show that it is NP-hard to approximate the minimum tuple deletion within 17/16, even with only three functional dependencies and four involved attributes. A near-optimal approximation algorithm for computing the minimum tuple deletion is proposed with a ratio of 2 − 1/2^r, where r is the number of given functional dependencies. To guide data repair, this dissertation also investigates a repairing method based on query feedback, and formally studies two decision problems, the functional-dependency-restricted deletion and insertion propagation problems, corresponding to deletion and insertion feedback. A comprehensive analysis of both the combined and the data complexity of these cases is provided, considering different relational operators and feedback types. We identify the intractable and tractable cases to picture the complexity hierarchy of these problems, and provide efficient algorithms for the tractable cases. Two improvements are proposed: one computes a minimum vertex cover of the conflict graph to improve the upper bound for the tuple deletion problem, and the other gives a better dichotomy for the deletion and insertion propagation problems in the absence o
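To illustrate the conflict-graph connection mentioned above: for functional dependencies, every violation involves a pair of tuples, so minimum tuple deletion coincides with minimum vertex cover of the conflict graph, and the classic maximal-matching heuristic already gives a 2-approximation. A minimal sketch (Python; the relation, FD encoding, and helper names are hypothetical):

```python
from itertools import combinations

def fd_conflicts(tuples, fds):
    """Conflict-graph edges: pairs of tuples that jointly violate some
    functional dependency lhs -> rhs (attributes given as index tuples)."""
    edges = set()
    for (i, t1), (j, t2) in combinations(enumerate(tuples), 2):
        for lhs, rhs in fds:
            if all(t1[a] == t2[a] for a in lhs) and \
               any(t1[a] != t2[a] for a in rhs):
                edges.add((i, j))
    return edges

def cover_by_matching(edges):
    """Classic 2-approximation: take both endpoints of a maximal
    matching. Deleting the covered tuples removes every violation."""
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched |= {u, v}
            cover |= {u, v}
    return cover

# Hypothetical relation R(A, B) with the FD A -> B:
rel = [(1, 'x'), (1, 'y'), (2, 'z')]
print(cover_by_matching(fd_conflicts(rel, [((0,), (1,))])))  # {0, 1}
```

Here deleting a single tuple would already suffice; the dissertation's improved algorithms tighten exactly this kind of gap.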
We are interested in the computation of arbitrage conditions in financial markets with friction. We consider a deterministic model with a finite number of financial assets and a finite number of possible states of nature. T...
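As background for the finite-state setting described above (the paper's frictional model is not shown in this preview), detecting arbitrage in a frictionless one-period market reduces to a linear-programming feasibility check. A minimal sketch, assuming a standard payoff-matrix formulation; all names and numbers are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def frictionless_arbitrage(prices, payoffs):
    """Look for an arbitrage portfolio theta with payoffs @ theta >= 0
    in every state and prices @ theta < 0 today. The paper's model adds
    friction, which this frictionless sketch does not capture.
    prices: (n,) asset prices; payoffs: (m, n) state-by-asset payoffs."""
    n = len(prices)
    # minimize prices @ theta  subject to  -payoffs @ theta <= 0;
    # box bounds keep the LP bounded so a negative optimum is a witness
    res = linprog(c=prices, A_ub=-payoffs,
                  b_ub=np.zeros(payoffs.shape[0]),
                  bounds=[(-1, 1)] * n, method="highs")
    return bool(res.success and res.fun < -1e-9)

prices = np.array([1.0, 0.5])      # asset 1 is underpriced: it pays
payoffs = np.array([[1.0, 1.0],    # the same as asset 0 in
                    [1.0, 1.0]])   # every state
print(frictionless_arbitrage(prices, payoffs))  # True
```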
ISBN:
(Print) 1932432590
In this paper we study a set of problems that are of considerable importance to Statistical Machine Translation (SMT) but which have not been addressed satisfactorily by the SMT research community. Over the last decade, a variety of SMT algorithms have been built and empirically tested, whereas little is known about the computational complexity of some of the fundamental problems of SMT. Our work aims at providing useful insights into the computational complexity of those problems. We prove that while IBM Models 1-2 are conceptually and computationally simple, computations involving the higher (and more useful) models are hard. Since it is unlikely that there exists a polynomial-time solution for any of these hard problems (unless P = NP and P^#P = P), our results highlight and justify the need for developing polynomial-time approximations for these computations. We also discuss some practical ways of dealing with complexity.
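To make the Model 1 tractability concrete: the sum over all (l+1)^m alignments factorizes into a product of per-word sums, the standard textbook identity P(f|e) = ε/(l+1)^m · ∏_j Σ_i t(f_j|e_i). A minimal sketch (Python; the translation table and word choices are made-up toy values, not from the paper):

```python
def model1_likelihood(f_words, e_words, t, eps=1.0):
    """IBM Model 1: the sum over all (l+1)^m alignments factorizes,
    so P(f|e) is computable in O(l*m) time -- the 'easy' case the
    paper contrasts with the higher models.
    t: dict mapping (f_word, e_word) -> translation probability.
    e_words should include the NULL token at position 0."""
    l, m = len(e_words) - 1, len(f_words)
    prob = eps / (l + 1) ** m
    for fw in f_words:
        prob *= sum(t.get((fw, ew), 0.0) for ew in e_words)
    return prob

# Toy numbers (hypothetical translation table):
t = {('maison', 'house'): 0.8, ('maison', 'NULL'): 0.05,
     ('la', 'the'): 0.7, ('la', 'NULL'): 0.1}
print(model1_likelihood(['la', 'maison'], ['NULL', 'the', 'house'], t))
```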
Computational complexity studies the intrinsic difficulty of solving mathematically posed problems. Discrete computational complexity studies discrete problems and often uses the Turing machine model of computation. C...
ISBN:
(Print) 9783959772013
We initiate the study of the computational complexity of graph coverings, aka locally bijective graph homomorphisms, for graphs with semi-edges. The notion of a graph covering is a discretization of coverings between surfaces or topological spaces, a notion well known and deeply studied in classical topology. Graph covers have found applications in discrete mathematics for constructing highly symmetric graphs, and in computer science in the theory of local computations. In 1991, Abello et al. asked for a classification of the computational complexity of deciding if an input graph covers a fixed target graph, in the ordinary setting (of graphs with only edges). Although many general results are known, the full classification is still open. In spite of that, we propose to study the more general case of covering graphs composed of normal edges (including multiedges and loops) and so-called semi-edges. Semi-edges are becoming increasingly popular in modern topological graph theory, as well as in mathematical physics. They also naturally occur in the local computation setting, since they are lifted to matchings in the covering graph. We show that the presence of semi-edges makes the covering problem considerably harder; e.g., it is no longer sufficient to specify the vertex mapping induced by the covering, but one necessarily has to deal with the edge mapping as well. We show some solvable cases and, in particular, completely characterize the complexity of the already very nontrivial problem of covering one- and two-vertex (multi)graphs with semi-edges. Our NP-hardness results are proven for simple input graphs, and in the case of regular two-vertex target graphs, even for bipartite ones. We remark that our new characterization results also strengthen previously known results for covering graphs without semi-edges, and they in turn apply to an infinite class of simple target graphs with at most two vertices of degree more than two. Some of the results are moreover proven in a m
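For readers unfamiliar with covering projections: in the ordinary setting (no semi-edges), a vertex map is a covering iff it maps each neighbourhood bijectively onto the neighbourhood of the image vertex, which is easy to verify for a given candidate map. The sketch below checks this for simple graphs (Python; the adjacency-dict encoding and names are our own). As the abstract notes, with semi-edges such a vertex-level test no longer suffices.

```python
from collections import Counter

def is_covering_map(adj_G, adj_H, p):
    """Check that the vertex map p: V(G) -> V(H) is a covering
    projection for SIMPLE graphs without semi-edges: around every
    vertex v, p must map the neighbourhood of v bijectively onto
    the neighbourhood of p(v)."""
    for v, nbrs in adj_G.items():
        if Counter(p[u] for u in nbrs) != Counter(adj_H[p[v]]):
            return False
    return True

# The 6-cycle covers the triangle via v -> v mod 3:
C6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
K3 = {i: [(i - 1) % 3, (i + 1) % 3] for i in range(3)}
print(is_covering_map(C6, K3, {i: i % 3 for i in range(6)}))  # True
```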
ISBN:
(Print) 9781581138580
Computing frequent itemsets is one of the most prominent problems in data mining. We introduce a new, related problem, called FREQSAT: given some itemset-interval pairs, does there exist a database such that for every pair the frequency of the itemset falls in the interval? It is shown in this paper that FREQSAT is not finitely axiomatizable and that it is NP-complete. We also study cases in which other characteristics of the database are given as well. These characteristics can complicate FREQSAT even more. For example, when the maximal number of duplicates of a transaction is known, FREQSAT becomes PP-hard. We describe applications of FREQSAT in frequent itemset mining algorithms and privacy in data mining.
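As a concrete illustration of the problem statement (the encoding is ours, not the paper's): FREQSAT asks for the existence of a database meeting all frequency intervals; verifying a candidate database is straightforward, as the sketch shows, while the NP-completeness concerns deciding existence.

```python
def satisfies(db, constraints):
    """Check whether a transaction database (list of item sets) is a
    witness for a FREQSAT instance: for every (itemset, lo, hi) triple,
    the relative frequency of transactions containing the itemset must
    fall within [lo, hi]."""
    n = len(db)
    for itemset, lo, hi in constraints:
        freq = sum(itemset <= t for t in db) / n  # <= is subset test
        if not (lo <= freq <= hi):
            return False
    return True

# Instance: freq({a}) in [0.5, 1], freq({a, b}) in [0, 0.25]
cons = [({'a'}, 0.5, 1.0), ({'a', 'b'}, 0.0, 0.25)]
db = [{'a'}, {'a'}, {'a', 'b'}, {'c'}]
print(satisfies(db, cons))  # True: freq({a}) = 0.75, freq({a,b}) = 0.25
```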
In this paper, we develop a new one-dimensional chaotic map, completely based on the logistic map: the "New Logistic Map (NLM)"; this new map expands the definition domain of the logistic map and its utilization. Th...
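For reference (the NLM itself is not specified in this preview), the classical logistic map being extended is x_{n+1} = r·x_n·(1 − x_n) on [0, 1]; a minimal iteration sketch in Python:

```python
def logistic(x, r=4.0):
    """Classical logistic map x_{n+1} = r*x*(1 - x), defined on [0, 1].
    The NLM abstract above reportedly extends this definition domain;
    its exact form is not included in the preview."""
    return r * x * (1 - x)

x = 0.2  # arbitrary seed in (0, 1)
for _ in range(5):
    x = logistic(x)
    print(round(x, 6))
```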
In multivariate interpolation problems, an increase in both the number of independent variables of the sought function and the number of nodes appearing in the data set causes computational and mathematical difficulties....