It is known that a nontrivial automorphism of a given graph can be computed using any oracle that returns a pair of vertices (u, v) such that u is mapped to v by some nontrivial automorphism. In this paper, we consider a weaker oracle that acts as follows. For a given graph, the oracle returns a pair (v, b) of a vertex v and a bit b ∈ {0, 1}, with the promise that if it returns (v, 0), then the vertex v is fixed by some nontrivial automorphism, while if it returns (v, 1), then the vertex v is moved by some nontrivial automorphism, provided that the given graph has a nontrivial automorphism. We note that the oracle may return an arbitrary pair as its answer in case the given graph has no nontrivial automorphism. We then show the stronger result that such an oracle is still powerful enough to compute a nontrivial automorphism. We also show that a similar result holds for RightGA, a GA-complete problem. We further investigate the computational complexity of computing a partial solution for PrefixGA, which is known to be GI-complete. For this problem, we show that, for any oracle similar to the one mentioned above, the oracle does not help us to solve PrefixGA unless GI ≤_T^p GA. (C) 2009 Elsevier B.V. All rights reserved.
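The oracle's promise can be made concrete with a brute-force sketch (the function names and the interface are our own, not from the paper): for a tiny graph we enumerate all vertex permutations, find a nontrivial automorphism if one exists, and answer (v, 0) or (v, 1) accordingly.

```python
from itertools import permutations

def nontrivial_automorphisms(n, edges):
    """Yield all nontrivial automorphisms of a graph on vertices 0..n-1
    (brute force over all permutations; only feasible for tiny graphs)."""
    E = {frozenset(e) for e in edges}
    for p in permutations(range(n)):
        if p == tuple(range(n)):
            continue  # skip the identity permutation
        if all((frozenset((p[u], p[v])) in E) == (frozenset((u, v)) in E)
               for u in range(n) for v in range(u + 1, n)):
            yield p

def weak_oracle(n, edges):
    """Toy stand-in for the paper's oracle: return (v, 0) if vertex v is
    fixed by some nontrivial automorphism, (v, 1) if it is moved by one.
    If the graph is rigid, any answer is allowed; we return (0, 0)."""
    for p in nontrivial_automorphisms(n, edges):
        v = 0
        return (v, 0) if p[v] == v else (v, 1)
    return (0, 0)
```

On the path 0-1-2, the only nontrivial automorphism swaps the two endpoints, so this toy oracle reports that vertex 0 is moved.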
Algorithmic issues of simple object detection problems are studied in the context of a system consisting of a finite set of sensors that monitor a workspace. Each sensor detects the presence of only a certain subset of a given set of objects O. Given that an object has been detected by a subset of sensors, the detection problem deals with identifying whether the object in the workspace is a member of O and, if yes, computing the maximal set of such members. A conceptual graph structure called the detection network, which yields efficient algorithms for the detection problem on combinational, message-based, sequential, and parallel computing systems, is proposed. The problem of computing a detection network with the minimum number of edges is shown to be computationally intractable, and polynomial-time approximation algorithms are presented. Then sequential algorithms to solve the detection problem with and without preprocessing are presented. Parallel algorithms on shared-memory systems and hypercube-based message-passing systems are also discussed. Finally, it is shown that the problem of detecting multiple objects is computationally intractable.
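Under one natural reading of the single-object question (our assumption, not necessarily the paper's exact model), each object has a signature of sensors that can detect it, and an observation is explained by the maximal set of objects whose signatures fit inside the fired sensor set and jointly cover it:

```python
def detect(signatures, fired):
    """Toy detection check: signatures maps each object in O to the set of
    sensors that detect it; fired is the set of sensors reporting a hit.
    Return the maximal consistent object set, or None if the observation
    cannot be explained by members of O alone."""
    candidates = {o for o, s in signatures.items() if s <= fired}
    covered = set().union(*(signatures[o] for o in candidates))
    return candidates if covered == fired else None
```

For example, with signatures {"a": {1, 2}, "b": {2, 3}, "c": {4}}, the observation {1, 2, 3} is explained by the maximal set {"a", "b"}, while {1, 5} is not explainable by O at all.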
The wavelength-based machine is a computational model working with light and optical devices. The wavelength-based machine assumes that it can break light down into several very small pieces and act on them simultaneously to achieve efficiency in computation. In this paper, we first introduce the wavelength-based machine. Then, we define two new operations, the concentration and double concentration operations, which give the wavelength-based machine the ability to check the emptiness of one and two light rays, respectively. Both the concentration and double concentration operations are implemented by white-black imaging systems. We then compare the computational power of P-uniform wavelength-based machines with and without concentration operations. We show that polynomial-size wavelength-based machines with the concentration operation compute exactly the NP languages; thus, adding the concentration operation does not increase the computational power of wavelength-based machines. In contrast, polynomial-time wavelength-based machines with the concentration operation compute exactly the PSPACE languages, whereas polynomial-time wavelength-based machines without concentration operations compute only the NP languages.
In many cases, if something is done once, it can be reused faster or at a lower cost. Such dependencies occur in data processing, software development, project planning, machine purchasing or construction plans, and many other settings. In this paper, we express related problems as a vector sequencing problem and prove it to be strongly NP-hard. To solve it, we provide an exact branch-and-bound method. Furthermore, efficient heuristic and parallel metaheuristic algorithms are constructed that are based on a fast neighborhood search. Their efficiency is verified in extensive computational experiments. (C) 2016 Elsevier Ltd. All rights reserved.
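As a sketch of the reuse effect (the cost model and objective below are our own simplification, not the paper's exact formulation), suppose each job is a set of tasks, a task costs more the first time than on reuse, and we minimize the sum of completion times. A small branch-and-bound can prune on the partial objective, since it only grows along a branch:

```python
FIRST, REUSE = 3, 1  # assumed costs: a task costs 3 when new, 1 on reuse

def best_sequence(jobs):
    """Branch-and-bound over job orders, minimizing the sum of completion
    times.  The partial objective is monotone, so it is a valid lower
    bound for pruning against the incumbent."""
    n = len(jobs)
    best = [float("inf"), None]

    def rec(order, done, clock, obj):
        if obj >= best[0]:
            return  # prune: this branch cannot beat the incumbent
        if len(order) == n:
            best[0], best[1] = obj, list(order)
            return
        for j in range(n):
            if j in order:
                continue
            t = sum(FIRST if task not in done else REUSE for task in jobs[j])
            rec(order + [j], done | set(jobs[j]), clock + t, obj + clock + t)

    rec([], set(), 0, 0)
    return best[0], best[1]
```

With jobs [{1, 2}, {1}], scheduling the one-task job first lets its task be reused by the larger job, giving objective 10 instead of 13.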
We show that the popular pencil puzzle NURIKABE is intractable from the computational complexity point of view, that is, it is NP-complete, even when the involved numbers are 1 and 2 only. To this end, we show how to simulate Boolean gates by the puzzle under consideration. Moreover, we also study some NURIKABE variants, which remain NP-complete, too.
Sofic shifts are symbolic dynamical systems defined by the set of bi-infinite sequences on an edge-labeled directed graph, called a presentation. We study the computational complexity of an array of natural decision problems about presentations of sofic shifts, such as whether a given graph presents a shift of finite type, or an irreducible shift; whether one graph presents a subshift of another; and whether a given presentation is minimal, or has a synchronizing word. Leveraging connections to automata theory, we first observe that these problems are all decidable in polynomial time when the given presentation is irreducible (strongly connected), via algorithms both known and novel to this work. For the general (reducible) case, however, we show they are all PSPACE-complete. All but one of these problems (subshift) remain polynomial-time solvable when restricting to synchronizing deterministic presentations. We also study the size of synchronizing words and synchronizing deterministic presentations. (c) 2022 Elsevier B.V. All rights reserved.
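For the synchronizing-word question, the classic pair-merging criterion from automata theory gives a polynomial-time test on complete deterministic machines; the sketch below uses a toy transition-table interface of our own devising:

```python
from itertools import combinations

def has_synchronizing_word(states, alphabet, delta):
    """A complete deterministic automaton has a synchronizing word iff
    every pair of states can be mapped to a single state by some word
    (the classic pair criterion).  delta[state][letter] gives the
    successor state."""
    def mergeable(p, q):
        seen, stack = {(p, q)}, [(p, q)]
        while stack:
            a, b = stack.pop()
            if a == b:
                return True  # this pair collapses under some word
            for c in alphabet:
                nxt = (delta[a][c], delta[b][c])
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False
    return all(mergeable(p, q) for p, q in combinations(states, 2))
```

A two-state machine with a reset letter is synchronizing, while a pure rotation (a permutation automaton) never merges a pair and thus is not.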
We analyze and compare the computational complexity of different simulation strategies for Monte Carlo in the setting of classically scaled population processes. This allows a range of widely used competing strategies to be judged systematically. Our setting includes stochastically modeled biochemical systems. We consider the task of approximating the expected value of some path functional of the state of the system at a fixed time point. We study the use of standard Monte Carlo when samples are produced by exact simulation and by approximation with tau-leaping or an Euler-Maruyama discretization of a diffusion approximation. Appropriate modifications of recently proposed multilevel Monte Carlo algorithms are also studied for the tau-leaping and Euler-Maruyama approaches. In order to quantify computational complexity in a tractable yet meaningful manner, we consider a parameterization that, in the mass action chemical kinetics setting, corresponds to the classical system size scaling. We base the analysis on a novel asymptotic regime where the required accuracy is a function of the model scaling parameter. Our new analysis shows that, under the specific assumptions made in the manuscript, if the bias inherent in the diffusion approximation is smaller than the required accuracy, then multilevel Monte Carlo for the diffusion approximation is most efficient, besting multilevel Monte Carlo with tau-leaping by a factor of a logarithm of the scaling parameter. However, if the bias of the diffusion model is greater than the error tolerance or if the bias cannot be bounded analytically, multilevel versions of tau-leaping are often the optimal choice.
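The tau-leaping approximation mentioned above can be sketched for a simple birth-death process (the propensity functions and parameters below are illustrative, not from the paper): over each step of length tau, the number of firings of each reaction channel is drawn from a Poisson distribution with the propensity frozen at the start of the step.

```python
import math
import random

def tau_leap(x0, tau, T, birth, death, seed=0):
    """Tau-leaping simulation of a birth-death process with
    state-dependent propensities birth(x) and death(x)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's inversion method; adequate for the small rates used here
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    x, t = x0, 0.0
    while t < T:
        x += poisson(birth(x) * tau) - poisson(death(x) * tau)
        x = max(x, 0)  # clamp: a leap may overshoot past zero
        t += tau
    return x
```

Unlike exact simulation, which pays per reaction event, tau-leaping pays per time step, which is the trade-off the complexity analysis above quantifies.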
In this paper, we solve a long-standing problem that has been of interest since about 1988. The problem in general is to decide whether or not it is possible to partition the vertices of a graph into k distinct non-empty sets A_0, A_1, ..., A_{k-1}, such that the vertices in A_i are independent and there is at least one edge between the pair of sets A_i and A_{(i+1) mod k}, for all i = 0, 1, 2, ..., k-1, k > 2, and there is no edge between any other pair of sets. Determining the computational complexity of this problem, for any even value of k ≥ 6, has been of interest since about 1988 to various people, including Pavol Hell and Jaroslav Nesetril. We show in this paper that the problem is NP-complete for all even k ≥ 6. We study the problem as the compaction problem for an irreflexive k-cycle. (C) 2003 Elsevier Inc. All rights reserved.
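Deciding whether such a partition exists is the hard part; verifying a candidate partition is straightforward. The helper below (our own, simply restating the definition) checks that parts A_0, ..., A_{k-1} are nonempty independent sets, cyclically adjacent, and otherwise edge-free:

```python
def is_cyclic_partition(edges, parts):
    """Check the condition from the abstract: every part nonempty and
    independent, at least one edge between consecutive parts (mod k),
    and no edge between any other pair of parts."""
    k = len(parts)
    if any(not p for p in parts):
        return False
    E = {frozenset(e) for e in edges}

    def linked(i, j):
        return any(frozenset((u, v)) in E for u in parts[i] for v in parts[j])

    for i in range(k):
        for j in range(i, k):
            d = (j - i) % k
            consecutive = d == 1 or d == k - 1
            if j == i and linked(i, i):
                return False  # part is not independent
            if j != i and not consecutive and linked(i, j):
                return False  # forbidden edge between non-consecutive parts
    return all(linked(i, (i + 1) % k) for i in range(k))
```

The 6-cycle with each vertex in its own part satisfies the condition for k = 6, which is consistent with the partition being a compaction onto the irreflexive 6-cycle.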
Murty, in a recent paper, has shown that the computational effort required to solve a linear complementarity problem (LCP) by either of the two well-known complementary pivot methods is not bounded above by a polynomial in the size of the problem. In that paper, by constructing a class of LCPs (one of order n for each n ≥ 2), he has shown that to solve the problem of order n, either of the two methods goes through 2^n pivot steps before termination. That paper leaves it as an open question whether or not the same property holds if the matrix M in the LCP is positive definite and symmetric. The class of LCPs in which M is positive definite and symmetric is of particular interest because of the special structure of the problems, and also because they appear in many practical applications. In this paper, we study the computational growth of each of the two methods to solve the LCP (q, M) when M is positive definite and symmetric and obtain similar results.
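For concreteness, the LCP (q, M) asks for z ≥ 0 with w = Mz + q ≥ 0 and z'w = 0. The sketch below solves a tiny instance by projected Gauss-Seidel, which converges when M is symmetric positive definite; note this is not the complementary pivot method the paper analyzes, just an illustration of the problem being solved.

```python
def lcp_pgs(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP (q, M): repeatedly solve each
    coordinate's complementarity condition in turn, projecting onto
    the nonnegative orthant z >= 0."""
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # residual of row i with z[i] set aside
            r = q[i] + sum(M[i][j] * z[j] for j in range(n) if j != i)
            z[i] = max(0.0, -r / M[i][i])
    return z
```

With M = [[2, 1], [1, 2]] and q = [-1, -1], the complementary solution is z = (1/3, 1/3), at which w = Mz + q = 0.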
The computational complexity of the meshless local Petrov-Galerkin method (MLPG) has been analyzed and compared with the finite difference (FDM) and finite element (FEM) methods from the user's point of view. Theoretically, MLPG is the most complex of the three methods. Experimental results show that MLPG, with appropriately selected integration order and dimensions of the support and quadrature domains, achieves accuracy similar to that of FEM. Measurements of parallel complexity and speed-up indicate that parallel MLPG scales well on larger systems. The normalized computational complexity makes FEM the best choice; MLPG remains competitive if human assistance is needed for meshing. (C) 2008 Elsevier Ltd. All rights reserved.