The physics-based simulation game Angry Birds has been heavily researched by the AI community over the past five years, and has been the subject of a popular AI competition that is currently held annually as part of a leading AI conference. Developing intelligent agents that can play this game effectively has been an incredibly complex and challenging problem for traditional AI techniques to solve, even though the game is simple enough that any human player could learn and master it within a short time. In this paper we analyse how hard the problem really is, presenting several proofs for the computational complexity of Angry Birds. By using a combination of several gadgets within this game's environment, we are able to demonstrate that the decision problem of solving general levels for different versions of Angry Birds is either NP-hard, PSPACE-hard, PSPACE-complete or EXPTIME-hard. Proof of NP-hardness is by reduction from 3-SAT, whilst proof of PSPACE-hardness is by reduction from True Quantified Boolean Formula (TQBF). Proof of EXPTIME-hardness is by reduction from G2, a known EXPTIME-complete problem similar to that used for many previous games such as Chess, Go and Checkers. To the best of our knowledge, this is the first time that a single-player game has been proven EXPTIME-hard. This is achieved by using stochastic game engine dynamics to effectively model the real world, or in our case the physics simulator, as the opponent against which we are playing. These proofs can also be extended to other physics-based games with similar mechanics. (C) 2020 Elsevier B.V. All rights reserved.
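For reference, the two reduction sources named above take their standard forms below; satisfiability of the first is the canonical NP-complete problem, and truth of the second the canonical PSPACE-complete one (standard definitions, not specific to this paper).

```latex
% 3-SAT: does some assignment satisfy a conjunction of 3-literal clauses?
\exists x_1,\dots,x_n :\ \bigwedge_{i=1}^{m} \left( \ell_{i,1} \lor \ell_{i,2} \lor \ell_{i,3} \right)

% TQBF: is a fully quantified Boolean formula true? The alternation of
% quantifiers is what lifts the complexity from NP up to PSPACE.
\exists x_1\, \forall x_2\, \exists x_3 \cdots Q_n x_n :\ \varphi(x_1,\dots,x_n)
```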
We discuss the uniform computational complexity of the derivatives of $C^\infty$-functions in the model of Ko and Friedman (Ko, Complexity Theory of Real Functions, Birkhäuser, Basel, 1991; Ko, Friedman, Theor. Comput. Sci. 20 (1982) 323-352). We construct a polynomial time computable real function $g \in C^\infty[-1,1]$ such that the sequence $\{|g^{(n)}(0)|\}_{n \in \mathbb{N}}$ is not bounded by any recursive function. On the other hand, we show that if $f \in C^\infty[-1,1]$ is polynomial time computable and the sequence of the derivatives of $f$ is uniformly polynomially bounded, i.e., $|f^{(n)}(x)|$ is bounded by $2^{p(n)}$ for all $x \in [-1,1]$ for some polynomial $p$, then the sequence $\{f^{(n)}\}_{n \in \mathbb{N}}$ is uniformly polynomial time computable. (C) 2002 Elsevier Science B.V. All rights reserved.
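The contrast between the two results can be displayed compactly; this is only a rephrasing of the abstract in standard notation, not additional content.

```latex
% Negative result: polynomial-time computability of g alone gives no
% recursive control over the growth of its derivatives at a point.
\exists\, g \in C^\infty[-1,1] \ \text{poly-time computable with}\
\big\{\,|g^{(n)}(0)|\,\big\}_{n \in \mathbb{N}} \ \text{not recursively bounded.}

% Positive result: a uniform bound 2^{p(n)} restores uniform tractability.
|f^{(n)}(x)| \le 2^{p(n)} \ \ \forall x \in [-1,1]
\ \Longrightarrow\ \{f^{(n)}\}_{n \in \mathbb{N}} \ \text{uniformly poly-time computable.}
```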
We study the computational complexity of various inverse problems in discrete tomography. These questions are motivated by demands from the material sciences for the reconstruction of crystalline structures from images produced by quantitative high resolution transmission electron microscopy. We completely settle the complexity status of the basic problems of existence (data consistency), uniqueness (determination), and reconstruction of finite subsets of the $d$-dimensional integer lattice $\mathbb{Z}^d$ that are only accessible via their line sums (discrete X-rays) in some prescribed finite set of lattice directions. Roughly speaking, it turns out that for all $d \geq 2$ and for a prescribed but arbitrary set of $m \geq 2$ pairwise nonparallel lattice directions, the problems are solvable in polynomial time if $m = 2$ and are NP-complete (or NP-equivalent) otherwise. (C) 1999 Elsevier Science B.V. All rights reserved.
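To make "discrete X-ray" concrete, here is a minimal sketch under the usual definition (shown for $d = 2$): the X-ray of a finite set in lattice direction $v$ records how many points of the set lie on each lattice line parallel to $v$. The function name and example are illustrative, not from the paper; the example exhibits the classic pair of distinct sets with identical X-rays in the two axis directions.

```python
from collections import Counter

def line_sums(points, direction):
    """Discrete X-ray of a finite lattice set in a lattice direction.

    For direction v = (a, b), two points lie on the same line parallel
    to v iff b*x - a*y agrees, so that value indexes the lines.
    """
    a, b = direction
    return Counter(b * x - a * y for (x, y) in points)

# Two different sets with identical X-rays in directions (1,0) and (0,1):
# the standard illustration that two directions cannot determine a set.
S1 = {(0, 0), (1, 1)}
S2 = {(0, 1), (1, 0)}
for v in [(1, 0), (0, 1)]:
    assert line_sums(S1, v) == line_sums(S2, v)
```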
In this paper, we analyse the makespan minimization problem in the two-processor flowshop environment, where the processing time of each job depends on its position in a sequence (a position-dependent job processing time), i.e., on the number of previously processed jobs. Namely, if the processing times of jobs increase with the number of processed jobs, then the aging (fatigue/deterioration) effect is modelled, whereas a non-increasing dependency describes the learning effect. We prove that the considered problem becomes strongly NP-hard if the processing time of each job is described by a piecewise linear function of its position in a sequence; we analyse two types of functions: non-decreasing (aging) and non-increasing (learning). Furthermore, we describe a method supporting strong NP-hardness proofs and use it to establish the correctness of our other strong NP-hardness proofs. Additionally, we provide a modification of Johnson's algorithm and prove that it optimally solves the considered problem in the two-processor flowshop environment if job processing times are characterized by a common linear (non-increasing or non-decreasing) dependency on the job position. (C) 2013 Elsevier Inc. All rights reserved.
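For background, the rule the paper modifies is Johnson's algorithm for the two-machine flowshop with fixed (position-independent) processing times; a minimal sketch follows. The paper's position-dependent modification is not reproduced here.

```python
def johnson_order(jobs):
    """Classical Johnson's rule for the two-machine flowshop F2 || Cmax.

    jobs: list of (p1, p2) processing times on machines 1 and 2.
    Jobs whose machine-1 time is the smaller go first, in increasing p1;
    the remaining jobs go last, in decreasing p2. Returns job indices.
    """
    first = sorted((j for j, (p1, p2) in enumerate(jobs) if p1 <= p2),
                   key=lambda j: jobs[j][0])
    last = sorted((j for j, (p1, p2) in enumerate(jobs) if p1 > p2),
                  key=lambda j: jobs[j][1], reverse=True)
    return first + last

def makespan(jobs, order):
    """Makespan of a permutation schedule in the two-machine flowshop."""
    c1 = c2 = 0
    for j in order:
        p1, p2 = jobs[j]
        c1 += p1                 # machine 1 processes jobs back to back
        c2 = max(c2, c1) + p2    # machine 2 waits for machine 1's output
    return c2

jobs = [(3, 6), (5, 2), (1, 2)]
print(johnson_order(jobs), makespan(jobs, johnson_order(jobs)))  # [2, 0, 1] 12
```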
A central problem in graph mining is finding dense subgraphs, with several applications in different fields, a notable example being identifying communities. While a lot of effort has been put into the problem of finding a single dense subgraph, only recently has the focus shifted to the problem of finding a set of densest subgraphs. An approach introduced to find possibly overlapping subgraphs is the Top-k-Overlapping Densest Subgraphs problem. Given an integer $k \geq 1$ and a parameter $\lambda > 0$, the goal of this problem is to find a set of $k$ dense subgraphs that may share some vertices. The objective function to be maximized takes into account the density of the subgraphs, the parameter $\lambda$ and the distance between each pair of subgraphs in the solution. The Top-k-Overlapping Densest Subgraphs problem has been shown to admit a 1/10-factor approximation algorithm. Furthermore, the computational complexity of the problem has been left open. In this paper, we present contributions concerning the approximability and the computational complexity of the problem. For the approximability, we present approximation algorithms that improve the approximation factor to 1/2, when $k$ is smaller than the number of vertices in the graph, and to 2/3, when $k$ is a constant. For the computational complexity, we show that the problem is NP-hard even when $k = 3$.
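A sketch of the objective being maximized, under assumptions spelled out in the comments: density as edges-per-vertex and the intersection-based pairwise distance used in the Top-k-Overlapping Densest Subgraphs literature. The paper's exact definitions should be taken from the paper itself.

```python
def density(vertices, edges):
    """Assumed density: |E(W)| / |W| (edges with both endpoints inside W)."""
    inside = sum(1 for u, v in edges if u in vertices and v in vertices)
    return inside / len(vertices)

def distance(w1, w2):
    """Assumed pairwise distance rewarding dissimilar subgraphs:
    2 - |W1 ∩ W2|^2 / (|W1| * |W2|), and 0 when the subgraphs coincide."""
    if w1 == w2:
        return 0.0
    inter = len(w1 & w2)
    return 2 - inter ** 2 / (len(w1) * len(w2))

def objective(subgraphs, edges, lam):
    """Sum of densities plus lambda times the sum of pairwise distances."""
    dens = sum(density(w, edges) for w in subgraphs)
    dist = sum(distance(subgraphs[i], subgraphs[j])
               for i in range(len(subgraphs))
               for j in range(i + 1, len(subgraphs)))
    return dens + lam * dist
```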
We create a mathematical framework for modeling trucks traveling in road networks, and we define a routing problem called the platooning problem. We prove that this problem is NP-hard, even when the graph used to represent the road network is planar. We present integer linear programming formulations for instances of the platooning problem where deadlines are discarded, which we call the unlimited platooning problem. These allow us to calculate fuel-optimal solutions to the platooning problem for large-scale, real-world examples. The problems solved are orders of magnitude larger than problems previously solved exactly in the literature. We present several heuristics and compare their performance with the optimal solutions on the German Autobahn road network. The proposed heuristics find optimal or near-optimal solutions in most of the problem instances considered, especially when a final local search is applied. Assuming a fuel reduction factor of 10% from platooning, we find fuel savings from platooning of 1-2% for as few as 10 trucks in the road network; the percentage of savings increases with the number of trucks. If all trucks start at the same point, savings of up to 9% are obtained for only 200 trucks. (C) 2015 Elsevier Ltd. All rights reserved.
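The headline figures can be sanity-checked with simple arithmetic, assuming (as is common in the platooning literature, though not spelled out above) that only the followers in a platoon receive the 10% reduction; for instance, grouping 200 trucks into platoons of 10 is one configuration that yields the 9% figure.

```python
def platoon_savings(platoon_sizes, eta=0.10):
    """Fleet-wide fractional fuel saving when each platoon's followers
    (all trucks but the leader) burn (1 - eta) of their solo fuel.
    Assumes equal route lengths, purely for illustration."""
    trucks = sum(platoon_sizes)
    followers = sum(n - 1 for n in platoon_sizes)
    return eta * followers / trucks

# 200 trucks from one origin merged into platoons of 10:
print(platoon_savings([10] * 20))  # 0.09, i.e. the 9% figure above
```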
We consider the computational complexity of languages of symbolic dynamical systems. In particular, we study complexity hierarchies and membership of the non-uniform class P/poly. We prove: 1. For every time-constructible, non-decreasing function $t(n) = \omega(n)$, there is a symbolic dynamical system with language decidable in deterministic time $O(n^2 t(n))$, but not in deterministic time $o(t(n))$. 2. For every space-constructible, non-decreasing function $s(n) = \omega(n)$, there is a symbolic dynamical system with language decidable in deterministic space $O(s(n))$, but not in deterministic space $o(s(n))$. 3. There are symbolic dynamical systems having hard and complete languages under $\leq_m^{\log}$- and $\leq_m^{p}$-reduction for every complexity class above LOGSPACE in the backbone hierarchy (hence, P-complete, NP-complete, coNP-complete, PSPACE-complete, and EXPTIME-complete sets). 4. There are decidable languages of symbolic dynamical systems in P/poly for every alphabet of size $|\Sigma| \geq 1$. 5. There are decidable languages of symbolic dynamical systems not in P/poly iff the alphabet size is $> 1$. For the particular class of symbolic dynamical systems known as beta-shifts, we prove that: 1. For all real numbers $\beta > 1$, the language of the $\beta$-shift is in P/poly. 2. If there exists a real number $\beta > 1$ such that the language of the $\beta$-shift is NP-hard under $\leq_T^{p}$-reduction, then the polynomial hierarchy collapses to the second level. As NP-hardness under $\leq_m^{p}$-reduction implies hardness under $\leq_T^{p}$-reduction, this result implies that it is unlikely that a proof of existence of an NP-hard language of a $\beta$-shift will be forthcoming. 3. For every time-constructible, non-decreasing function $t(n) \geq n$, there is a real number $1 < \beta < 2$ such that the language of the $\beta$-shift is decidable in time $O(n^2 t(\log n + 1))$, but not in any proper time bound $g(n)$ satisfying $g(4^n) = o(t(n)/16^n)$. 4. For every space-constructible, non-decreasing function …
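For readers meeting $\beta$-shifts for the first time, a standard formulation (general background, not taken from this abstract) is that the $\beta$-shift is generated by $\beta$-expansions:

```latex
% The beta-transformation on [0,1) and the digits of the beta-expansion of x:
T_\beta(x) = \beta x \bmod 1, \qquad
d_i(x) = \big\lfloor \beta\, T_\beta^{\,i-1}(x) \big\rfloor

% The beta-shift is the closure of the set of all such digit sequences;
% its language is the set of finite words occurring in them, and it is
% this language whose complexity the results above classify.
```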
Mobile devices always demand low computational complexity because computing resources, such as the processor and battery, are limited. The bandwidth extension (BWE) algorithm in China's mobile audio codec standard was proposed to improve audio quality in mobile communication, but the computational complexity of the algorithm is too high for implementation on mobile devices. By analyzing the BWE algorithm, we find that the main source of its high computational complexity is the frequent use of time-frequency transformations. We therefore propose a low-complexity scheme that combines algorithm optimization and code optimization. The experimental results show that the computation time consumed by the BWE module in the encoder and decoder decreases by 4.5 and 14.3 percentage points respectively, without reducing the overall subjective quality of the audio codec, which is conducive to implementing the algorithm in the mobile audio field. (C) 2017 Elsevier Ltd. All rights reserved.
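The abstract does not detail the optimizations, but the flavor of removing redundant time-frequency transforms can be illustrated generically: derive several features from one spectrum rather than transforming the same frame repeatedly. Everything below (names, sample rate, the 12 kHz split) is illustrative and not taken from the codec.

```python
import numpy as np

def band_features(frame, sr=48000, split_hz=12000):
    """Toy illustration: derive low-band and high-band energies from ONE
    FFT instead of transforming the frame once per feature. Reusing the
    spectrum is the kind of algorithm-level optimization alluded to above."""
    spec = np.fft.rfft(frame)                      # single transform
    freqs = np.fft.rfftfreq(len(frame), 1 / sr)
    power = np.abs(spec) ** 2
    low = power[freqs < split_hz].sum()
    high = power[freqs >= split_hz].sum()
    return low, high
```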
This paper introduces a novel methodology for developing low-complexity neural network (NN) based equalizers to address impairments in high-speed coherent optical transmission systems. We present a comprehensive exploration and comparison of deep model compression techniques applied to feed-forward and recurrent NN designs, assessing their impact on equalizer performance. Our investigation encompasses quantization, weight clustering, pruning, and other cutting-edge compression strategies. We propose and evaluate a Bayesian optimization-assisted compression approach that optimizes hyperparameters to simultaneously enhance performance and reduce complexity. Additionally, we introduce four distinct metrics (RMpS, BoP, NABS, and NLGs) to quantify computing complexity in various compression algorithms. These metrics serve as benchmarks for evaluating the relative effectiveness of NN equalizers when compression approaches are employed. The analysis is completed by evaluating the trade-off between compression complexity and performance using simulated and experimental data. By employing optimal compression techniques, we demonstrate the feasibility of designing a simplified NN-based equalizer surpassing the performance of conventional digital back-propagation (DBP) equalizers with only one step per span. This is achieved by reducing the number of multipliers through weighted clustering and pruning algorithms. Furthermore, we highlight that an NN-based equalizer can achieve better performance than the full electronic chromatic dispersion compensation block while maintaining a similar level of complexity. In conclusion, we outline remaining challenges, unanswered questions, and potential avenues for future research in this field.
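As a concrete (and deliberately simplified) illustration of one compression strategy named above, the sketch below applies magnitude pruning to a dense layer and counts the surviving multipliers. The paper's own complexity metrics (RMpS, BoP, NABS, NLGs) are defined there and are not reproduced here.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude weights until `sparsity` of them
    are gone. Each zeroed weight is one multiplication removed from the
    forward pass of the equalizer."""
    k = int(sparsity * weights.size)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))       # a dense equalizer layer, for illustration
pw = prune_by_magnitude(w, 0.75)
print(np.count_nonzero(pw), "of", w.size, "multipliers remain")
```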
Certain algorithms and their computational complexity are examined for use in a VLSI implementation of the real-time pattern classifier described in Part I of this work. The most computationally intensive processing is found in the classifier training mode, wherein subsets of the largest and smallest eigenvalues and associated eigenvectors of the input data covariance pair must be computed. It is shown that if the matrix of interest is centrosymmetric and the method for eigensystem decomposition is operator-based, the problem architecture assumes a parallel form. Such a matrix structure is found in a wide variety of pattern recognition and speech and signal processing applications. Each of the parallel channels requires only two specialized matrix-arithmetic modules. These modules may be implemented as linear arrays of processing elements having at most $O(N)$ elements, where $N$ is the input data vector dimension. The computations may be done in $O(N)$ time steps. This compares favorably to the $O(N^3)$ operations for a conventional, or general, rotation-based eigensystem solver, and even the $O(2N^2)$ operations of an approach incorporating the fast Levinson algorithm for a matrix of Toeplitz structure, since the underlying matrix in this work does not possess a Toeplitz structure. Some examples are provided on the convergence of a conventional iterative approach and a novel two-stage iterative method for eigensystem decomposition.
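The structural fact being exploited can be demonstrated directly: for a symmetric centrosymmetric matrix, an orthogonal change of basis splits the eigenproblem into two independent half-size problems, which is what makes the two parallel channels possible. A minimal numpy sketch of that split (not the paper's operator-based method):

```python
import numpy as np

def centro_split(A):
    """Split a symmetric centrosymmetric matrix A (even order N = 2m,
    J A J = A with J the exchange matrix) into two m x m blocks whose
    eigenvalues together are exactly the eigenvalues of A."""
    m = A.shape[0] // 2
    J = np.fliplr(np.eye(m))
    B, C = A[:m, :m], A[:m, m:]
    return B + C @ J, B - C @ J   # the two independent half-size problems

# Build a random symmetric centrosymmetric matrix and check the claim.
rng = np.random.default_rng(1)
N = 8
M = rng.normal(size=(N, N))
M = M + M.T                        # make it symmetric
Jn = np.fliplr(np.eye(N))
A = (M + Jn @ M @ Jn) / 2          # symmetrize under J-conjugation
P, Q = centro_split(A)
full = np.sort(np.linalg.eigvalsh(A))
halves = np.sort(np.concatenate([np.linalg.eigvalsh(P), np.linalg.eigvalsh(Q)]))
assert np.allclose(full, halves)
```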