For an engineering design, tolerances on design parameters are selected so that, within these tolerances, the desired functionality is guaranteed. Feasible algorithms are known for solving the corresponding computational problems: finding tolerances that guarantee the given functionality, and checking whether given tolerances guarantee this functionality. In this paper, we show that in many practical problems, the problem of choosing the optimal tolerances can also be solved by a feasible algorithm. We prove that a slightly different problem, finding the optimal tolerance revision, is, in contrast, computationally difficult (namely, NP-hard). We also show that revision programming algorithms can be used to check whether a given combination of tolerance changes is optimal under given constraints, and even to find a combination that is optimal under given constraints.
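The checking problem above can be sketched as follows: for a tolerance box around nominal parameter values, verify that the functionality predicate holds everywhere in the box. For monotone (or otherwise corner-determined) specifications, checking the 2^n corners suffices; the beam-stiffness spec in the demo is a hypothetical stand-in, not from the paper.

```python
import itertools

def tolerances_guarantee(nominal, tol, is_functional):
    """Check whether every corner of the tolerance box around `nominal`
    satisfies the functionality predicate `is_functional`.

    Sufficient for monotone specifications; in general, checking corners
    is only a necessary condition."""
    corners = itertools.product(*[(x - t, x + t) for x, t in zip(nominal, tol)])
    return all(is_functional(p) for p in corners)

# Hypothetical spec: a beam with width w and height h is functional
# if its stiffness proxy w * h**3 stays at or above 100.
ok = tolerances_guarantee(
    nominal=(2.0, 4.0),
    tol=(0.1, 0.1),
    is_functional=lambda p: p[0] * p[1] ** 3 >= 100.0,
)
```

With the ±0.1 tolerances the worst corner (1.9, 3.9) still satisfies the spec, so `ok` is true; widening the tolerances to ±0.5 would violate it.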
The authors propose a new method based on spatial cumulants for estimating the parameters of multiple near-field and far-field sources. The Toeplitz property used in some studies is not applicable to fourth-order stat...
The wavelength-based machine is a computational model working with light and optical devices. It assumes that light can be broken down into many very small pieces that are acted on simultaneously to achieve efficiency in computation. In this paper, we first introduce the wavelength-based machine. We then define two new operations, concentration and double concentration, which give the wavelength-based machine the ability to check the emptiness of one and two light rays, respectively. Both operations can be implemented by white-black imaging systems. We compare the computational power of P-uniform wavelength-based machines with and without concentration operations. We show that polynomial-size wavelength-based machines with the concentration operation compute exactly the NP languages; thus, adding the concentration operation does not increase their computational power. In contrast, polynomial-time wavelength-based machines with the concentration operation compute exactly the PSPACE languages, whereas polynomial-time wavelength-based machines without concentration operations compute only the NP languages.
ISBN:
(Print) 9783319714707; 9783319714691
We propose an approach to reduce both computational complexity and data storage requirements for the online positioning stage of a fingerprinting-based indoor positioning system (FIPS) by introducing segmentation of the region of interest (RoI) into sub-regions, sub-region selection using a modified Jaccard index, and feature selection based on randomized least absolute shrinkage and selection operator (LASSO). We implement these steps in a Bayesian framework of position estimation using the maximum a posteriori (MAP) principle. An additional benefit of these steps is that the time for estimating the position and the required data storage are virtually independent of the size of the RoI and of the total number of available features within the RoI. Thus, the proposed steps facilitate the application of FIPS to large areas. Results of an experimental analysis using real data, collected in an office building with a Nexus 6P smartphone as the user device and a total station providing position ground truth, corroborate the expected performance of the proposed approach. The positioning accuracy obtained by processing only 10 automatically identified features instead of all available ones, and limiting position estimation to 10 automatically identified sub-regions instead of the entire RoI, is equivalent to processing all available data. In the chosen example, 50% of the errors are less than 1.8 m and 90% are less than 5 m. However, the computation time using the automatically identified subset of data is only about 1% of that required for processing the entire data set.
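The sub-region selection step could look roughly like the following sketch. It uses the plain Jaccard index on sets of observed feature identifiers; the paper's modified index is not specified here, and the sub-region feature sets in the demo are illustrative.

```python
def jaccard_index(observed, reference):
    """Jaccard similarity between the set of features seen online and the
    set recorded for a sub-region (the paper uses a modified variant)."""
    observed, reference = set(observed), set(reference)
    if not observed and not reference:
        return 1.0
    return len(observed & reference) / len(observed | reference)

def select_subregions(observed, subregion_features, k=10):
    """Rank sub-regions by similarity to the online observation and keep
    the top k, mirroring the step of limiting estimation to k sub-regions."""
    ranked = sorted(subregion_features.items(),
                    key=lambda kv: jaccard_index(observed, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Illustrative sub-regions and their recorded feature sets.
subregions = {"A": {"ap1", "ap2", "ap3"}, "B": {"ap4", "ap5"}, "C": {"ap1", "ap6"}}
top = select_subregions({"ap1", "ap2"}, subregions, k=2)
```

Only the selected sub-regions would then enter the MAP position estimation, which is what makes the online stage nearly independent of the RoI size.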
A finite dynamical system consists of a finite number of objects, each taking a value from some domain as its state; after initialization, the states of the objects are updated based on the states of the other objects and themselves, according to a certain update schedule. This paper studies a subclass of finite dynamical systems, the synchronous Boolean finite dynamical system (synchronous BFDS, for short), where the states are Boolean and the state update takes place in discrete time and at the same time on all objects. The paper is concerned with three problems, Convergence, Path Intersection, and Cycle Length, of the synchronous BFDS in which the state update functions (or local state transition functions) are chosen from a predetermined finite basis of Boolean functions B. The results characterize the computational complexity of these problems. (C) 2017 Elsevier Inc. All rights reserved.
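A synchronous update and a brute-force Cycle Length check can be sketched in a few lines; this enumeration is exponential in the number of objects, so it is only illustrative, and the update functions in the demo (a cyclic shift) are an example, not from the paper.

```python
def step(state, funcs):
    """One synchronous update: every object's new state is computed from
    the full current state vector, and all objects switch at once."""
    return tuple(f(state) for f in funcs)

def cycle_length(init, funcs, max_steps=1 << 12):
    """Iterate from `init` until a state repeats; return the length of the
    cycle eventually entered (brute force over at most 2^n states)."""
    seen = {}
    state = init
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = step(state, funcs)
    return None

# Example basis functions: a cyclic shift of three Boolean objects.
shift = (lambda s: s[2], lambda s: s[0], lambda s: s[1])
length = cycle_length((1, 0, 0), shift)
```

Since every trajectory in a finite system must eventually revisit a state, the loop always terminates within 2^n + 1 steps for n objects.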
Single-channel nonuniform sampling (SNS) is a Compressed Sensing (CS) approach that allows sub-Nyquist sampling of frequency-sparse signals. The relatively simple architecture, comprising one wideband sampling channel, makes it an attractive solution for applications such as signal analyzers and telecommunications. However, the high computational cost of SNS signal reconstruction is an obstacle for real-time applications. This paper proposes to emulate multi-coset sampling (MCS) in SNS acquisition as a means of decreasing the computational cost. Such an emulation introduces performance-complexity tradeoffs due to the difference between the SNS and MCS models. We investigate these tradeoffs with numerical simulations and theoretical assessments of the reconstruction complexity in multi-band signal scenarios. These scenarios include different numbers, widths, and positions of the frequency bands, and different levels of noise in the signals. For the SNS reconstruction, we consider the accelerated iterative hard thresholding algorithm; for the MCS reconstruction, the multiple signal classification and focal underdetermined system solver algorithms are used. The proposed emulation reduces the computational complexity by up to several orders of magnitude. For one of the scenarios, the reconstruction quality slightly decreases. For the other scenarios, the reconstruction quality is either preserved or improved. (C) 2016 Published by Elsevier B.V.
Computational complexity evaluation is necessary for software-defined Forward Error Correction (FEC) decoders. However, there is currently little literature on evaluating FEC complexity by analytical methods. In this paper, three highly efficient coding schemes, Turbo, QC-LDPC, and convolutional codes (CC), are investigated. Hardware-friendly decoding pseudo-codes are provided with explicit parallel execution and memory-access procedures. For each step of the pseudo-codes, the parallelism and the operations in each processing element are given, from which the total number of operations is derived. The decoding complexity of these FEC algorithms is compared, and the percentage contributed by each computation step is illustrated. The requirements for attaining the evaluated results and reference hardware platforms are provided, and benchmarks of state-of-the-art SDR platforms are compared with the proposed evaluations. The analytical FEC complexity results are beneficial for the design and optimization of high-throughput software-defined FEC decoding platforms.
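As an illustration of analytical operation counting, the sketch below estimates operations per decoded bit for Viterbi decoding of a convolutional code under a simple add-compare-select model. The model and its constants are our own rough assumptions, not the paper's detailed per-step counts.

```python
def viterbi_ops_per_bit(constraint_length, rate_inv=2):
    """Rough analytical operation count per decoded bit for Viterbi
    decoding of a rate-1/rate_inv convolutional code.

    Model (an assumption for illustration): for each of the 2^(K-1)
    trellis states, two incoming branches each need rate_inv additions
    for the branch metric, plus one add-compare-select (counted as 3
    operations) per state."""
    states = 2 ** (constraint_length - 1)
    branch_metric_adds = 2 * states * rate_inv
    acs_ops = 3 * states
    return branch_metric_adds + acs_ops

# e.g. the common K=7, rate-1/2 code has 64 states per trellis stage.
ops_k7 = viterbi_ops_per_bit(7)
```

Counts like this scale exponentially in the constraint length, which is why such per-step tallies matter when sizing a software-defined decoding platform.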
Cellular Automata (CA) are a well-established bio-inspired model of computation that has been successfully applied in several domains. In recent years, the importance of modelling real systems more accurately has sparked renewed interest in the study of asynchronous CA (ACA). When using an ACA to model a real system, it is important to determine the fidelity of the model, in particular with regard to the existence (or absence) of certain dynamical behaviors. This paper is concerned with two broad classes of problems: reachability and preimage existence. For each class, both an existential and a universal version are considered. The following results are proved. Reachability is PSPACE-complete, and its resource-bounded version is NP-complete (existential form) or coNP-complete (universal form). The preimage problem is dimension-sensitive in the sense that it is NL-complete (both existential and universal forms) for one-dimensional ACA, while it is NP-complete (existential version) or Pi_2^P-complete (universal version) in higher dimensions. (C) 2015 Elsevier B.V. All rights reserved.
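Existential reachability for a small one-dimensional ACA can be checked by brute-force search over configurations, which makes the exponential cost behind the PSPACE-completeness concrete; the OR rule in the demo is an illustrative choice, not from the paper.

```python
from collections import deque

def aca_reachable(init, target, rule):
    """Existential reachability for a one-dimensional asynchronous CA:
    at each step, one cell is chosen nondeterministically and updated by
    rule(left, self, right) with cyclic boundary conditions. Brute-force
    BFS over the 2^n configuration space."""
    n = len(init)
    seen = {init}
    queue = deque([init])
    while queue:
        cfg = queue.popleft()
        if cfg == target:
            return True
        for i in range(n):  # try updating each cell
            new = rule(cfg[(i - 1) % n], cfg[i], cfg[(i + 1) % n])
            nxt = cfg[:i] + (new,) + cfg[i + 1:]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Illustrative rule: a cell becomes 1 if it or a neighbour is 1.
def or_rule(left, centre, right):
    return left | centre | right

reachable = aca_reachable((1, 0, 0, 0), (1, 1, 1, 1), or_rule)
```

Under this monotone rule a single 1 can spread to all cells, but no configuration containing a 1 can ever reach the all-zero configuration.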
ISBN:
(Print) 9781450355834
This paper studies the complexity of pi-calculus processes with respect to the quantity of transitions caused by an incoming message. First, we propose a typing system that integrates Bellantoni and Cook's characterisation of polynomially bounded recursive functions into Deng and Sangiorgi's typing system for termination. We then define the computational complexity of distributed messages based on Degano and Priami's causal semantics, which identifies the dependency between interleaved transitions. Next, we apply a syntactic flow analysis to typable processes to ensure the computational bound of distributed messages. We prove that our analysis is decidable for a given process; sound, in the sense that it guarantees that the total number of messages causally dependent on an input request received from the outside is bounded by a polynomial in the content of this request; and complete, which means that each polynomial recursive function can be computed by a typable process.
Kill-all Go is a variant of Go in which Black tries to capture all white stones, while White tries to survive. We consider the computational complexity of Kill-all Go under two rulesets, Chinese rules and Japanese rules. W...