A closed quantum system is defined as completely controllable if an arbitrary unitary transformation can be executed using the available controls. In practice, control fields are a source of unavoidable noise, which has to be suppressed to retain controllability. Can one design control fields such that the effect of noise is negligible on the time-scale of the transformation? This question is intimately related to the fundamental problem of the connection between the computational complexity of the control problem and the sensitivity of the controlled system to noise. The present study considers a paradigm of control where the Lie-algebraic structure of the control Hamiltonian is fixed, while the size of the system increases with the dimension of the Hilbert space representation of the algebra. We find two types of control tasks, easy and hard. Easy tasks are characterized by a small variance of the evolving state with respect to the operators of the control algebra. They are relatively immune to noise and the control field is easy to find. Hard tasks have a large variance, are sensitive to noise, and the control field is hard to find. The influence of noise increases with the size of the system, which is measured by the scaling factor N of the largest weight of the representation. For fixed time and control field, the ability to control degrades as O(N) for easy tasks and as O(N^2) for hard tasks. As a consequence, even in the most favorable estimate, generic noise in the controls dominates for large quantum systems and for a typical class of target transformations, i.e. complete controllability is destroyed by noise.
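For reference, the variance referred to above is the standard quantum-mechanical variance of the evolving state with respect to an operator of the control algebra; a minimal LaTeX rendering (notation ours, not the paper's):

```latex
% Variance of the evolving state |psi(t)> with respect to a control operator H_k;
% small for easy tasks, large for hard tasks (per the abstract above).
\Delta^2_{\psi(t)}(\hat H_k)
  = \langle \psi(t)\,|\,\hat H_k^{\,2}\,|\,\psi(t)\rangle
  - \langle \psi(t)\,|\,\hat H_k\,|\,\psi(t)\rangle^{2}
```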
The field of computational complexity is concerned both with the intrinsic hardness of computational problems and with the efficiency of algorithms to solve them. Given such a problem, normally one designs an algorithm to solve it and sets about establishing bounds on the algorithm's performance as functions of the relevant input-size descriptors, particularly upper bounds expressed via the big-oh notation. In some cases, however, especially those arising in the fields of experimental algorithmics and optimization, one may have to resort to performance data on a given set of inputs in order to figure out the algorithm's big-oh profile. In this note, we are concerned with the question of how many candidate expressions may have to be taken into account in such cases. We show that, even if we only considered upper bounds given by polynomials, the number of possibilities could be arbitrarily large for two or more descriptors. This is unexpected, given the available body of examples on algorithmic efficiency, and underlines the importance of careful and meticulous criteria. It also serves to illustrate the many facets of the big-oh notation, as well as its counter-intuitive twists.
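A hedged illustration of the point made above (hypothetical timings and a hypothetical candidate set, not taken from the note): with two input-size descriptors, several distinct polynomial expressions can all pass as upper bounds on the same performance data.

```python
# Sketch: enumerate candidate bounds of the form O(m^a * n^b) and keep those
# consistent with a set of (hypothetical) measurements -- typically more than
# one candidate survives, which is the ambiguity discussed above.
import itertools

measurements = [((10, 10), 1.1e-3), ((20, 10), 2.3e-3),
                ((10, 40), 4.5e-3), ((80, 80), 7.1e-2)]  # ((m, n), seconds)

def upper_bounds(exp_m, exp_n, c=1e-5):
    """True if c * m^exp_m * n^exp_n dominates every observed running time."""
    return all(t <= c * (m ** exp_m) * (n ** exp_n)
               for (m, n), t in measurements)

surviving = [(a, b) for a, b in itertools.product(range(4), repeat=2)
             if upper_bounds(a, b)]
print("exponent pairs (a, b) still consistent with the data:", surviving)
```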
For each pair of algebraic numbers $(x, y)$, the complexity of computing the Tutte polynomial $T(G; x, y)$ of a planar graph $G$ is determined. This computation is found to be $\overline{\#P}$-complete except when $(x-1)(y-1) = 1$ or $2$, or when $(x, y)$ is one of $(1, 1)$, $(-1, -1)$, $(j, j^2)$, or $(j^2, j)$, where $j = e^{2\pi i/3}$, in which case it is polynomial-time computable. A corollary gives the computational complexity of various enumeration problems for planar graphs.
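For orientation, two standard evaluations (well-known facts, not restated from the abstract) show why some of the exceptional points are easy:

```latex
% (1,1) is one of the listed special points; (2,2) lies on the hyperbola (x-1)(y-1)=1.
T(G;1,1) = \#\{\text{spanning trees of a connected } G\}, \qquad
T(G;2,2) = 2^{|E(G)|}.
```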
In this article, we study the problem of the latency and reliability trade-off in ultra-reliable low-latency communication (URLLC) in the presence of decoding complexity constraints. We consider linear block encoded codewords transmitted over a binary-input AWGN channel and decoded with an order-statistic (OS) decoder. We first investigate the performance of OS decoders as a function of decoding complexity and propose an empirical model that accurately quantifies the corresponding trade-off. Next, a consistent way to compute the aggregate latency for complexity-constrained receivers is presented, where the latency due to decoding is also included. It is shown that, under strict latency requirements, decoding latency cannot be neglected in complexity-constrained receivers. Next, based on the proposed model, several optimization problems relevant to the design of URLLC systems are introduced and solved. It is shown that the decoding time has a drastic effect on the design of URLLC systems when constraints on decoding complexity are considered. Finally, it is also illustrated that the proposed model can closely describe the performance-versus-complexity trade-off for other candidate coding solutions for URLLC, such as tail-biting convolutional codes, polar codes, and low-density parity-check codes.
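A minimal sketch of the aggregate-latency idea described above, with hypothetical numbers and a deliberately generic decoding-time term (the paper's fitted empirical model is not reproduced here):

```python
# Aggregate latency for a complexity-constrained receiver: air-time of the
# codeword plus the time the decoder needs for its operations.  All figures
# below are illustrative assumptions.
def aggregate_latency(n_coded_bits, symbol_rate_hz, decoding_ops, ops_per_s):
    """Total latency in seconds, assuming one coded bit per channel symbol."""
    t_tx = n_coded_bits / symbol_rate_hz     # transmission latency
    t_dec = decoding_ops / ops_per_s         # decoding latency
    return t_tx + t_dec

# A (128, 64) code on a 1 Msymbol/s link, decoder effort swept over 4 orders
# of magnitude: the decoding term quickly dominates the air-time.
for ops in (1e4, 1e6, 1e8):
    total = aggregate_latency(128, 1e6, ops, 1e9)
    print(f"decoding ops = {ops:.0e} -> total latency = {total * 1e6:.1f} us")
```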
Liberatore and Schaerf (Proceedings of ECAI'98, 1998) give a proof that model checking for propositional normal default theories is in $\Delta_2^P$ and is $\Delta_2^P[O(\log n)]$-hard. However, the precise complexity is left as an open problem. We solve this problem by proving that model checking for normal default theories is complete for $\Delta_2^P[O(\log n)]$. This is the class of decision problems solvable in polynomial time with a logarithmic number of calls to an oracle in NP. Additionally, we analyse the computational cost of model checking w.r.t. weak extensions, stable expansions and N-expansions and take a look at the complexity of model checking for disjunction-free default theories. Furthermore, we show that not only for disjunction-free default theories, but also for a larger class of default theories, which we call default theories in extended Horn normal form, model checking is tractable in the case of normal default theories. Additionally, the complexity results are used to draw some interesting conclusions on translatability issues. In particular, there exists no function from default logic into logic programming which is polynomial, faithful and modular unless $\mathrm{coNP} = \Sigma_2^P$. Finally, we give an overview of our results concerning model checking in the case of disjunctive default logic and stationary default logic. (C) 2002 Elsevier Science B.V. All rights reserved.
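For reference, the class named above can be written as follows (standard notation, not specific to this paper; it is often denoted $\Theta_2^P$ in the literature):

```latex
% Polynomial time with O(log n) queries to an NP oracle.
\Delta_2^P[O(\log n)] \;=\; \mathrm{P}^{\mathrm{NP}[O(\log n)]}
```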
Though the noise removal capability of multivariate median filters has been carefully investigated, a comprehensive analysis of their complexity is still missing. In this work, the complexity of the most commonly used multivariate median filters is thoroughly analyzed. For each filter, theoretical results are derived and validated against experimental data, showing that computational complexity depends mainly on the approach adopted to sort multivariate samples. Algorithms based on marginal ordering are very fast, whereas the use of an ordering scheme based on the aggregate sum of distances leads to very slow algorithms. An intermediate behavior is observed for filters relying on reduced ordering. A fast algorithm for the implementation of the vector median based on the 1-norm is also described, which significantly reduces the complexity of this filter. (C) 1998 Elsevier Science B.V. All rights reserved.
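A hedged sketch contrasting the two extremes of the ordering schemes discussed above, using deliberately naive implementations (not the paper's optimized algorithms):

```python
# Marginal ordering vs. aggregate-distance ordering for a window of d-variate
# samples: the first sorts each component independently, the second sums
# 1-norm distances to every other sample (naive cost O(d * n^2)).
import numpy as np

def marginal_median(samples):
    """Component-wise median -- the fast, marginal-ordering approach."""
    return np.median(samples, axis=0)

def vector_median_1norm(samples):
    """Sample minimizing the aggregate sum of 1-norm distances -- naive version."""
    dists = np.abs(samples[:, None, :] - samples[None, :, :]).sum(axis=2)
    return samples[np.argmin(dists.sum(axis=1))]

window = np.random.default_rng(0).integers(0, 256, size=(9, 3)).astype(float)
print("marginal median     :", marginal_median(window))
print("1-norm vector median:", vector_median_1norm(window))
```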
In this paper we study a class of uncertainty model validation/invalidation problems for linear discrete-time systems. We consider models described by a linear fractional transformation (LFT) of modeling uncertainties, which are structured, time invariant or time varying, and are bounded in $H_\infty$ or $\ell_1$ norm. The experimental data available for use in invalidation are either input-output time series observations or frequency response measurements. We analyze the computational complexity associated with these problems. While earlier work showed that for LFT models with an unstructured uncertainty the invalidation problems can be reduced to solving linear matrix inequalities (LMIs), and hence may be tackled readily using well-developed numerical methods, in this paper we provide a simple proof that in the presence of structured uncertainties they are all NP-hard with respect to the number of uncertainty blocks, suggesting that the problems are inherently intractable from a computational standpoint. Additionally, we demonstrate that the problems do become tractable when the LFT model description is changed to an additive one, leading to LMI-based invalidation tests similar to those obtained elsewhere. These results indicate that the main source of the computational difficulty lies in the LFT description together with the structured nature of the uncertainties. (C) 1998 Elsevier Science B.V. All rights reserved.
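As a reminder of the model structure involved (the standard upper-LFT formula, not a result of this paper): the uncertain model is the feedback interconnection of a nominal map $M$, partitioned into blocks $M_{11}, M_{12}, M_{21}, M_{22}$, with the uncertainty $\Delta$:

```latex
% Upper linear fractional transformation of M against the uncertainty block Delta.
F_u(M,\Delta) \;=\; M_{22} + M_{21}\,\Delta\,(I - M_{11}\Delta)^{-1} M_{12},
\qquad \text{well posed whenever } \det(I - M_{11}\Delta) \neq 0 .
```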
The planning and scheduling of container terminal logistics systems (CTLS) are multiobjective, strongly constrained combinatorial optimization challenges under uncertain environments, characterized by high goal orientation, dynamics, context-sensitivity, coupling, timeliness, and complexity. The increasingly sophisticated decision-making required for CTLS is one of the most pressing problems for the available programming and optimization methods. This paper discusses CTLS in terms of logistics generalized computational complexity based on computational thinking, the great principles of computing, and the computational lens (together abbreviated 3CTGPL), and defines container-terminal-oriented logistics generalized computational complexity (CTO-LGCC) and the container terminal logistics generalized computation comprehensive performance perspective (CTL-GCCPP) along the dimensions of time, space, communication, processors, and memory access. Both can be used to analyze, generalize, migrate, translate, localize, modify, and evaluate the above complicated problems; they lay a solid foundation and establish a feedback-improvement framework for the computational models and scheduling algorithms of CTLS, which is an essential complement to the modeling and optimization methodology and to solutions for CTLS with computational logistics. Finally, for the logistics service cases of a large-scale container terminal, a simulation is designed and implemented for different scheduling algorithms, and a qualitative and quantitative comprehensive analysis is carried out for the concomitant CTO-LGCC, which demonstrates and verifies the feasibility and credibility of CTO-LGCC and CTL-GCCPP from the viewpoint of container terminal decision-making support at the tactical level.
Random code is a rateless erasure code that can reconstruct the original message of k symbols from any k + 10 encoded symbols with a high probability of complete decoding (PCD), i.e. 99.9% successful decoding, irrespective of the message length k. Nonetheless, random code is inefficient in reconstructing short messages. For example, a message of k = 10 symbols requires k + 10 = 20 encoded symbols, i.e. two times the original message length, in order to achieve high PCD. In this study, the authors propose micro-random code, which encodes and decodes the original message using symbols of smaller dimension, namely micro symbols. The authors' analysis and numerical simulations show that micro-random code achieves high PCD with only k + 1 encoded symbols. As the trade-off for such a gain, the number of decoding steps increases exponentially as the segmentation factor increments. In addition, the numerical results show that the decoding time increases by about 400% when the segmentation factor reaches 10, depending on the processing power of the system.
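A hedged illustration of the baseline behaviour described above (plain random code over GF(2), not the authors' micro-random code): complete decoding succeeds exactly when the received coefficient matrix has full column rank, and simulating this reproduces the roughly 99.9% PCD at k + 10 received symbols.

```python
# Estimate the probability of complete decoding (PCD) of a random binary
# rateless code: k source symbols are recoverable from k+extra encoded
# symbols iff the (k+extra) x k random coefficient matrix has rank k over GF(2).
import numpy as np

def gf2_rank(matrix):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    m = matrix.copy() % 2
    rows, cols = m.shape
    rank, col = 0, 0
    while rank < rows and col < cols:
        pivots = np.nonzero(m[rank:, col])[0]
        if pivots.size:
            m[[rank, rank + pivots[0]]] = m[[rank + pivots[0], rank]]  # swap pivot row up
            for r in range(rows):
                if r != rank and m[r, col]:
                    m[r] ^= m[rank]                                    # eliminate column
            rank += 1
        col += 1
    return rank

def pcd(k, extra, trials=2000, seed=0):
    """Empirical probability that decoding recovers all k source symbols."""
    rng = np.random.default_rng(seed)
    hits = sum(gf2_rank(rng.integers(0, 2, size=(k + extra, k))) == k
               for _ in range(trials))
    return hits / trials

for extra in (0, 5, 10):
    print(f"k = 10, received k + {extra:2d}: PCD ~ {pcd(10, extra):.3f}")
```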
The prediction of intrinsically disordered proteins (IDPs) is a hot research area in bioinformatics. Owing to the high cost of experimental methods for evaluating disordered regions of protein sequences, it is becoming increasingly important to predict those regions through computational methods. In this paper, we developed a novel scheme that employs sequence complexity to calculate six features for each residue of a protein sequence: the Shannon entropy, the topological entropy, the sample entropy, and three amino acid preferences, namely Remark 465, Deleage/Roux, and Bfactor(2STD). In particular, we introduced the sample entropy for calculating time-series complexity by mapping the amino acid sequence to a time series. To our knowledge, the sample entropy has not previously been used for predicting IDPs and is applied here for the first time. In addition, the scheme uses a properly sized sliding window over every protein sequence, which greatly improves the prediction performance. Finally, we used seven machine learning algorithms, tested with 10-fold cross-validation, on the dataset R80 collected by Yang et al. and on the dataset DIS1556 from the Database of Protein Disorder (DisProt), which contains experimentally determined IDPs. The results showed that k-Nearest Neighbor performed best, with an overall prediction accuracy of 92%. Furthermore, our method uses only six features and hence requires lower computational complexity.
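A minimal sketch of one of the six per-residue features listed above, the sliding-window Shannon entropy (the window size here is illustrative, not the paper's tuned value):

```python
# Shannon entropy of the amino-acid distribution in a window centred on each
# residue; low values flag low-complexity stretches, which are often associated
# with disorder.  The sequence below is a toy example, not from R80 or DIS1556.
from collections import Counter
from math import log2

def shannon_entropy(window: str) -> float:
    counts = Counter(window)
    n = len(window)
    return -sum(c / n * log2(c / n) for c in counts.values())

def per_residue_entropy(seq: str, half_width: int = 10):
    return [shannon_entropy(seq[max(0, i - half_width):i + half_width + 1])
            for i in range(len(seq))]

seq = "MKKLLPTAAAAAAGLLLLAAQPAMAMDIGINSDP"
print([round(v, 2) for v in per_residue_entropy(seq)])
```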