ISBN (print): 9783642257339
In the present paper we have introduced a new sorting algorithm named Partition Sort and performed its empirical study with the necessary theoretical analysis. This algorithm was subjected to data coming from various standard probability distributions. Average-case analysis revealed that for some standard distributions it worked even more efficiently than the popular Hoare's quicksort algorithm. The relative order of execution time for the different distributions, in increasing order, came out to be: Discrete Uniform < Binomial < Poisson < Standard Normal < Continuous Uniform.
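The abstract does not give Partition Sort's pseudocode, so the sketch below only illustrates the empirical protocol it describes: timing a sort over samples drawn from the named distributions. Python's built-in sort stands in for Partition Sort, and all function names and parameter values (e.g. lambda = 10 for Poisson) are our assumptions, not taken from the paper.

```python
import math
import random
import statistics
import time

def sample(dist, n, rng):
    """Draw n values from one of the distributions named in the study."""
    if dist == "discrete_uniform":
        return [rng.randrange(1000) for _ in range(n)]
    if dist == "binomial":
        return [sum(rng.random() < 0.5 for _ in range(50)) for _ in range(n)]
    if dist == "poisson":  # Knuth's method, lambda = 10 (assumed value)
        out = []
        for _ in range(n):
            limit, k, p = math.exp(-10.0), 0, 1.0
            while p > limit:
                k += 1
                p *= rng.random()
            out.append(k - 1)
        return out
    if dist == "standard_normal":
        return [rng.gauss(0.0, 1.0) for _ in range(n)]
    if dist == "continuous_uniform":
        return [rng.random() for _ in range(n)]
    raise ValueError(dist)

def mean_sort_time(dist, n=5000, trials=5, seed=1):
    """Average wall-clock time to sort n draws from dist; Python's
    built-in sort stands in for Partition Sort here."""
    rng = random.Random(seed)
    times = []
    for _ in range(trials):
        data = sample(dist, n, rng)
        start = time.perf_counter()
        sorted(data)
        times.append(time.perf_counter() - start)
    return statistics.mean(times)
```

Running `mean_sort_time` over each distribution and comparing the averages reproduces the shape of the study, though not, of course, its Partition Sort results.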
Searching for motifs in graphs has become a crucial problem in the analysis of biological networks. In the context of metabolic network analysis, Lacroix et al. [V. Lacroix, C.G. Fernandes, M.-F. Sagot, IEEE/ACM Transactions on Computational Biology and Bioinformatics 3 (4) (2006) 360-368] introduced the NP-hard general problem of finding occurrences of motifs in vertex-colored graphs, where a motif M is a multiset of colors and an occurrence of M in a vertex-colored graph G, called the target graph, is a subset of vertices that induces a connected graph and whose induced multiset of colors is exactly the motif. Pursuing the line of research pioneered by Lacroix et al. and aiming at dealing with approximate solutions, we consider in this paper the above-mentioned problem in two of its natural optimization forms, referred to hereafter as the Min-CC and the Maximum Motif problems. The Min-CC problem seeks an occurrence of a motif M in a vertex-colored graph G that induces a minimum number of connected components, whereas the Maximum Motif problem is concerned with finding a maximum-cardinality submotif M' ⊆ M that occurs as a connected motif in G. We prove the Min-CC problem to be APX-hard even in the extremal case where the motif is a set and the target graph is a path. We complement this result by giving a polynomial-time algorithm in case the motif is built upon a fixed number of colors and the target graph is a path. Also, extending [M. Fellows, G. Fertin, D. Hermelin, S. Vialette, in: Proc. 34th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 4596, Springer, 2007, pp. 340-351], we prove the Min-CC problem to be fixed-parameter tractable when parameterized by the size of the motif, and we give a faster algorithm in case the target graph is a tree. Furthermore, we prove the Min-CC problem for trees not to be approximable within ratio c log n for some constant c > 0, where n is the order of the target graph.
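To make the occurrence definition concrete, here is a minimal brute-force sketch, assuming a target graph given as an adjacency dict and vertex colors as a dict: a subset is an occurrence of a motif if it induces a connected subgraph and its color multiset equals the motif. The exhaustive search is exponential, consistent with the NP-hardness stated above; the function names are ours.

```python
from collections import Counter, deque
from itertools import combinations

def is_occurrence(adj, color, subset, motif):
    """subset is an occurrence of motif if it induces a connected
    subgraph of the target graph and its color multiset equals motif."""
    if Counter(color[v] for v in subset) != Counter(motif):
        return False
    subset = set(subset)
    start = next(iter(subset))
    seen, queue = {start}, deque([start])
    while queue:              # BFS restricted to the chosen subset
        u = queue.popleft()
        for w in adj[u]:
            if w in subset and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == subset

def find_occurrence(adj, color, motif):
    """Brute-force search over all |M|-subsets of vertices
    (exponential in general; the problem is NP-hard)."""
    k = len(motif)
    for subset in combinations(adj, k):
        if is_occurrence(adj, color, subset, motif):
            return subset
    return None
```

For example, on the path a-b-c with colors r, g, b, the motif {r, g} occurs (vertices a, b are adjacent), while {r, b} does not: its two vertices carry the right colors but induce a disconnected subgraph.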
ISBN (print): 9780769543642
We present a robust localization algorithm for multiple radiation sources using a network of sensors under random underlying physical processes and measurement errors. The proposed solution uses a hybrid formulation of particle filter and mean-shift techniques to achieve several important features that address major challenges faced by existing localization algorithms. First, our algorithm is able to maintain a constant number of estimation (source) parameters even as the number of radiation sources K increases. In existing algorithms, the number of estimation parameters is proportional to K and thus the algorithm complexity grows exponentially with K. Second, to decide the number of sources K, existing algorithms either require the information to be known in advance or rely on expensive statistical estimations that do not scale well with K. Instead, our algorithm efficiently learns the number of sources from the estimated source parameters. Third, when obstacles are present, our algorithm can exploit the obstacles to achieve better isolation between the source signatures, thereby increasing the localization accuracy in complex deployment environments. In contrast, incompletely specified obstacles will significantly degrade the accuracy of existing algorithms due to their unpredictable effects on the source signatures. We present extensive simulation results to demonstrate that our algorithm has robust performance in complex deployment environments, and its efficiency is scalable to many radiation sources in these environments.
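The abstract does not give the hybrid particle-filter/mean-shift formulation itself, so as a hedged, one-dimensional sketch only, the code below shows the mode-seeking idea behind learning the number of sources K: shift each point (e.g. a particle position) to a local density mode under a Gaussian kernel, then count the distinct modes. The function name and bandwidth are our assumptions.

```python
import math

def mean_shift_modes(points, bandwidth=1.0, iters=50, tol=1e-3):
    """Shift every point to a nearby density mode with a Gaussian kernel,
    then count distinct modes; the mode count estimates the number of
    sources K without fixing it in advance."""
    modes = []
    for x in points:
        for _ in range(iters):
            weights = [math.exp(-((x - p) / bandwidth) ** 2 / 2)
                       for p in points]
            x_new = (sum(w * p for w, p in zip(weights, points))
                     / sum(weights))
            if abs(x_new - x) < tol:
                break
            x = x_new
        # merge converged points that landed on an already-seen mode
        for m in modes:
            if abs(m - x) < bandwidth:
                break
        else:
            modes.append(x)
    return sorted(modes)
```

With particles clustered around two well-separated locations, the routine returns two modes, i.e. it infers K = 2 directly from the estimated parameters rather than requiring K as an input.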
In this paper we have studied the performance of rate-1/2 convolutional encoders with adaptive states developed in chaotic and hyperchaotic regions. These states are generated by varying the control parameters in a feedback-controlled system. Several sets of closed-loop simulations are performed to demonstrate the benefit of an information-based chaos system. In particular, it is demonstrated that two varieties of information-based systems provide improved performance over all the encoder choices when hyperchaos states are utilized. Special attention was paid to the algorithmic complexity of the systems for an entire class of rate-1/2 encoders. The decoder is able to recover the encrypted data and to reasonably estimate the bit error rate for different signal strengths under a noisy AWGN channel. This indicates that the encoder can update the information map in real time to compensate for changing data for both chaotic and hyperchaotic states. This is evidence that occasional changes in the data stream can be handled by the decoder in a real-time application. Numerical evidence indicates that the algorithmic complexity associated with the hyperchaotic-encrypted and convolutionally-encoded data provides better security along with an increase in the error-correcting capacity of the decoder. (C) 2010 Published by Elsevier B.V.
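The paper's chaotic state adaptation is not specified in the abstract, but a plain rate-1/2 convolutional encoder (two output bits per input bit) can be sketched as follows, using the classic constraint-length-3 generator pair (7, 5) in octal as an assumed example, not the authors' encoder class.

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder: each input bit produces two
    output bits, one per generator polynomial.  The defaults are the
    classic (7, 5) octal pair with constraint length 3."""
    state = 0
    out = []
    for b in bits:
        # shift the new bit into a 3-bit register window
        state = ((state << 1) | b) & 0b111
        # each output bit is the parity of the register masked by a generator
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out
```

For instance, the input 1 0 1 1 encodes to 11 10 00 01 under the (7, 5) pair, doubling the stream length as rate 1/2 requires.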
The aim of this paper is to undertake an experimental investigation of the trade-offs between program size and time computational complexity. The investigation includes an exhaustive exploration and systematic study of the functions computed by the set of all 2-color Turing machines with 2 states, written (2,2), and with 3 states, written (3,2), with particular attention to the runtimes, space usages and patterns corresponding to the computed functions when the machines have access to larger resources (more states). We report that the average runtime of Turing machines computing a function increases, with a probability close to one, with the number of states, indicating that machines not terminating (almost) immediately tend to occupy all the resources at hand. We calculated all the time complexity classes to which the algorithms computing the functions found in both (2,2) and (3,2) belong, and compared these classes. For a selection of functions the comparison is extended to (4,2). Our study revealed various structures in the micro-cosmos of small Turing machines. Most notably, we observed "phase transitions" in the halting-probability distribution. Moreover, we observed that small initial segments fully define a function computed by a TM.
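An exhaustive study like this rests on running every small machine under a step budget, since the halting problem rules out deciding termination outright. As a minimal sketch (our own rule encoding, not the authors' framework), a 2-color machine can be simulated like this:

```python
def run_tm(rules, max_steps=500):
    """Simulate a one-tape, 2-color Turing machine started on a blank
    tape.  rules maps (state, color) -> (write, move, next_state);
    next_state == -1 means halt.  Returns the runtime in steps, or
    None if the step budget is exhausted (undecidable in general)."""
    tape = {}           # sparse tape, default color 0
    pos, state = 0, 0
    for step in range(1, max_steps + 1):
        color = tape.get(pos, 0)
        write, move, state = rules[(state, color)]
        tape[pos] = write
        pos += move
        if state == -1:
            return step
    return None
```

Enumerating all rule tables for (2,2) or (3,2) and collecting the returned runtimes yields exactly the kind of runtime distribution, and the step-capped non-halters, that the study analyzes.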
A theorem of Brudno says that the entropy production of classical ergodic information sources equals the algorithmic complexity per symbol of almost every sequence emitted by such sources. The recent advances in the t...
We use proof-nets to study the algorithmic complexity of the derivability problem for some fragments of the Lambek calculus. We prove the NP-completeness of this problem for the unidirectional fragment and the product-free fragment, and also for versions of these fragments that admit empty antecedents.
Promising approaches for efficient detection in multiple-input multiple-output (MIMO) wireless systems are based on sphere decoding (SD). The conventional (and optimum) norm used to conduct the tree-traversal step in SD is the l2-norm. It was, however, recently observed that using the l∞-norm instead reduces the hardware complexity of SD considerably at only a marginal performance loss. These savings result from a reduction in the length of the critical path in the circuit and in the silicon area required for metric computation, but are also, as observed previously through simulation results, a consequence of a reduction in the computational (i.e., algorithmic) complexity. The aim of this paper is an analytical performance and computational complexity analysis of l∞-norm SD. For independent and identically distributed (i.i.d.) Rayleigh fading MIMO channels, we show that l∞-norm SD achieves full diversity order with an asymptotic SNR gap, compared to l2-norm SD, that increases at most linearly in the number of receive antennas. Moreover, we provide a closed-form expression for the computational complexity of l∞-norm SD, based on which we establish that its complexity scales exponentially in the system size. Finally, we characterize the tree-pruning behavior of l∞-norm SD and show that it differs fundamentally from that of l2-norm SD.
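The norm substitution at the heart of this comparison can be shown without the tree-pruning machinery: below is a brute-force maximum-likelihood detector (a stand-in for SD, which prunes the same search) that minimizes the residual under either the l2 or the l∞ metric. The toy real-valued channel and function name are our assumptions.

```python
from itertools import product

def detect(H, y, symbols=(-1, 1), norm="l2"):
    """Exhaustive-search MIMO detection: pick the symbol vector s
    minimizing ||y - H s|| under the chosen norm.  Sphere decoding
    prunes this search tree; the metric choice is the same."""
    n = len(H[0])
    best, best_metric = None, float("inf")
    for s in product(symbols, repeat=n):
        resid = [y[i] - sum(H[i][j] * s[j] for j in range(n))
                 for i in range(len(H))]
        if norm == "l2":
            metric = sum(r * r for r in resid)   # squared l2-norm
        else:
            metric = max(abs(r) for r in resid)  # l-infinity norm
        if metric < best_metric:
            best, best_metric = s, metric
    return best
```

In hardware, the l∞ metric replaces the squared sums with a maximum of absolute values, which is where the critical-path and area savings discussed above come from; on benign channels both metrics typically return the same decision.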
The concept of complexity as considered in terms of its algorithmic definition proposed by G. J. Chaitin and A. N. Kolmogorov is revisited for the dynamical complexity of music. When music pieces are cast in the form of time series of pitch variations, concepts of dynamical systems theory can be used to define new quantities such as the dimensionality, as a measure of the global temporal dynamics of a music piece, and the Shannon entropy, as an evaluation of its local dynamics. When these quantities are computed explicitly for sequences sampled in the music literature from the 18th to the 20th century, no indication is found of a systematic increase in complexity paralleling historically the evolution of classical western music, but the analysis suggests that the fractional nature of art might have an intrinsic value of more general significance.
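The Shannon entropy of a pitch time series, one of the two quantities mentioned above, has a short standard form; the sketch below computes it in bits per symbol for any discrete sequence, such as a melody's successive pitches or pitch intervals (the exact encoding used in the paper is not given in the abstract).

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy in bits per symbol of a discrete sequence,
    e.g. the pitch intervals of a melody cast as a time series:
    H = -sum_i p_i * log2(p_i) over the empirical symbol frequencies."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())
```

A constant sequence has entropy 0, while a sequence alternating uniformly between two symbols has entropy 1 bit per symbol; richer local pitch dynamics push the value higher.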