A recently developed algorithm based on symbolic dynamics and computation of the normalized algorithmic complexity (C-alpha) was applied to basket-catheter mapping of atrial fibrillation (AF). The aim of our study was to analyze the spatial distribution of C-alpha during AF and the effects of propafenone on this distribution. During right atrial mapping in 25 patients with AF, 31 intra-atrial and 1 surface bipolar channels were acquired. The anatomical location of the intra-atrial electrodes was defined fluoroscopically. C-alpha was calculated for a moving window (size: 2000 points; step: 500 points). The resulting C-alpha values were analyzed within 10 minutes before and after administration of propafenone. The inter-regional C-alpha distribution was analyzed using the Friedman test (intra-individually) and the Kruskal-Wallis H test (inter-individually). The error probability was set to p = 0.05. Inter-regional C-alpha differences were found in all patients (p < 0.001). The right atrium could be divided into high- and low-complexity areas according to individual patterns. A significant C-alpha increase in the cranio-caudal direction (with the exception of the septum) was confirmed inter-individually (p < 0.01). The administration of propafenone enlarged the areas of low complexity. Conclusions: This new method, combining symbolic dynamics and adaptive power estimation, can provide a comprehensive evaluation of the dynamics of AF in man. High-density mapping will be required for further evaluation of these results.
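The abstract does not spell out how C-alpha is computed, so the following is only a rough sketch of the general idea: each moving window (2000 points, step 500, as above) is binarized into a symbol sequence and a normalized Lempel-Ziv-style complexity is computed per window. The function names, the median-binarization rule, and the LZ78-style phrase counting are illustrative assumptions, not the study's actual algorithm.

```python
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """Number of distinct phrases in an LZ78-style incremental parsing of a binary string."""
    phrases, current = set(), ""
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def sliding_complexity(signal, window=2000, step=500):
    """Normalized LZ complexity over a moving window, after binarizing each window at its median."""
    values = []
    for start in range(0, len(signal) - window + 1, step):
        seg = np.asarray(signal[start:start + window], dtype=float)
        bits = "".join('1' if v > np.median(seg) else '0' for v in seg)
        n = len(bits)
        # normalize by the asymptotic phrase count n / log2(n) of a random binary string
        values.append(lz_phrase_count(bits) / (n / np.log2(n)))
    return np.array(values)
```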
A flat cover is a collection of flats identifying the non-bases of a matroid. We introduce the notion of cover complexity, the minimal size of such a flat cover, as a measure of the complexity of a matroid, and present bounds on the number of matroids on n elements whose cover complexity is bounded. We apply cover complexity to show that the class of matroids without an N-minor is asymptotically small in case N is one of the sparse paving matroids $U_{2,k}$, $U_{3,6}$, $P_6$, $Q_6$ or $R_6$, thus confirming a few special cases of a conjecture due to Mayhew, Newman, Welsh, and Whittle. On the other hand, we show a lower bound on the number of matroids without an $M(K_4)$-minor which asymptotically matches the best known lower bound on the number of all matroids, due to Knuth.
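As a rough illustration of the definition, the sketch below checks whether a candidate collection of flats covers the non-bases of a small matroid given by its bases. The covering condition used here (every dependent r-set X satisfies |X ∩ F| > rank(F) for some flat F in the collection) is our reading of the definition; the paper's exact formulation may differ, and the brute-force rank computation is only workable for tiny examples.

```python
from itertools import combinations

def rank(S, bases):
    """Rank of a subset S: the largest intersection of S with any basis."""
    S = frozenset(S)
    return max(len(S & B) for B in bases)

def is_flat(F, ground, bases):
    """F is a flat if adding any element outside F strictly increases the rank."""
    rF = rank(F, bases)
    return all(rank(F | {e}, bases) > rF for e in ground - F)

def covers_non_bases(cover, ground, bases, r):
    """Assumed covering condition: every dependent r-set X has some flat F in the
    collection with |X & F| > rank(F)."""
    for X in map(frozenset, combinations(ground, r)):
        if X in bases:
            continue
        if not any(len(X & F) > rank(F, bases) for F in cover):
            return False
    return True

# Tiny example: rank-2 matroid on {0, 1, 2} in which elements 1 and 2 are parallel.
ground = frozenset({0, 1, 2})
bases = {frozenset({0, 1}), frozenset({0, 2})}
cover = [frozenset({1, 2})]                       # the rank-1 flat containing both parallel elements
print(is_flat(cover[0], ground, bases))           # True
print(covers_non_bases(cover, ground, bases, 2))  # True: {1, 2} is the only non-basis
```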
Let G be a graph, and let $t : E(G) \to \{1, \dots, k\}$ be a k-labelling of G, i.e., an assignment of labels from $\{1, \dots, k\}$ to the edges of G. We say that t is irregular if no two distinct vertices of G are incident to the same sum of labels. The irregularity strength of G, denoted by s(G), is the smallest k such that irregular k-labellings of G exist. These notions were introduced in the late 1980s as an alternative way to deal with an optimisation problem in which one aims at making a graph irregular by multiplying its edges in an optimal way. Since then, the irregularity strength has received a lot of attention, focusing mainly on proving bounds and investigating side aspects and variants. In this work, we consider the algorithmic complexity of determining the irregularity strength of a given graph. We prove that two close variants of this problem are NP-hard, which we suspect might indicate that the original problem is hard too. Namely, we prove that determining the distant irregularity strength, where only vertices within a certain distance are required to be incident to different sums of labels, and the multiset irregularity strength, where any two distinct vertices are required to be incident to different multisets of labels, are NP-hard problems.
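Since the definition is fully explicit, a brute-force check is easy to write. The sketch below (illustrative names, exponential running time, tiny graphs only) tests whether a labelling is irregular and searches for the smallest k that admits an irregular k-labelling.

```python
from itertools import product

def is_irregular(edges, labelling, n):
    """True if no two distinct vertices receive the same sum of incident labels."""
    sums = [0] * n
    for (u, v), lab in zip(edges, labelling):
        sums[u] += lab
        sums[v] += lab
    return len(set(sums)) == n

def irregularity_strength(edges, n):
    """Smallest k admitting an irregular k-labelling (brute force, tiny graphs only)."""
    k = 1
    while True:
        if any(is_irregular(edges, lab, n) for lab in product(range(1, k + 1), repeat=len(edges))):
            return k
        k += 1

# Example: the path on 3 vertices has irregularity strength 2 (labels 1 and 2 give sums 1, 3, 2).
print(irregularity_strength([(0, 1), (1, 2)], 3))  # 2
```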
We study the decentralized detection problem in a general framework where an arbitrary number of quantization levels at the local sensors is allowed, and transmission from the sensors to the fusion center is subject to both noise and interchannel interference. We treat both Bayesian and Neyman-Pearson approaches to the problem, and develop an iterative descent algorithm to design the optimal quantizers and fusion rule. Some numerical examples for both approaches are also presented.
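The paper's algorithm and channel model are not detailed in the abstract, so the sketch below only illustrates the alternating-optimization idea in a much simpler setting: binary local quantizers with Gaussian observations, noiseless channels, and an optimal Bayesian fusion rule, with each sensor's threshold tuned in turn by grid search. All names, distributions, and parameter values are assumptions made for the example.

```python
from math import erfc, sqrt
from itertools import product
import numpy as np

def q(x):
    """Gaussian tail probability P(Z > x) for Z ~ N(0, 1)."""
    return 0.5 * erfc(x / sqrt(2.0))

def bayes_error(thresholds, mu=1.0, p1=0.5):
    """Exact Bayes error: sensor i observes x_i ~ N(0,1) under H0 and N(mu,1) under H1
    and reports u_i = 1{x_i > t_i}; the fusion center applies the optimal MAP rule on u."""
    t = list(thresholds)
    pf = [q(ti) for ti in t]        # P(u_i = 1 | H0)
    pd = [q(ti - mu) for ti in t]   # P(u_i = 1 | H1)
    err = 0.0
    for u in product([0, 1], repeat=len(t)):
        p_u_h0 = np.prod([pf[i] if ui else 1 - pf[i] for i, ui in enumerate(u)])
        p_u_h1 = np.prod([pd[i] if ui else 1 - pd[i] for i, ui in enumerate(u)])
        if p1 * p_u_h1 >= (1 - p1) * p_u_h0:    # decide H1: contributes a false-alarm error
            err += (1 - p1) * p_u_h0
        else:                                   # decide H0: contributes a miss error
            err += p1 * p_u_h1
    return err

def person_by_person(n_sensors=3, grid=np.linspace(-1.0, 2.0, 61), sweeps=4):
    """Coordinate descent: optimize one threshold at a time over a grid, holding the rest fixed."""
    t = [0.5] * n_sensors
    for _ in range(sweeps):
        for i in range(n_sensors):
            errors = [bayes_error(t[:i] + [g] + t[i + 1:]) for g in grid]
            t[i] = float(grid[int(np.argmin(errors))])
    return t, bayes_error(t)

print(person_by_person())
```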
The paper presents an in-place implementation of the multidimensional radix-2 fast Fourier transform (FFT), along with the corresponding algorithm for data shuffling (bit reversal), on SIMD hypercube computers. Each processor possesses its own non-shared memory, and the number of processors is less than or equal to the number of data points. The flexibility of the proposed algorithm rests on the chosen scheme of information storage and on the decomposition/reconfiguration of the hypercube into subhypercubes, which allows multiple one-dimensional FFTs to be processed in parallel. This parallel FFT algorithm achieves optimum performance, since there is no data redundancy and the algorithmic complexity is optimal.
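The hypercube mapping and the in-place shuffling scheme are the paper's contribution and are not reproduced here; the sketch below only shows the underlying serial radix-2 kernel with explicit bit-reversal that would be applied along each dimension of the multidimensional transform (a multidimensional FFT is obtained by running it along every axis).

```python
import cmath

def bit_reverse_permute(a):
    """In-place bit-reversal reordering of a list whose length is a power of two."""
    n = len(a)
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]

def fft_radix2(a):
    """Iterative in-place radix-2 decimation-in-time FFT (length must be a power of two)."""
    n = len(a)
    bit_reverse_permute(a)
    length = 2
    while length <= n:
        w_len = cmath.exp(-2j * cmath.pi / length)
        for start in range(0, n, length):
            w = 1.0 + 0j
            for k in range(length // 2):
                u = a[start + k]
                v = a[start + k + length // 2] * w
                a[start + k] = u + v
                a[start + k + length // 2] = u - v
                w *= w_len
        length <<= 1
    return a

print(fft_radix2([1, 0, 0, 0]))  # the DFT of a unit impulse is all ones
```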
We study bases of the lattice generated by the cycles of an undirected graph, defined as the set of integer linear combinations of the 0/1-incidence vectors of cycles. We prove structural results for this lattice, including explicit formulas for its dimension and determinant, and we present efficient algorithms to construct lattice bases, using only cycles as generators, in quadratic time. By algebraic considerations, we relate these results to the more general setting with coefficients from an arbitrary Abelian group. Our results generalize classical results for the vector space of cycles of a graph over the binary field to the case of an arbitrary field.
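The paper's construction of lattice bases is more involved than this, but the classical starting point is the fundamental cycle basis with respect to a spanning tree, sketched below in roughly quadratic time; the function names and the edge-set representation of cycles are choices made for the example.

```python
from collections import defaultdict

def fundamental_cycles(n, edges):
    """Fundamental cycles of a connected undirected graph with respect to a BFS spanning tree.
    Each cycle is returned as the set of edge indices it contains."""
    adj = defaultdict(list)
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    parent = {0: (None, None)}          # vertex -> (parent vertex, tree edge index)
    order = [0]
    for u in order:                     # BFS
        for v, idx in adj[u]:
            if v not in parent:
                parent[v] = (u, idx)
                order.append(v)
    assert len(order) == n, "graph must be connected"
    tree_edges = {idx for (_, idx) in parent.values() if idx is not None}

    def path_to_root(u):
        path = []
        while parent[u][0] is not None:
            path.append(parent[u][1])
            u = parent[u][0]
        return path

    cycles = []
    for idx, (u, v) in enumerate(edges):
        if idx in tree_edges:
            continue
        # the symmetric difference of the two tree paths, plus the non-tree edge, forms a cycle
        cycle = set(path_to_root(u)).symmetric_difference(path_to_root(v)) | {idx}
        cycles.append(cycle)
    return cycles

# Example: K4 on vertices 0..3 has 6 - 4 + 1 = 3 fundamental cycles.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(fundamental_cycles(4, edges))
```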
Sorting by Genome Rearrangements is a classic problem in Computational Biology. Several models have been considered so far; each of them defines how a genome is modeled (for example, permutations when assuming no duplicated genes, strings if duplicated genes are allowed, and/or use of signs on each element when gene orientation is known) and which rearrangements are allowed. Recently, a new problem, called Sorting by Multi-Cut Rearrangements, was proposed. It uses the k-cut rearrangement, which cuts a permutation (or a string) at k >= 2 places and rearranges the generated blocks to obtain a new permutation (or string) of the same size. This new rearrangement may model chromoanagenesis, a phenomenon consisting of massive simultaneous rearrangements. Similarly to the Double-Cut-and-Join, this new rearrangement also generalizes several genome rearrangements such as reversals, transpositions, revrevs, transreversals, and block-interchanges. In this paper, we extend a previous work based on unsigned permutations and strings to signed permutations. We show the complexity of this problem for different values of k, and we show that the approximation algorithm proposed for unsigned permutations with any value of k can be adapted to signed permutations. We also present a 1.5-approximation algorithm for the specific case k = 4, as well as a generic approximation algorithm applicable for any k >= 5 that always achieves a constant ratio. The latter makes use of the cycle graph, a well-known structure in genome rearrangements. We implemented and tested the proposed algorithms on simulated data.
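As a minimal illustration of the operation itself (not of the sorting or approximation algorithms), the sketch below applies one k-cut rearrangement to a signed permutation: the sequence is cut at the given positions, the resulting blocks are reordered, and a block may be reversed, which also flips the signs of its elements. The exact cut convention (whether the ends count as cuts) and all names are assumptions for the example.

```python
def apply_k_cut(perm, cuts, order, flips):
    """Apply a k-cut rearrangement to a signed permutation.

    perm  : list of non-zero ints, the sign encodes gene orientation
    cuts  : internal cut positions (0..len(perm)), ascending; they delimit the blocks
    order : a permutation of the block indices giving the new block order
    flips : for each reordered block, whether it is reversed (which also flips the signs)
    """
    bounds = [0] + sorted(cuts) + [len(perm)]
    blocks = [perm[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]
    result = []
    for block, rev in zip((blocks[i] for i in order), flips):
        result.extend([-g for g in reversed(block)] if rev else list(block))
    return result

# Example: a 3-cut that swaps two inner blocks and reverses one of them.
print(apply_k_cut([+1, -3, +2, +5, -4], cuts=[1, 3, 4],
                  order=[0, 2, 1, 3], flips=[False, False, True, False]))
# -> [1, 5, -2, 3, -4]
```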
This paper suggests an epistemic interpretation of Belnap's branching space-times theory based on Everett's relative state formulation of the measurement operation in quantum mechanics. The informational branching models of the universe are evolving structures defined from a partial ordering relation on the set of memory states of the impersonal observer. The totally ordered set of their information contents defines a linear "time" scale to which the decoherent alternative histories of the informational universe can be referred, which is quite necessary for assigning them a probability distribution. The "historical" state of a physical system is represented in an appropriate extended Hilbert space, and an algebra of multi-branch operators is developed. An age operator computes the informational depth of historical states, and its standard deviation can be used to provide a universal information/energy uncertainty relation. An information operator computes the encoding complexity of historical states, the rate of change of its average value accounting for the process of correlation destruction inherent in the branching dynamics. In the informational branching models of the universe, the asymmetry of phenomena in nature appears as a mere consequence of the subject's activity of measuring, which defines the flow of time-information.
We introduce a special class of Schrödinger-type operators H in $\ell^2$ defined by $(\phi, H\psi) = \sum_{n=0}^{\infty} \phi_n^{*}\left[\sqrt{R_{n+1}}\,\psi_{n+1} + \sqrt{R_n}\,\psi_{n-1}\right]$, where the $R_n$ are nonnegative real numbers. H satisfies the renormalization equation $HD = D(H^2 - \lambda)$, with $\lambda$ real, $\lambda \ge 2$. Here D is the decimation operator defined by $(\phi, D\psi) = \sum_{n=0}^{\infty} \phi_n^{*}\psi_{2n}$. A consequence of the renormalization equation is that the $R_n$ fulfil the recursion relation $R_0 = 0$, $R_{2n}R_{2n-1} = R_n$, $R_{2n} + R_{2n+1} = \lambda$. From these relations it can be shown that the $R_n$ are quasi-periodic functions of their index n. The components of the eigenfunctions of H corresponding to the eigenvalue x are the orthonormalized polynomials $P_n(x)$ satisfying $\sqrt{R_{n+1}}\,P_{n+1}(x) + \sqrt{R_n}\,P_{n-1}(x) = x P_n(x)$. The spectrum of H is the support of the measure associated with the polynomials. In the present case it is a compact perfect set of Lebesgue measure zero (a Cantor set); it is therefore purely singular continuous. We are thus led to study classes of orthogonal polynomials whose three-term recurrence relations are quasi-periodic functions of their index. We present several results, conjectures and open questions which may have relevant physical applications. We study the randomness of the eigenfunctions, and we discuss their algorithmic complexity.
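The recursion for the $R_n$ and the three-term recurrence are explicit enough to sketch directly; the code below generates the $R_n$ (using $R_1 = \lambda$, which follows from $R_0 = 0$ and $R_0 + R_1 = \lambda$) and evaluates the polynomials $P_n(x)$, purely as a numerical illustration.

```python
from math import sqrt

def r_sequence(lam, n_max):
    """R_n from R_0 = 0, R_1 = lam, R_{2n} = R_n / R_{2n-1}, R_{2n+1} = lam - R_{2n}."""
    R = [0.0, float(lam)]
    for n in range(1, n_max // 2 + 1):
        r_even = R[n] / R[2 * n - 1]
        R.append(r_even)          # R_{2n}
        R.append(lam - r_even)    # R_{2n+1}
    return R[:n_max + 1]

def polynomials(lam, x, n_max):
    """P_n(x) from sqrt(R_{n+1}) P_{n+1} = x P_n - sqrt(R_n) P_{n-1}, with P_0 = 1."""
    R = r_sequence(lam, n_max + 1)
    P = [1.0, x / sqrt(R[1])]
    for n in range(1, n_max):
        P.append((x * P[n] - sqrt(R[n]) * P[n - 1]) / sqrt(R[n + 1]))
    return P

print(r_sequence(2.2, 16))       # quasi-periodic sequence of R_n
print(polynomials(2.2, 0.3, 8))  # first few P_n evaluated at x = 0.3
```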
Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences - which have been the focus of much of the previous work on subjective randomness - but also to binary matrices and spatial clustering.
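A minimal sketch of the "randomness as statistical inference" idea: score a binary sequence by the log-likelihood ratio of a fair-coin generator against a toy "regular" generator that tends to repeat the previous symbol. The regular process and its parameter are illustrative assumptions, not the models defined in the paper.

```python
import numpy as np

def log_p_random(seq):
    """Log-probability of a binary sequence under a fair-coin generator."""
    return len(seq) * np.log(0.5)

def log_p_regular(seq, p_repeat=0.8):
    """Log-probability under a toy 'regular' process that tends to repeat the previous symbol."""
    logp = np.log(0.5)                           # the first symbol is uniform
    for prev, cur in zip(seq, seq[1:]):
        logp += np.log(p_repeat if cur == prev else 1 - p_repeat)
    return logp

def subjective_randomness(seq):
    """Evidence (log-likelihood ratio) that the sequence came from the random process."""
    return log_p_random(seq) - log_p_regular(seq)

print(subjective_randomness("11111111"))   # strongly negative: looks regular
print(subjective_randomness("10010110"))   # positive: looks more random
```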