We describe basic elements and implementation of the adaptive integral method (AIM): a fast iterative integral-equation solver applicable to large-scale electromagnetic scattering and radiation problems. Compared to the conventional method of moments, the AIM solver provides (for typical geometries) significantly reduced storage and solution time even for problems involving as few as 2,000 unknowns. This reduction is achieved through a compression of the impedance matrix, which is split into near-field and far-field components. The near-field component is computed by using the Galerkin method employing a set of N arbitrary basis functions. The far-field matrix elements are calculated by using the Galerkin method as well, with a set of N auxiliary basis functions. The auxiliary basis functions are constructed as superpositions of pointlike current elements located on uniformly spaced Cartesian grid nodes and are required to reproduce, with a prescribed accuracy, the far field generated by the original basis functions. Algebraically, the resulting near-field component of the impedance matrix is sparse, while its far-field component is a product of two sparse matrices and a three-level Toeplitz matrix. These Toeplitz properties are exploited, by using discrete fast Fourier transforms, to carry out matrix-vector multiplications with O(N^(3/2) log N) and O(N log N) serial complexities for surface and volumetric scattering problems, respectively. The corresponding storage requirements are O(N^(3/2)) and O(N). In the domain-decomposed parallelized implementation of the solver, with the number N_p of processors equal to the number of domains, the total memory required in surface problems is reduced to O(N^(3/2)/N_p^(1/2)). The speedup factor in matrix-vector multiplication is equal to the number of processors N_p. We present a detailed analysis of the errors introduced by the use of the auxiliary basis functions in computing far-field impedance matrix elements. We also discuss the algorithm complexity.
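The FFT-accelerated matrix-vector product exploited above can be illustrated on a one-level Toeplitz matrix (a minimal sketch; the AIM far-field component involves three-level block-Toeplitz matrices): the Toeplitz matrix is embedded in a circulant one, which the discrete Fourier transform diagonalizes, giving an O(N log N) multiply.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix (first column c, first row r, with
    r[0] == c[0]) by x in O(n log n): embed T in a (2n-1)-point circulant,
    whose action is a pointwise product in the Fourier domain."""
    n = len(c)
    # First column of the embedding circulant: [c_0..c_{n-1}, r_{n-1}..r_1].
    a = np.concatenate([c, r[:0:-1]])
    xp = np.concatenate([x, np.zeros(n - 1)])
    y = np.fft.ifft(np.fft.fft(a) * np.fft.fft(xp))
    return y[:n].real  # first n entries of the circulant product equal T @ x

# Check against a dense Toeplitz multiply.
rng = np.random.default_rng(0)
n = 6
c, r, x = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)
r[0] = c[0]  # consistency: shared corner entry
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
assert np.allclose(toeplitz_matvec(c, r, x), T @ x)
```

The same identity applied level by level to a three-level Toeplitz operator yields the multidimensional-FFT multiply the abstract refers to.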
This study presents a field programmable gate array (FPGA) implementation of a simple fault-tolerant control that ensures continuous operation of hysteresis current controlled AC machine drives under a faulty current sensor. The adopted control requires the use of three current sensors and applies to three-phase isolated-neutral systems. The third current sensor allows faulty-sensor detection and isolation based on analytical redundancy, which provides residuals with well-defined thresholds. The control is reconfigured in case of a faulty current measurement, and operation continues using the remaining two healthy sensors. The main interest of using FPGAs to implement such controllers is the drastic reduction of execution time delay despite the algorithm's complexity. As a result, a high sampling frequency can be used and the residual thresholds can be accurately defined even on the experimental set-up. Numerous experimental results are given to illustrate the efficiency of FPGA-based solutions in achieving efficient and reliable fault-tolerant hysteresis current control of AC machine drives.
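A rough sketch of residual-based detection and isolation follows (the residual definitions and thresholds here are illustrative assumptions, not the paper's design): in an isolated-neutral system the measured phase currents should sum to zero, and hysteresis control keeps each healthy phase close to its reference.

```python
def isolate_faulty_sensor(i_meas, i_ref, sum_threshold):
    """Illustrative sketch, not the paper's exact logic: a sum residual
    above its threshold signals a sensor fault (Kirchhoff: ia+ib+ic = 0
    in an isolated-neutral system); since hysteresis control keeps each
    healthy phase near its reference, the phase deviating most from its
    reference is flagged. `sum_threshold` is a hypothetical tuning value."""
    r_sum = abs(sum(i_meas.values()))
    if r_sum <= sum_threshold:
        return None  # residual within threshold: no sensor fault detected
    deviations = {ph: abs(i_meas[ph] - i_ref[ph]) for ph in i_meas}
    return max(deviations, key=deviations.get)

ref = {"a": 1.0, "b": -0.5, "c": -0.5}
assert isolate_faulty_sensor(ref, ref, sum_threshold=0.2) is None
faulty = {"a": 3.0, "b": -0.5, "c": -0.5}  # sensor "a" stuck high
assert isolate_faulty_sensor(faulty, ref, sum_threshold=0.2) == "a"
```

On the FPGA this check runs every switching period, which is what makes the high sampling frequency and tight thresholds reported in the paper feasible.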
Currently, Bayesian Networks (BNs) have become one of the most complete, self-contained and coherent formalisms used for knowledge acquisition, representation and application in computer systems. However, learning BN structures from data has been shown to be an NP-hard problem, and it has turned out to be one of the most exciting challenges in machine learning. In this context, the main objective of the present work is to propose a further solution to the high algorithmic complexity incurred when learning BN structures from massive data sets. (C) 2015 Elsevier B.V. All rights reserved.
To represent concurrent behaviours one can use concepts originating from language theory, including traces and comtraces. Traces can express notions such as concurrency and causality, whereas comtraces can also capture weak causality and simultaneity. This paper is concerned with the development of efficient data structures and algorithms for manipulating comtraces. We introduce and investigate folded Hasse diagrams of comtraces which generalise Hasse diagrams defined for partial orders and traces. We also develop an efficient on-line algorithm for deriving Hasse diagrams from language theoretic representations of comtraces. Finally, we briefly discuss how folded Hasse diagrams could be used to implement efficiently some basic operations on comtraces. (C) 2013 Elsevier B.V. All rights reserved.
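For the classical partial-order case that folded Hasse diagrams generalise, the Hasse diagram is the transitive reduction of the order. A brute-force sketch (the paper's on-line algorithm is far more efficient, but this shows what is being computed):

```python
def hasse_edges(elements, leq):
    """Covering relation (Hasse diagram edges) of a finite partial order:
    keep x < y only when no z lies strictly between them, i.e. drop every
    transitively implied comparability."""
    edges = set()
    for x in elements:
        for y in elements:
            if x != y and leq(x, y) and not any(
                    z not in (x, y) and leq(x, z) and leq(z, y)
                    for z in elements):
                edges.add((x, y))
    return edges

# Divisibility order on {1, 2, 3, 4, 6, 12}:
divides = lambda x, y: y % x == 0
assert hasse_edges([1, 2, 3, 4, 6, 12], divides) == {
    (1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)}
```

Comtraces add weak causality and simultaneity on top of this ordering information, which is what the folding in the paper's diagrams accounts for.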
We analyze the problem of computing the minimum number er(C) of internal simplexes that need to be removed from a simplicial 2-complex C so that the remaining complex can be nulled by deleting a sequence of external simplexes. We show that the decision version of this problem is NP-complete even when C is embeddable in 3-dimensional space. Since the Betti numbers of C can be computed in polynomial time, this implies that there is no polynomial time computable formula for er(C) in terms of the Betti numbers of the complex, unless P = NP. The problem can be solved in linear time for 1-complexes (graphs). Our reduction can also be used to show that the corresponding approximation problem is at least as difficult as the one for the minimum cardinality vertex cover, and what is worse, as difficult as the minimum set cover problem. Thus simple heuristics may generate solutions that are arbitrarily far from optimal.
We study a decision problem that emerges from the area of spatial reasoning. It concerns the description of polylines in the plane by means of their double-cross matrix. In such a matrix, the relative position of each pair of line segments in a polyline is expressed by means of a 4-tuple over {-, 0, +}. However, not every such matrix of 4-tuples is the double-cross matrix of a polyline. This gives rise to the decision problem: given a matrix of such 4-tuples, decide whether it is the double-cross matrix of a polyline. This problem is decidable, but NP-hard. In this paper, we give polynomial-time algorithms for the cases where consecutive line segments in a polyline make angles that are multiples of 90 degrees or 45 degrees, and for the case where, apart from the input matrix, the successive angles of the polyline are also given as input. (C) 2016 Elsevier Inc. All rights reserved.
A classical theorem of Gallai states that in every graph that is critical for k-colorings, the vertices of degree k-1 induce a tree-like graph whose blocks are either complete graphs or cycles of odd length. We provide a generalization to colorings and list colorings of digraphs, where some new phenomena arise. In particular, the problem of list coloring digraphs where the list at each vertex v has min{d^+(v), d^-(v)} colors turns out to be NP-hard.
For a mixed hypergraph H = (X, C, D), where C and D are set systems over the vertex set X, a coloring is a partition of X into color classes such that every C in C meets some class in more than one vertex, and every D in D has a nonempty intersection with at least two classes. A vertex order x_1, x_2, ..., x_n on X (n = |X|) is uniquely colorable if the subhypergraph induced by {x_j : 1 <= j <= i} has precisely one coloring, for each i (1 <= i <= n). We prove that it is NP-complete to decide whether a mixed hypergraph admits a uniquely colorable vertex order, even if the input is restricted to have just one coloring. On the other hand, via a characterization theorem it can be decided in linear time whether a given color sequence belongs to a mixed hypergraph in which the uniquely colorable vertex order is unique. (c) 2007 Elsevier B.V. All rights reserved.
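The coloring condition above translates directly into code; a small checker following the abstract's definition (the edge and class representations are our own choice):

```python
def is_proper_coloring(c_edges, d_edges, classes):
    """Mixed-hypergraph colouring check: every C-edge must meet some
    colour class in more than one vertex, and every D-edge must intersect
    at least two classes. `classes` is the partition of the vertex set,
    given as a list of disjoint sets."""
    c_ok = all(any(len(e & cls) > 1 for cls in classes) for e in c_edges)
    d_ok = all(sum(1 for cls in classes if e & cls) >= 2 for e in d_edges)
    return c_ok and d_ok

# C-edge {1,2,3} needs two same-coloured vertices; D-edge {1,2} needs
# two colours among its vertices.
assert is_proper_coloring([{1, 2, 3}], [{1, 2}], [{1, 3}, {2}])
assert not is_proper_coloring([{1, 2, 3}], [{1, 2}], [{1, 2}, {3}])
```

Unique colorability of a vertex order then amounts to each induced prefix subhypergraph admitting exactly one such partition.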
We describe an optimized algorithm, faster and more accurate than previously described algorithms, for computing the statistical mechanics of denaturation of nucleic acid sequences according to the classical Poland-Scheraga type of model. Nearest-neighbor thermodynamics has been included in a complete and general way, by rigorously treating nearest-neighbor interactions, helix end interactions, and isolated base pairs. This avoids the simplifications of previous approaches and achieves full generality and controllability with respect to thermodynamic modeling. The algorithm computes subchain partition functions by recursion, from which various quantitative aspects of the melting process are easily derived, for example the base-pairing probability profiles. The algorithm represents an optimization with respect to algorithmic complexity of the partition function algorithm of Yeramian et al. (Biopolymers 1990, 30, 481-497): we reduce the computation time for a base-pairing probability profile from O(N^2) to O(N), where N is the sequence length. This speed-up comes in addition to the speed-up due to a multiexponential approximation of the loop entropy factor, as introduced by Fixman and Freire [22] and applied by Yeramian et al. [25]. The speed-up, however, is independent of the multiexponential approximation and reduces the time from O(N^3) to O(N^2) in the exact case. A method for representing very large numbers is described, which avoids numerical overflow in the partition functions for genomic-length sequences. In addition to calculating the standard base-pairing probability profiles, we propose to use the algorithm to calculate various other probabilities (loops, helices, tails) for a more direct view of the melting regions and their positions and sizes. This can provide a better understanding of the physics of denaturation and the biology of genomes. (C) 2003 Wiley Periodicals, Inc.
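One standard remedy for the overflow problem mentioned above (an illustrative choice, not necessarily the authors' exact large-number representation) is to keep partition-function terms in log space and combine sums with the log-sum-exp trick:

```python
import math

def logsumexp(log_terms):
    """Combine partition-function terms in log space: factoring out the
    largest term keeps every exponential in [0, 1], so values like
    exp(1000), which overflow a double, remain representable via their
    logarithm."""
    m = max(log_terms)
    return m + math.log(sum(math.exp(t - m) for t in log_terms))

# Three Boltzmann weights around exp(1000): the sum overflows directly,
# but its logarithm is computed without trouble.
lz = logsumexp([1000.0, 1000.0, 999.0])
assert abs(lz - (1000.0 + math.log(2.0 + math.exp(-1.0)))) < 1e-9
```

A subchain partition-function recursion can then accumulate all of its sums through such a routine, so genomic-length sequences never leave floating-point range.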
We introduce the concept of accumulative depth of a node in a computer network. By utilising this concept as a criterion in the assignment of weights to the nodes of the network, we develop an efficient algorithm (NOBF) that calculates a near-optimal broadcast (one with near-minimal transmission delay) in arbitrary point-to-point computer networks. Both static and dynamic assignments of weights are considered. We analyse the efficiency of the algorithm based on accumulative depth as a function of the transmission delay and the algorithmic complexity.