This paper presents results of a study of the fundamentals of sorting. Emphasis is placed on understanding sorting and on minimizing the time required to sort with electronic equipment of reasonable cost. Sorting is viewed as a combination of information-gathering and item-moving activities. Shannon's communication-theory measure of information is applied to assess the difficulty of various sorting problems. Bounds on the number of comparisons required to sort are developed, and optimal or near-optimal sorting schemes are described and investigated. Three abstract sorting models based on cyclic, linear, and random-access memories are defined. Optimal or near-optimal sorting methods are developed for the models and their parallel-register extensions. A brief review of the origin of the work and some of its hypotheses is also presented.
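The information-theoretic argument this abstract applies can be made concrete: each comparison extracts at most one bit, and distinguishing the n! possible input orderings requires ceil(log2 n!) bits. A minimal sketch (illustrative, not code from the paper):

```python
import math

def comparison_lower_bound(n):
    """Any comparison sort of n distinct items must be able to distinguish
    all n! orderings; each comparison yields at most one bit, so at least
    ceil(log2(n!)) comparisons are needed in the worst case."""
    return math.ceil(math.log2(math.factorial(n)))

print(comparison_lower_bound(4))   # → 5 (attainable: sorting 4 items needs 5 comparisons)
```

The bound grows as n log2 n - O(n) by Stirling's approximation, which is why n log n comparison sorts are asymptotically optimal.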
For several computational problems in homotopy theory, we obtain algorithms with running time polynomial in the input size. In particular, for every fixed k ≥ 2, there is a polynomial-time algorithm that, for a 1-connected topological space X given as a finite simplicial complex, or more generally, as a simplicial set with polynomial-time homology, computes the kth homotopy group π_k(X), as well as the first k stages of a Postnikov system of X. Combined with results of an earlier paper, this yields a polynomial-time computation of [X, Y], i.e., all homotopy classes of continuous mappings X → Y, under the assumption that Y is (k - 1)-connected and dim X ≤ 2k - 2. We also obtain a polynomial-time solution of the extension problem, where the input consists of finite simplicial complexes X and Y, where Y is (k - 1)-connected and dim X ≤ 2k - 1, plus a subspace A ⊆ X and a (simplicial) map f: A → Y, and the question is the extendability of f to all of X. The algorithms are based on the notion of a simplicial set with polynomial-time homology, which is an enhancement of the notion of a simplicial set with effective homology developed earlier by Sergeraert and his coworkers. Our polynomial-time algorithms are obtained by showing that simplicial sets with polynomial-time homology are closed under various operations, most notably Cartesian products, twisted Cartesian products, and classifying space. One of the key components is also polynomial-time homology for the Eilenberg-MacLane space K(Z, 1), provided in another recent paper by Krčál, Matoušek, and Sergeraert.
A version of the Arimoto-Blahut algorithm for continuous channels involves evaluating integrals over the entire input space and is therefore not tractable. Two generalized discrete versions of the Arimoto-Blahut algorithm are presented for this purpose. Instead of calculating integrals, both algorithms require only the computation of a sequence of finite sums, which significantly reduces the numerical computational complexity.
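For the discrete case, the classical Arimoto-Blahut iteration already involves only finite sums; a compact numpy sketch of that baseline (illustrative, not the paper's generalized versions):

```python
import numpy as np

def _kl_rows(P, q):
    # per-row relative entropy: sum_y P[x,y] * ln(P[x,y]/q[y]), with 0*ln(0) = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(P > 0, P * np.log(P / q), 0.0)
    return t.sum(axis=1)

def arimoto_blahut(P, iters=200):
    """Capacity (in bits) of a discrete memoryless channel.
    P[x, y] = p(y | x); each row of P sums to 1."""
    r = np.full(P.shape[0], 1.0 / P.shape[0])   # input distribution, start uniform
    for _ in range(iters):
        q = r @ P                               # induced output distribution
        r = r * np.exp(_kl_rows(P, q))          # multiplicative Arimoto-Blahut update
        r /= r.sum()
    return float(r @ _kl_rows(P, r @ P)) / np.log(2)

# binary symmetric channel, crossover 0.1: capacity = 1 - H(0.1) ≈ 0.531 bits
bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
print(round(arimoto_blahut(bsc), 3))            # → 0.531
```

Every step is a finite sum over the input and output alphabets, which is the property the generalized versions extend to continuous channels.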
Synthesis of sparse planar arrays can effectively reduce hardware costs and computational complexity in phased-array 3-D imaging sonar systems. Traditional stochastic methods, such as simulated annealing, require multiple experiments and parameter adjustments to obtain optimal results. Methods based on compressed sensing (CS) can overcome this drawback. However, when applied to large arrays, CS methods incur vast computational cost and may not obtain optimal sparse results because the restricted isometry property is violated. To make CS methods more practical, a distributed convex-optimization CS method is proposed here for sparse planar array synthesis in 3-D imaging sonar systems. The method is based on CS theory, solving for the minimum number of active elements under certain beam-pattern constraints using the iterative reweighted ℓ1-norm minimization algorithm. A multistage distributed framework is then proposed to decompose the array into multistage subarrays, and array synthesis is performed sequentially for each stage's subarray to reduce computational complexity and obtain higher sparsity rates. Several applications of sparse planar array synthesis are employed to evaluate the efficiency of the proposed method.
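The core iteration named here, iteratively reweighted ℓ1-norm minimization, can be sketched as follows. The beam-pattern constraints of the actual synthesis problem are replaced by toy equality constraints A x = b, and scipy's LP solver stands in for the paper's convex-optimization machinery; all names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, b, iters=5, eps=1e-3):
    """Iteratively reweighted l1 minimisation: repeatedly solve
    min sum_i w_i*|x_i| subject to A x = b, then set w_i = 1/(|x_i| + eps),
    so entries that are already small get pushed towards exact zeros."""
    n = A.shape[1]
    w, x = np.ones(n), np.zeros(n)
    for _ in range(iters):
        # split x = xp - xm with xp, xm >= 0 -> a linear program in 2n variables
        res = linprog(np.concatenate([w, w]),
                      A_eq=np.hstack([A, -A]), b_eq=b,
                      bounds=(0, None), method="highs")
        x = res.x[:n] - res.x[n:]
        w = 1.0 / (np.abs(x) + eps)
    return x

# toy example: measurements of a 2-sparse vector through a random matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 12))
x_true = np.zeros(12); x_true[[2, 7]] = [1.0, -2.0]
x_hat = reweighted_l1(A, A @ x_true)          # sparse solution satisfying A x = b
```

In the array-synthesis setting, zero entries of the solution correspond to elements that can be removed from the aperture.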
The authors propose a novel model called the concatenated recursive compressor-decompressor network (CRCDNet) for contrast-enhanced super-resolution. The characteristics of the model can be summarised as follows. First, a compression-decompression process reduces the computational complexity compared with a general fully convolutional model. Second, internal/external skip connections are used to preserve information from the preceding layers. Finally, by employing a recursive module, the model has a small number of parameters, yet is a deep and robust network. The authors apply the proposed network to license plate images. As a real application, license plates can provide important evidence for crime investigation and security, but it is very difficult to collect the vast number of license plate images required for analysis with a data-driven approach. To solve this problem, the authors generated virtual datasets to train the model, while analysing its performance on real license plate datasets. The method achieves better performance than state-of-the-art models on license plate images.
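As a toy illustration (plain numpy on vectors, not the actual convolutional CRCDNet) of the two ideas named above, compression-decompression with a skip connection and recursive weight reuse:

```python
import numpy as np

def cd_block(x, W_down, W_up):
    """One compressor-decompressor stage with an internal skip connection."""
    h = np.maximum(W_down @ x, 0.0)   # compress to a narrower feature + ReLU
    y = W_up @ h                      # decompress back to the input width
    return y + x                      # skip connection preserves earlier information

def recursive_model(x, W_down, W_up, depth=3):
    """Recursion: the same two weight matrices are reused `depth` times,
    so the network is deep yet has only as many parameters as one stage."""
    for _ in range(depth):
        x = cd_block(x, W_down, W_up)
    return x
```

The narrow hidden width is what cuts the per-layer cost relative to a fully convolutional model of the same depth, and the residual additions keep early-layer information available throughout the recursion.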
We study fairness in a multicast network. We assume that different receivers of the same session can receive information at different rates. We study fair allocation of utilities, where the utility of a bandwidth is an arbitrary function of the bandwidth. In general, the utility function is neither strictly increasing nor continuous. We discuss fairness issues in this general context. Fair allocation of utilities can be modeled as a nonlinear optimization problem; however, nonlinear optimization techniques do not, in general, terminate in a finite number of iterations. We present an algorithm for computing a fair utility allocation. Using specific fairness properties, we show that this algorithm attains global convergence and yields a fair allocation in a polynomial number of iterations.
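For the special case of identity utilities, the classical progressive-filling procedure gives the flavour of max-min fair allocation; a small sketch (illustrative, not the paper's utility-based algorithm, and all names are made up):

```python
def max_min_fair(capacity, sessions):
    """Progressive filling: raise all active session rates equally until some
    link saturates, freeze the sessions crossing it, and repeat.
    capacity: dict link -> capacity; sessions: list of sets of links used."""
    rate = [0.0] * len(sessions)
    residual = dict(capacity)
    active = set(range(len(sessions)))
    while active:
        usage = {l: sum(1 for i in active if l in sessions[i]) for l in residual}
        # largest equal increment before some link used by an active session fills
        inc = min(residual[l] / usage[l] for l in usage if usage[l] > 0)
        for i in active:
            rate[i] += inc
        for l in usage:
            residual[l] -= inc * usage[l]
        saturated = {l for l in usage if usage[l] > 0 and residual[l] <= 1e-12}
        active = {i for i in active if not (sessions[i] & saturated)}
    return rate

# link A has capacity 10, link B capacity 4; session 1 fills what B-limited flows leave
print(max_min_fair({"A": 10, "B": 4}, [{"A"}, {"A", "B"}, {"B"}]))  # → [8.0, 2.0, 2.0]
```

The discontinuous, non-monotone utilities considered in the paper break the premise of this procedure (an equal rate increment need not yield equal utility increments), which is what motivates the dedicated algorithm.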
The development of cyber-physical power systems raises concerns about the data quality of phasor measurement units (PMUs). Low signal-to-noise ratios (SNRs) and data losses caused by malicious electromagnetic interference, false data injections, and equipment malfunctioning may jeopardize the data integrity and availability necessary for power system monitoring, protection, and control. To ensure grid resiliency, this paper proposes a robust fast PMU measurement recovery (RFMR) algorithm based on improved singular spectrum analysis (SSA) of Hankel structures. It utilizes single or multiple channels of PMU time series to restore problematic phasor measurements affected by low-SNR noise and data losses. Additionally, the traditional singular value decomposition (SVD) and Tucker decomposition (TD) in RFMR are replaced by randomized SVD (RSVD) and sequential TD (STD) to reduce the computational complexity of single-channel and multi-channel RFMR, respectively. Numerical case studies demonstrate that the proposed algorithm can recover noise-contaminated measurements with higher accuracy than existing methods, such as matrix/tensor decomposition approaches and robust principal component analysis (RPCA), and can effectively complement missing data when the observed measurements are corrupted by low SNRs. Moreover, the latency margins of various power system synchrophasor application scenarios can be satisfied thanks to the reduced computational complexity.
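The SSA building block referred to here, Hankel embedding followed by truncated SVD and anti-diagonal averaging, can be sketched in a few lines (plain SVD for clarity; the paper swaps in randomized SVD for speed):

```python
import numpy as np

def ssa_denoise(x, window, rank):
    """Basic single-channel SSA: embed the signal in a Hankel (trajectory)
    matrix, truncate its SVD to `rank`, and average anti-diagonals to map
    the low-rank matrix back to a time series."""
    n, k = len(x), len(x) - window + 1
    H = np.column_stack([x[j:j + window] for j in range(k)])  # H[i, j] = x[i + j]
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                 # rank-r approximation
    out, cnt = np.zeros(n), np.zeros(n)
    for j in range(k):                      # Hankelisation: average all entries
        out[j:j + window] += Hr[:, j]       # of Hr that share the same i + j
        cnt[j:j + window] += 1
    return out / cnt

# a noisy sinusoid is well captured by a rank-2 trajectory matrix
rng = np.random.default_rng(1)
t = np.arange(200)
clean = np.sin(2 * np.pi * t / 20)
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = ssa_denoise(noisy, window=50, rank=2)
```

Because a sampled sinusoid spans only two singular components of the trajectory matrix, the rank-2 truncation keeps the oscillation and discards most of the broadband noise.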
We show a rough equivalence between alternating time-space complexity and a public-coin interactive proof system in which the verifier has a polynomially related time-space complexity. Special cases include the following: all of NC has interactive proofs with a log-space polynomial-time public-coin verifier, vastly improving the best previous lower bound of LOGCFL for this model (Fortnow and Sipser, 1988); all languages in P have interactive proofs with a polynomial-time public-coin verifier using o(log^2 n) space; and all exponential-time languages have interactive proof systems with public-coin polynomial-space exponential-time verifiers. To achieve better bounds, we show how to reduce a k-tape alternating Turing machine to a 1-tape alternating Turing machine with only a constant-factor increase in time and space.
Although zero-forcing (ZF) detection is well known for its low computational complexity in multiple-input multiple-output (MIMO) communication systems, it suffers from significantly poor performance. The sphere decoder (SD) method, on the other hand, achieves the maximum-likelihood (ML) performance yet imposes a high computational complexity. We propose a low-complexity detection scheme, concatenated with the SD method, which verifies the reliability of the ZF-equalized observations via predefined regions and thresholds obtained from the channel realization. We design the threshold analytically, such that the method achieves the ML performance. With the designed threshold, we prove that, at high signal-to-noise ratio (SNR), the method achieves the ML performance and the ZF computational complexity at the same time with probability one. The theoretical analysis is corroborated with numerical simulations. The simulation results also show that the proposed method approaches the ML performance very rapidly as the SNR increases.
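The two-stage idea can be sketched as follows; exhaustive ML search stands in for the sphere decoder, and the single scalar threshold is a placeholder for the paper's channel-dependent regions:

```python
import numpy as np
from itertools import product

def detect(H, y, constellation, threshold):
    """Two-stage detection sketch: accept the zero-forcing (ZF) decision when
    every equalised symbol lies within `threshold` of a constellation point;
    otherwise fall back to exhaustive ML search (what the sphere decoder
    computes efficiently)."""
    z = np.linalg.pinv(H) @ y                                 # ZF equalisation
    zf = np.array([min(constellation, key=lambda c: abs(v - c)) for v in z])
    if np.max(np.abs(z - zf)) <= threshold:
        return zf                                             # declared reliable
    best = min(product(constellation, repeat=H.shape[1]),     # exhaustive ML
               key=lambda s: np.linalg.norm(y - H @ np.array(s)))
    return np.array(best)

# BPSK over a well-conditioned channel: the cheap ZF branch already decides
H = np.eye(2)
print(detect(H, np.array([0.9, -1.1]), (-1.0, 1.0), threshold=0.2))
```

At high SNR the equalised observations land inside the reliability regions with probability approaching one, so the expensive branch is almost never taken, which is the complexity claim of the abstract.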
Memorization is a technique that speeds up exponential recursive algorithms at the cost of an exponential space complexity. This technique already yields the currently fastest algorithm for fixed-parameter vertex cover, whose time complexity is O(1.2832^k k^1.5 + kn), where n is the number of nodes and k is the size of the vertex cover. Via a refined use of memorization, we obtain an O(1.2759^k k^1.5 + kn) algorithm for the same problem. We moreover show how to further reduce the complexity to O(1.2745^k k^4 + kn). © 2004 Elsevier B.V. All rights reserved.
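The memorization idea, caching solutions of repeated subproblems that arise during branching, can be illustrated on vertex cover itself (a simple edge-branching sketch, nowhere near the paper's refined bounds):

```python
from functools import lru_cache

def min_vertex_cover(edges):
    """Edge branching with memorization: for any uncovered edge (u, v), a
    cover must contain u or v. Results are cached on the remaining edge set,
    so subproblems reached along different branching paths are solved once
    (trading exponential space for time, as in the abstract)."""
    @lru_cache(maxsize=None)
    def solve(es):
        if not es:
            return 0
        u, v = next(iter(es))
        without_u = frozenset(e for e in es if u not in e)  # put u in the cover
        without_v = frozenset(e for e in es if v not in e)  # put v in the cover
        return 1 + min(solve(without_u), solve(without_v))
    return solve(frozenset(map(tuple, edges)))

print(min_vertex_cover([(1, 2), (2, 3), (1, 3)]))   # → 2 (a triangle needs two vertices)
```

The fixed-parameter algorithms in the abstract branch much more cleverly and memorize only canonical small subgraphs, but the cache-the-subproblem mechanism is the same.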