The authors propose a novel model called the concatenated recursive compressor-decompressor network (CRCDNet) for contrast-enhanced super-resolution. The characteristics of the model can be summarised as follows. First, a compression-decompression process reduces the computational complexity compared with a general fully convolutional model. Second, internal/external skip connections preserve information from the preceding layers. Finally, by employing a recursive module, the model has a small number of parameters yet remains a deep and robust network. The authors apply the proposed network to license plate images. As a real application, license plates can provide important evidence in crime investigation and security, but it is very difficult to collect the vast number of license plate images required for a data-driven analysis. To solve this problem, the authors generated virtual datasets to train the model, while analysing its performance on real license plate datasets. The proposed method achieves better performance than state-of-the-art models on license plate images.
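The recursive-module idea in the abstract (depth without extra parameters, plus an internal skip connection) can be illustrated with a toy sketch. This is not the authors' CRCDNet: `conv1d` and `recursive_block` are hypothetical 1-D numpy stand-ins for the network's shared-weight convolutions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """'Same'-padded 1-D convolution, standing in for the 2-D
    convolutions a real super-resolution network would use."""
    pad = len(w) // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + len(w)] @ w for i in range(len(x))])

def recursive_block(x, w, repeats):
    """Apply the SAME weights `repeats` times, then add an internal
    skip connection: depth grows with `repeats`, parameters do not."""
    h = x
    for _ in range(repeats):
        h = np.maximum(conv1d(h, w), 0.0)   # shared-weight conv + ReLU
    return h + x

x = rng.standard_normal(16)
w = rng.standard_normal(3) * 0.1            # 3 parameters total
shallow = recursive_block(x, w, repeats=1)
deep = recursive_block(x, w, repeats=8)     # 8x the depth, same 3 weights
print(deep.shape)
```

The point of the sketch is only the parameter accounting: reusing one weight set across repetitions is what lets a recursive network be deep while staying small.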
It is known that any chordal graph can be uniquely decomposed into simplicial components. Based on this fact, it is shown that for a given chordal graph, its automorphism group can be computed in O((c! * n)^O(1)) time, where c denotes the maximum size of the simplicial components and n denotes the number of nodes. It is also shown that isomorphism of such chordal graphs can be decided within the same time bound. From the viewpoint of polynomial-time computability, our result strictly strengthens the previous ones with respect to the clique number.
The development of cyber-physical power systems raises concerns about the data quality issue of phasor measurement units (PMUs). Low signal-to-noise ratios (SNRs) and data losses caused by malicious electromagnetic interference, false data injections, and equipment malfunctioning may jeopardize the data integrity and availability necessary for power system monitoring, protection, and control. To ensure grid resiliency, this paper proposes a robust fast PMU measurement recovery (RFMR) algorithm based on improved singular spectrum analysis (SSA) of Hankel structures. It utilizes single or multiple channels of PMU time-series to restore the problematic phasor measurements with low-SNR noises and data losses. Additionally, the traditional singular value decomposition (SVD) and Tucker decomposition (TD) in RFMR are replaced by randomized SVD (RSVD) and sequential TD (STD) to reduce the computational complexity in single-channel and multi-channel RFMR, respectively. Numerical case studies demonstrate that the proposed algorithm can recover the noise-contaminated measurements with higher accuracy than existing methods, such as matrix/tensor decomposition approaches and robust principal component analysis (RPCA), and effectively complement the missing data with the observed measurements corrupted by low SNRs. Moreover, the latency margins of various power system synchrophasor application scenarios can be satisfied with the reduced computational complexity.
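The single-channel recovery idea (Hankel trajectory matrix, low-rank approximation via randomized SVD, anti-diagonal averaging) can be sketched as follows. This is an illustrative toy, not the authors' RFMR: `hankelize`, `dehankelize`, and `rsvd_denoise` are hypothetical names, and a basic randomized range finder stands in for a production RSVD.

```python
import numpy as np

def hankelize(x, L):
    """Stack windows of x into an L x (N - L + 1) Hankel trajectory matrix."""
    N = len(x)
    return np.array([x[i:i + N - L + 1] for i in range(L)])

def dehankelize(H):
    """Average the anti-diagonals to map a matrix back to a time series."""
    L, K = H.shape
    out = np.zeros(L + K - 1)
    cnt = np.zeros(L + K - 1)
    for i in range(L):
        out[i:i + K] += H[i]
        cnt[i:i + K] += 1
    return out / cnt

def rsvd_denoise(x, L=20, rank=3, oversample=5, seed=0):
    """One SSA pass: Hankelize, rank-truncate via randomized SVD, de-Hankelize."""
    rng = np.random.default_rng(seed)
    H = hankelize(x, L)
    Omega = rng.standard_normal((H.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(H @ Omega)        # randomized range finder
    Ub, s, Vt = np.linalg.svd(Q.T @ H, full_matrices=False)
    U = Q @ Ub
    H_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return dehankelize(H_lr)

# synthetic "phasor magnitude" channel: slow oscillation plus measurement noise
t = np.linspace(0.0, 1.0, 200)
clean = 1.0 + 0.05 * np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.02 * np.random.default_rng(1).standard_normal(t.size)
rec = rsvd_denoise(noisy)
rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(rmse(rec, clean) < rmse(noisy, clean))
```

The randomized step is the complexity saving the abstract refers to: the SVD is computed on a small projected matrix instead of the full trajectory matrix.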
We show a rough equivalence between alternating time-space complexity and a public-coin interactive proof system in which the verifier has a polynomially related time-space complexity. Special cases include the following: All of NC has interactive proofs with a log-space polynomial-time public-coin verifier, vastly improving the best previous lower bound of LOGCFL for this model (Fortnow and Sipser, 1988). All languages in P have interactive proofs with a polynomial-time public-coin verifier using o(log² n) space. All exponential-time languages have interactive proof systems with public-coin polynomial-space exponential-time verifiers. To achieve better bounds, we show how to reduce a k-tape alternating Turing machine to a 1-tape alternating Turing machine with only a constant-factor increase in time and space.
This paper presents results of a study of the fundamentals of sorting. Emphasis is placed on understanding sorting and on minimizing the time required to sort with electronic equipment of reasonable cost. Sorting is viewed as a combination of information gathering and item moving activities. Shannon's communication theory measure of information is applied to assess the difficulty of various sorting problems. Bounds on the number of comparisons required to sort are developed, and optimal or near-optimal sorting schemes are described and investigated. Three abstract sorting models based on cyclic, linear, and random-access memories are defined. Optimal or near-optimal sorting methods are developed for the models and their parallel-register extensions. A brief review of the origin of the work and some of its hypotheses is also presented.
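The information-theoretic bound on comparisons can be made concrete: a yes/no comparison yields at most one bit, and a comparison sort must distinguish all n! input orderings, so at least ceil(log2(n!)) comparisons are needed in the worst case. A small sketch:

```python
import math

def comparison_lower_bound(n):
    # a comparison sort must distinguish n! orderings; each comparison
    # yields at most one bit, so ceil(log2(n!)) comparisons are required
    return math.ceil(math.log2(math.factorial(n)))

for n in (4, 10, 100):
    print(n, comparison_lower_bound(n))  # 4 -> 5, 10 -> 22, 100 -> 525
```

By Stirling's approximation this bound grows as n log2 n, which is why O(n log n) comparison sorts are asymptotically optimal.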
For several computational problems in homotopy theory, we obtain algorithms with running time polynomial in the input size. In particular, for every fixed k >= 2, there is a polynomial-time algorithm that, for a 1-connected topological space X given as a finite simplicial complex, or more generally, as a simplicial set with polynomial-time homology, computes the kth homotopy group pi_k(X), as well as the first k stages of a Postnikov system of X. Combined with results of an earlier paper, this yields a polynomial-time computation of [X, Y], i.e., of all homotopy classes of continuous mappings X -> Y, under the assumption that Y is (k - 1)-connected and dim X <= 2k - 2. We also obtain a polynomial-time solution of the extension problem, where the input consists of finite simplicial complexes X and Y, where Y is (k - 1)-connected and dim X <= 2k - 1, plus a subspace A subset of X and a (simplicial) map f : A -> Y, and the question is the extendability of f to all of X. The algorithms are based on the notion of a simplicial set with polynomial-time homology, which is an enhancement of the notion of a simplicial set with effective homology developed earlier by Sergeraert and his coworkers. Our polynomial-time algorithms are obtained by showing that simplicial sets with polynomial-time homology are closed under various operations, most notably Cartesian products, twisted Cartesian products, and classifying spaces. One of the key components is also polynomial-time homology for the Eilenberg-MacLane space K(Z, 1), provided in another recent paper by Krcal, Matousek, and Sergeraert.
The mismatch between the clutter-and-noise power of the prior knowledge and the true interference covariance matrix significantly degrades the performance of the fast maximum likelihood with assumed clutter covariance (FMLACC) algorithm. By introducing a scale parameter to flexibly adjust the prior power, the authors propose an algorithm that is more robust to the power mismatch than the FMLACC algorithm. They also develop a more straightforward method to derive the maximum likelihood covariance matrix estimator under this scaled knowledge constraint. Moreover, they study the problem of automatically determining the scale parameter. The authors provide two parameter-selection methods: the first is based on estimating the minimum eigenvalue of the prewhitened sample covariance matrix, and the second is based on cross-validation. To reduce the computational complexity, they also develop fast implementations for the parameter selection based on cross-validation. Numerical simulations demonstrate the performance enhancement of the proposed algorithm over the FMLACC algorithm in cases of mismatched prior knowledge.
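One way to read the first parameter-selection idea can be sketched numerically. This is illustrative only, not the authors' estimator: the prior shape here is diagonal, `alpha_true` and all other names are hypothetical, and the point is just that if the data covariance is roughly a scalar multiple of the prior, the eigenvalues of the prior-prewhitened sample covariance cluster around that scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 6, 500                                # dimension, number of snapshots

# hypothetical prior: correct covariance *shape*, mismatched power
R_prior = np.diag(np.linspace(1.0, 3.0, n))
alpha_true = 2.5                             # unknown power-mismatch scale
R_true = alpha_true * R_prior

# sample covariance matrix from K Gaussian snapshots
X = rng.multivariate_normal(np.zeros(n), R_true, size=K)
S = X.T @ X / K

# prewhiten the SCM with the prior; the eigenvalues then cluster around
# the scale, and the minimum eigenvalue gives a conservative
# (downward-biased) estimate of it
W = np.diag(1.0 / np.sqrt(np.diag(R_prior)))  # R_prior^(-1/2), diagonal here
alpha_hat = np.linalg.eigvalsh(W @ S @ W).min()
print(round(alpha_hat, 2))
```

Taking the minimum eigenvalue rather than the mean trades a small downward bias for robustness: it avoids overstating the prior power when some eigenvalues are inflated by unmodelled interference.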
We present a new hardness-of-approximation result for the Shortest Vector Problem in the l_p norm (denoted SVP_p). Assuming NP not subset of ZPP, we show that for every epsilon > 0, there is a constant p(epsilon) such that for all integers p >= p(epsilon), the problem SVP_p has no polynomial-time approximation algorithm with approximation ratio p^(1-epsilon). (c) 2005 Elsevier Inc. All rights reserved.
In the data-accumulating paradigm, inputs arrive continuously in real time, and the computation terminates when all the already received data have been processed before another datum arrives. Previous research states that a constant upper bound on the running time of a successful algorithm within this paradigm exists only for particular forms of the data arrival law. This contradicts our recent conjecture that the problems solvable in real time are included in the class of logarithmic space-bounded computations. However, we prove that such an upper bound does in fact exist in both the parallel and sequential cases and for any polynomial arrival law, thus strengthening the conjecture. Then, we analyze an example of a non-continuous data arrival law. We find similar properties for the sorting algorithm under such a law, namely the existence of an upper bound on the running time, suggesting that these properties do not depend on the form of the arrival law. (C) 2003 Elsevier Science B.V. All rights reserved.
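The termination condition of this paradigm can be sketched numerically. The concrete forms here are assumed for illustration only: an arrival law N(t) = n0 + k*t**p and a linear processing rate of r items per time unit.

```python
def termination_time(n0, k, p, r, t_max=10_000.0, dt=0.01):
    """First time t at which the r*t items processed catch up with the
    n0 + k*t**p items that have arrived, or None if they never do."""
    t = 0.0
    while t <= t_max:
        if r * t >= n0 + k * t ** p:
            return t
        t += dt
    return None

# sublinear polynomial arrival on top of an initial batch of 100 items,
# processed at 10 items per time unit: termination at a finite, bounded time
print(termination_time(100, 1.0, 0.5, 10))
```

When the processing rate grows faster than the arrival law, the catch-up time is finite; the abstract's claim concerns bounding such times uniformly for polynomial arrival laws.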
The conventional capacitor voltage balancing strategy for the modular multi-level converter (MMC) requires real-time sorting in each control cycle, which places a heavy computational load on the controller. Therefore, a fast capacitor voltage balancing method for the MMC is proposed. Building on the conventional strategy, this method incorporates the operating rules of the sub-modules (SMs), considering both the SM voltages and historical information on the charging and discharging of each capacitor. It selects the appropriate SMs to insert and the corresponding insertion procedure according to the relationship between the numbers of switched-on and switched-off SMs in the previous control cycle, and quickly forms the ordered sequence of SMs for the current control cycle. Simulation results show that the proposed strategy achieves voltage balancing with low computational complexity, significantly improves the system simulation speed, and effectively reduces the controller resources occupied by the balancing strategy.
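The selection step can be framed as partial selection rather than a full sort. This is an illustrative sketch, not the authors' exact rules: `select_submodules` is hypothetical and encodes only the standard balancing heuristic (insert the lowest-voltage SMs when the arm current charges the capacitors, the highest-voltage SMs when it discharges them).

```python
import heapq

def select_submodules(voltages, n_on, charging):
    """Indices of the n_on sub-modules to insert this cycle: the
    lowest-voltage SMs when the arm current charges the capacitors,
    the highest-voltage SMs when it discharges them."""
    idx = range(len(voltages))
    if charging:
        return heapq.nsmallest(n_on, idx, key=lambda i: voltages[i])
    return heapq.nlargest(n_on, idx, key=lambda i: voltages[i])

v = [2.05, 1.98, 2.10, 1.95, 2.02, 2.07]
print(sorted(select_submodules(v, 3, charging=True)))   # -> [1, 3, 4]
```

`heapq.nsmallest`/`nlargest` run in roughly O(N log n_on) rather than the O(N log N) of a full sort, which is the kind of per-cycle saving such schemes target.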