A coprime array has a larger array aperture as well as increased degrees-of-freedom (DOFs) compared with a uniform linear array with the same number of physical sensors. Therefore, in a practical wireless communication system, it is capable of providing desirable performance with low computational complexity. In this study, the authors focus on the problem of efficient direction-of-arrival (DOA) estimation, where a coprime array is incorporated with the idea of compressive sensing. Specifically, the authors first generate a random compressive sensing kernel to compress the received signals of the coprime array into lower-dimensional measurements, which can be viewed as a sketch of the original received signals. The compressed measurements are subsequently utilised to perform high-resolution DOA estimation, where the large array aperture of the coprime array is maintained. Moreover, the authors also utilise the equivalent virtual array signal derived from the compressed measurements for DOA estimation, where the superiority of the coprime array in achieving a higher number of DOFs is retained. Theoretical analyses and simulation results verify the effectiveness of the proposed methods in terms of computational complexity, resolution, and the number of DOFs.
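As a rough numerical illustration of the pipeline this abstract describes, the sketch below builds a coprime array, compresses its snapshots with a random Gaussian kernel, and runs a MUSIC-style search on the compressed measurements. The coprime pair (M, N) = (3, 5), the kernel dimension K, the source directions, and the MUSIC-style estimator are all illustrative assumptions, not the authors' exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coprime pair (M, N): subarrays at multiples of N and of M (half-wavelength units).
M, N = 3, 5
positions = np.unique(np.concatenate([N * np.arange(M), M * np.arange(N)]))
L = positions.size                      # number of physical sensors (7 here)

doas = np.deg2rad([-20.0, 10.0, 35.0])  # illustrative source directions
T = 200                                 # snapshots
A = np.exp(1j * np.pi * np.outer(positions, np.sin(doas)))
S = (rng.standard_normal((doas.size, T)) + 1j * rng.standard_normal((doas.size, T))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((L, T)) + 1j * rng.standard_normal((L, T)))

# Random compressive kernel: K-dimensional sketch of the L-sensor snapshots.
K = 5
Phi = rng.standard_normal((K, L)) / np.sqrt(K)
Y = Phi @ X

# MUSIC-style search using compressed steering vectors Phi @ a(theta).
R = Y @ Y.conj().T / T
w, V = np.linalg.eigh(R)
En = V[:, : K - doas.size]              # noise subspace (smallest eigenvalues)
grid = np.deg2rad(np.linspace(-90, 90, 1801))
Ag = Phi @ np.exp(1j * np.pi * np.outer(positions, np.sin(grid)))
spectrum = 1.0 / np.linalg.norm(En.conj().T @ Ag, axis=0) ** 2

peaks = np.where((spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:]))[0] + 1
top = peaks[np.argsort(spectrum[peaks])[-3:]]
print(np.round(np.rad2deg(grid[top]), 1))   # peaks near the true DOAs
```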
Ordered Binary Decision Diagrams (OBDDs) are graph-based representations of Boolean functions which are widely used because of their good properties. In this paper, we introduce nondeterministic OBDDs (NOBDDs) and their restricted forms, and evaluate their expressive power. In some applications of OBDDs, canonicity, one of the good properties of OBDDs, is not necessary. In such cases, we can reduce the required amount of storage by using OBDDs in some non-canonical form. A class of NOBDDs can be used as such a non-canonical form. In this paper, we focus on two particular methods which can be regarded as using restricted forms of NOBDDs. Our aim is to show, from a theoretical point of view, how the size of OBDDs can be reduced in such forms. First, we consider a method to solve the satisfiability problem of combinational circuits that uses the structure of the circuit as a key to reducing the NOBDD size. We show that the NOBDD size is related to the cutwidth of the circuit. Second, we analyze methods that use OBDDs to represent Boolean functions as sets of product terms. We show that the class of functions treated feasibly in this representation strictly contains the class treated feasibly by OBDDs and is contained by that of NOBDDs.
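To make the OBDD/NOBDD distinction concrete, here is a toy Python evaluator in which ordinary OBDD nodes branch on a variable, while an OR node accepts exactly when some branch accepts, mirroring the existential guess of an NOBDD. The encoding is invented for illustration and ignores variable-ordering and reduction rules.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Node:
    var: int            # variable index tested at this node
    low: "BDD"          # successor when x[var] == 0
    high: "BDD"         # successor when x[var] == 1

@dataclass(frozen=True)
class Or:
    branches: tuple     # nondeterministic choice: accept if any branch accepts

BDD = Union[bool, Node, Or]

def accepts(node: BDD, x) -> bool:
    if isinstance(node, bool):
        return node
    if isinstance(node, Or):            # NOBDD: existential guess
        return any(accepts(b, x) for b in node.branches)
    return accepts(node.high if x[node.var] else node.low, x)

# f(x0, x1) = x0 XOR x1 as an OBDD; an OR node unions it with (x0 AND x1).
xor = Node(0, Node(1, False, True), Node(1, True, False))
g = Or((xor, Node(0, False, Node(1, False, True))))
print([accepts(g, x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [False, True, True, True]
```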
For δ ∈ (0, 1) and k, n ∈ ℕ, we study the task of transforming a hard function f : {0,1}^n → {0,1}, with which any small circuit disagrees on a (1 - δ)/2 fraction of the inputs, into a harder function f', with which any small circuit disagrees on a (1 - δ^k)/2 fraction of the inputs. First, we show that such hardness amplification, when carried out in some black-box way, must require high complexity. In particular, it cannot be realized by a circuit of depth d and size 2^{o(k^{1/d})}, or by a nondeterministic circuit of size o(k / log k) (and arbitrary depth), for any δ ∈ (0, 1). This extends the result of Viola, which only works when (1 - δ)/2 is small enough. Furthermore, we show that even without any restriction on the complexity of the amplification procedure, such a black-box hardness amplification must be inherently nonuniform in the following sense: to guarantee the hardness of the resulting function f' even against uniform machines, one has to start with a function f which is hard against nonuniform algorithms with Ω(k log(1/δ)) bits of advice. This extends the result of Trevisan and Vadhan, which only addresses the case (1 - δ)/2 = 2^{-n}. Finally, we derive similar lower bounds for any black-box construction of a pseudorandom generator (PRG) from a hard function. To prove our results, we link the tasks of hardness amplification and PRG construction, respectively, to some type of error-reduction code, and then establish lower bounds for such codes, which we hope will find interest in both coding theory and complexity theory.
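Written out, the parameters of the abstract read as follows (a plain restatement, with "h-hard" meaning every small circuit errs on an h fraction of the inputs):

```latex
\[
  f:\{0,1\}^n \to \{0,1\} \ \text{is}\ \tfrac{1-\delta}{2}\text{-hard}
  \;\leadsto\;
  f' \ \text{is}\ \tfrac{1-\delta^{k}}{2}\text{-hard},
\]
\[
  \text{no black-box amplifier of depth } d \text{ and size } 2^{o(k^{1/d})};\qquad
  \text{no nondeterministic amplifier of size } o(k/\log k);
\]
\[
  \text{uniform hardness of } f' \ \text{requires}\
  \Omega\bigl(k \log(1/\delta)\bigr)\ \text{bits of advice about } f.
\]
```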
The mismatch between the clutter and noise power of the prior knowledge and the true interference covariance matrix significantly degrades the performance of the fast maximum likelihood with assumed clutter covariance (FMLACC) algorithm. By introducing a scale parameter to flexibly adjust the prior power, the authors propose an algorithm that is more robust to the power mismatch than the FMLACC algorithm. They also develop a more straightforward method to derive the maximum likelihood covariance matrix estimator under this scaled knowledge constraint. Moreover, they study the problem of automatically determining the scale parameter. The authors provide two parameter selection methods: the first is based on estimating the minimum eigenvalue of the prewhitened sample covariance matrix, and the second is based on cross validation. To reduce the computational complexity, they also develop fast implementations of the cross-validation-based parameter selection. Numerical simulations demonstrate the performance enhancement of the proposed algorithm over the FMLACC algorithm in cases of mismatched prior knowledge.
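A hedged sketch of the two ingredients is given below: a scaled-constraint covariance estimate and a cross-validation pick of the scale. The clamp-the-prewhitened-eigenvalues form mirrors FML-type estimators and is my reading, not the paper's derivation; the function names, the Gaussian hold-out likelihood, and the grid search are likewise assumptions.

```python
import numpy as np

def scaled_fml(S, R0, alpha):
    """Assumed FML-style estimate under a scaled floor: prewhiten by the
    prior R0, clamp eigenvalues from below at alpha, and color back."""
    L = np.linalg.cholesky(R0)
    Li = np.linalg.inv(L)
    Sw = Li @ S @ Li.conj().T                 # prewhitened sample covariance
    w, U = np.linalg.eigh(Sw)
    w = np.maximum(w, alpha)                  # scaled knowledge constraint
    return L @ (U * w) @ U.conj().T @ L.conj().T

def cv_alpha(X, R0, alphas, folds=5):
    """Pick alpha by held-out Gaussian log-likelihood (cross validation)."""
    T = X.shape[1]
    parts = np.array_split(np.random.default_rng(0).permutation(T), folds)
    best, best_ll = alphas[0], -np.inf
    for a in alphas:
        ll = 0.0
        for k in range(folds):
            tr = np.concatenate([parts[j] for j in range(folds) if j != k])
            S = X[:, tr] @ X[:, tr].conj().T / tr.size
            R = scaled_fml(S, R0, a)
            Ri = np.linalg.inv(R)
            _, logdet = np.linalg.slogdet(R)
            Xv = X[:, parts[k]]
            ll -= np.real(np.trace(Xv.conj().T @ Ri @ Xv)) + parts[k].size * logdet
        if ll > best_ll:
            best, best_ll = a, ll
    return best
```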
This paper analyzes the market-clearing formulation with stochastic security developed in its companion paper through two case studies solved using mixed-integer linear programming techniques. The generation and reserve schedules as well as the nodal prices of energy and security are assessed under various conditions, such as a) line flow limits, b) the exclusion of nonspinning reserve from the formulation, c) demand-side valuation of energy not served, d) generator ramping limits, and e) the set of pre-selected contingencies.
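For flavor only, here is a toy single-period energy-plus-reserve co-optimization in the spirit of security-constrained market clearing. All numbers, the single-unit outage constraints, and the single-period setting are invented for illustration and are far simpler than the paper's formulation; pulp with its bundled CBC solver is assumed available.

```python
import pulp

# Illustrative data: three units, one period, demand in MW, linear costs.
demand = 100.0
gens = {
    "g1": dict(pmax=80.0, c_e=20.0, c_r=5.0, c_nl=100.0),
    "g2": dict(pmax=60.0, c_e=30.0, c_r=4.0, c_nl=80.0),
    "g3": dict(pmax=50.0, c_e=35.0, c_r=3.0, c_nl=60.0),
}

prob = pulp.LpProblem("clearing", pulp.LpMinimize)
u = {g: pulp.LpVariable(f"u_{g}", cat="Binary") for g in gens}   # commitment
p = {g: pulp.LpVariable(f"p_{g}", lowBound=0) for g in gens}     # energy
r = {g: pulp.LpVariable(f"r_{g}", lowBound=0) for g in gens}     # reserve

prob += pulp.lpSum(d["c_nl"] * u[g] + d["c_e"] * p[g] + d["c_r"] * r[g]
                   for g, d in gens.items())
prob += pulp.lpSum(p.values()) == demand, "energy_balance"
for g, d in gens.items():
    prob += p[g] + r[g] <= d["pmax"] * u[g], f"capacity_{g}"
# Security: for each single-unit outage, the remaining reserve covers its output.
for k in gens:
    prob += pulp.lpSum(r[g] for g in gens if g != k) >= p[k], f"outage_{k}"

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({g: (u[g].value(), p[g].value(), r[g].value()) for g in gens})
```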
We present a new hardness of approximation result for the Shortest Vector Problem in the ℓ_p norm (denoted SVP_p). Assuming NP ⊄ ZPP, we show that for every ε > 0 there is a constant p(ε) such that, for all integers p ≥ p(ε), the problem SVP_p has no polynomial-time approximation algorithm with approximation ratio p^{1-ε}.
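In symbols, the claim is:

```latex
\[
  \mathrm{NP} \not\subseteq \mathrm{ZPP}
  \;\Longrightarrow\;
  \forall \varepsilon > 0\ \ \exists\, p(\varepsilon)\ \ \forall\, p \ge p(\varepsilon):\
  \text{no polynomial-time algorithm approximates } \mathrm{SVP}_p
  \text{ within factor } p^{\,1-\varepsilon}.
\]
```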
The author deduces some new probabilistic estimates on the distances between the zeros of a polynomial p(x) by using properties of the discriminant of p(x), and applies these estimates to improve the fastest deterministic algorithm for approximating polynomial factorization over the complex field. Namely, given a natural number n, a positive ε such that log(1/ε) = O(n log n), and the complex coefficients of a polynomial p(x) = Σ_{i=0}^{n} p_i x^i with p_n ≠ 0 and Σ_i |p_i| ≤ 1, a factorization of p(x) (within the error norm ε) is computed as a product of factors of degree at most n/2, using O(log^2 n) time and n^3 processors under the PRAM arithmetic model of parallel computing, or using O(n^2 log^2 n) arithmetic operations. The algorithm is randomized, of Las Vegas type, allowing failure with probability at most δ, for any positive δ < 1 such that log(1/δ) = O(log n). Except for a narrow class of polynomials p(x), these results can also be obtained for ε such that log(1/ε) = O(n^2 log n).
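The form of the output can be sanity-checked numerically: the sketch below normalizes a random polynomial so that Σ_i |p_i| ≤ 1, splits its roots into two groups of size n/2, and verifies that the product of the two resulting factors reproduces p(x) to near machine precision. numpy.roots stands in for the paper's parallel root-approximation machinery, so this illustrates only the splitting into degree-n/2 factors, not the stated complexity bounds.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
p = rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)
p /= np.abs(p).sum()                 # normalize so sum_i |p_i| <= 1

roots = np.roots(p)                  # stand-in for the paper's root finder
half = len(roots) // 2
f = np.poly(roots[:half])            # monic factor of degree n/2
g = np.poly(roots[half:])            # monic factor of degree n/2
q = p[0] * np.convolve(f, g)         # leading coefficient times f * g
print(np.abs(p - q).sum())           # error norm ~ machine precision
```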
The conventional capacitor voltage balancing strategy of the modular multilevel converter (MMC) requires real-time sorting in each control cycle, which imposes a heavy computational load on the controller. Therefore, a fast capacitor voltage balancing method for the MMC is proposed. Building on the conventional strategy, the method combines the operating rules of the sub-modules (SMs) with the SM capacitor voltages and the historical charging and discharging information of each capacitor. It selects the appropriate SMs to insert, and the corresponding insertion procedure, according to the relationship between the numbers of switched-on and switched-off SMs in the previous control cycle, and quickly forms the ordered SM sequence for the current cycle. Simulation results show that the proposed strategy achieves voltage balancing with low computational complexity, significantly improves the system simulation speed, and effectively reduces the controller resources occupied by the balancing strategy.
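A toy sketch of the reduced-sorting idea follows: reuse the previous cycle's on/off partition instead of fully re-sorting, and only rank the small set of SMs that must change state, with the arm-current direction deciding which ones. The data structures and selection rules are illustrative, not the paper's exact procedure.

```python
import numpy as np

def select_sms(voltages, n_on, charging, prev_on):
    """Return indices of SMs to insert this control cycle.

    voltages : per-SM capacitor voltages (np.ndarray)
    n_on     : number of SMs to insert now
    charging : True if the arm current charges inserted capacitors
    prev_on  : np.ndarray of indices inserted in the previous cycle
    """
    n_prev = len(prev_on)
    if n_on == n_prev:
        return prev_on                        # no change: skip sorting entirely
    if n_on > n_prev:
        off = np.setdiff1d(np.arange(voltages.size), prev_on)
        extra = n_on - n_prev
        # Charging favors the lowest off-state voltages, discharging the highest.
        order = np.argsort(voltages[off])
        pick = off[order[:extra]] if charging else off[order[-extra:]]
        return np.sort(np.concatenate([prev_on, pick]))
    # n_on < n_prev: keep the SMs that most need the current's effect.
    order = np.argsort(voltages[prev_on])
    keep = order[:n_on] if charging else order[-n_on:]
    return np.sort(prev_on[keep])
```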
One of the most common ways in which results are displayed by an information retrieval system is in the form of a list, in which the most relevant results appear in the first positions. Today's large screens, however, allow one to create more complex displays of results, especially in cases such as image retrieval, in which each unit returned is fairly compact. For these layouts the simple list model is no longer valid, since the relations between the slots in which the results are placed do not form a sequence; that is, the relation among them is no longer a total order. In this paper we model these layouts as partial orders and show that a "stalwart display" property (a layout in which items' relevance is unambiguously conveyed by their display position) can be obtained only in the case of lists. For the other layouts, we define two classes of representation functions: "safe" functions (which display results without adding spurious structure) and "rich" functions (which do not drop any structure from the result set), as well as an algorithm to optimally display fully ordered result sets in arbitrary display layouts.
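As a small concrete check of the list-versus-layout distinction, the function below takes a slot precedence relation, forms its transitive closure, and tests whether every pair of slots is comparable, i.e. whether the layout is a plain list (the only case where, per the paper, a stalwart display is possible). The encoding and the grid example are illustrative.

```python
from itertools import combinations

def transitive_closure(n, edges):
    """Floyd-Warshall reachability over the slot precedence edges."""
    reach = [[False] * n for _ in range(n)]
    for a, b in edges:
        reach[a][b] = True
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

def is_list(n, edges):
    """A partial order is a list iff every pair of slots is comparable."""
    r = transitive_closure(n, edges)
    return all(r[i][j] or r[j][i] for i, j in combinations(range(n), 2))

print(is_list(3, [(0, 1), (1, 2)]))                   # True: a ranked list
print(is_list(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))   # False: a 2x2 grid
```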
The quantum Monte Carlo algorithm can provide significant speedup compared to its classical counterpart. So far, most reported works have utilized Grover's state preparation algorithm. However, this algorithm relies on costly controlled Y rotations to apply the correct amplitudes onto the superposition states. Recently, a comparison-based state preparation method was proposed to reduce computational complexity by avoiding rotation operations. One critical aspect of this method is the generation of the comparison threshold associated with the amplitudes of the quantum superposition states. Direct computation of the comparison threshold is often very costly. An alternative is to estimate the threshold with a Taylor approximation. However, Taylor approximations do not work well with heavy-tailed distribution functions such as the Cauchy distribution, which is widely used in applications such as financial modeling. Therefore, a new state preparation method needs to be developed. In this study, an efficient comparison-based state preparation method is proposed for the heavy-tailed Cauchy distribution. Instead of a single Taylor approximation for the entire function domain, this study uses quantum piecewise arithmetic to increase accuracy and reduce computational cost. The proposed piecewise function is in the simplest form that estimates the comparison threshold associated with the amplitudes. Numerical analysis shows that the number of required subdomains increases linearly as the maximum tolerated approximation error decreases exponentially: 197 subdomains are required to keep the error below 1/8192 of the maximum amplitude. Quantum parallelism ensures that the computational complexity of estimating the amplitudes is independent of the number of subdomains.
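A classical sketch of the piecewise idea follows: approximate a Cauchy-derived amplitude profile on subdomains with simple pieces and count how many pieces a given error tolerance needs. The amplitude form and the greedy linear-piece splitter are stand-in assumptions, and this sketch need not reproduce the paper's subdomain-count scaling, which comes from its specific piecewise form.

```python
import numpy as np

def cauchy_amp(x, x0=0.0, gamma=1.0):
    """Amplitude-like profile ~ square root of the (unnormalized) Cauchy pdf."""
    return 1.0 / np.sqrt(1.0 + ((x - x0) / gamma) ** 2)

def pieces_needed(f, lo, hi, tol, samples=64):
    """Greedy left-to-right split into linear pieces with max error <= tol."""
    count, a = 0, lo
    while a < hi:
        b = hi
        while True:
            xs = np.linspace(a, b, samples)
            line = f(a) + (f(b) - f(a)) * (xs - a) / (b - a)
            if np.max(np.abs(f(xs) - line)) <= tol or b - a < 1e-9:
                break
            b = a + (b - a) / 2          # shrink the piece until it fits
        count, a = count + 1, b
    return count

# Subdomain counts as the tolerated error shrinks (illustrative tolerances).
for tol in [2.0 ** -k for k in (5, 8, 11, 13)]:
    print(tol, pieces_needed(cauchy_amp, -8.0, 8.0, tol))
```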