Surprising performance has been achieved in style transfer since deep learning was introduced to it. However, existing state-of-the-art (SOTA) algorithms suffer either from quality issues or from high computational complexity. The quality issues concern shape retention and the adequacy of style migration, while the computational complexity is reflected in network complexity and in additional updates whenever the style changes. To deal with these problems, we propose a novel low-computational-complexity arbitrary style transfer algorithm (LCCStyle) that consists mainly of a transformation feature module (TFM) and a learning transformation module (LTM). The TFM transforms the content feature map into the stylized feature map without compromising the integrity of the content information, which contributes to good shape retention and full style migration. In addition, to avoid additional updates when the style changes, we propose a new training mechanism for arbitrary style transfer in which a hyper-network directly generates the parameters of the TFM. However, the widely used hyper-networks are composed of fully connected layers, which entail a large number of parameters. We therefore design a hyper-network (the LTM) consisting of one-dimensional convolutions adapted to the characteristics of the Gram matrix of the style feature map, yielding a small model size with no impact on quality. Quantitative comparison and a user study show that LCCStyle achieves high performance in both the adequacy of style migration and shape retention. Furthermore, compared with the SOTAs, the size of the proposed model is reduced by a large margin of roughly 51.4%~99.6%. When the input is 512x512 pixels, the processing speeds in the cases of unchanged style and constantly changing style are increased by at least 135% and 227%, respectively. On an Nvidia TITAN RTX GPU, LCCStyle reaches 60 fps for 720p video and takes only 1 s to process 8K images. https://***/HuangYuj
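As a rough illustration of the ingredients named in this abstract (not the paper's actual TFM/LTM architecture, whose exact layer configuration is not given here), the following PyTorch-style sketch computes the Gram matrix of a style feature map and feeds it to a small one-dimensional convolutional hyper-network that emits per-channel transformation parameters for the content features. All module names, shapes, and layer choices are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def gram_matrix(feat):                           # feat: (B, C, H, W)
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (h * w)       # (B, C, C)

class ToyLTM(nn.Module):
    """Hypothetical hyper-network: 1-D convolutions over Gram-matrix rows."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, 2, kernel_size=1),   # scale and shift per channel
        )

    def forward(self, gram):                     # gram: (B, C, C)
        params = self.net(gram)                  # (B, 2, C)
        return params[:, 0], params[:, 1]        # scale: (B, C), shift: (B, C)

def toy_tfm(content_feat, scale, shift):
    # Apply the generated per-channel affine parameters to the content features.
    return content_feat * scale[:, :, None, None] + shift[:, :, None, None]

# Usage sketch with random tensors standing in for VGG-style feature maps.
style_feat = torch.randn(1, 256, 32, 32)
content_feat = torch.randn(1, 256, 64, 64)
scale, shift = ToyLTM(256)(gram_matrix(style_feat))
stylized = toy_tfm(content_feat, scale, shift)   # (1, 256, 64, 64)
```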
Finite automata are probably best known for being equivalent to right-linear context-free grammars and, thus, for capturing the lowest level of the Chomsky hierarchy, the family of regular languages. Over the last half century, a vast literature documenting the importance of deterministic, nondeterministic, and alternating finite automata as an enormously valuable concept has been developed. In the present paper, we tour a fragment of this literature. Mostly, we discuss developments relevant to finite-automata-related problems such as, for example, (i) simulation of and by several types of finite automata, (ii) standard automata problems such as fixed and general membership, emptiness, universality, equivalence, and related problems, and (iii) minimization and approximation. We thus come across descriptional and computational complexity issues of finite automata. We do not prove these results but merely draw attention to the big picture and some of the main ideas involved. (C) 2010 Elsevier Inc. All rights reserved.
The High Level Architecture (HLA) is an architecture standard for constructing federations of distributed simulations that exchange data at run-time. HLA includes interest management capabilities (known as "Data Distribution Management") that reduce the data sent during a federation execution, using the simulations' run-time declarations describing the data they plan to send and wish to receive. The total computation associated with Data Distribution Management during the execution of a federation can be separated into four processes: declaring, matching, connecting, and routing. These processes are defined, and the computational complexities of the matching and connecting processes during a federation execution are determined. The matching process requires total time with a lower bound in Ω(n log n) and an upper bound in O(n^2), where n is the number of run-time data distribution actions performed by the simulations. The commonly used approach to implementing the connecting process, multicast grouping, contains a problem that is NP-complete. (C) 2004 Elsevier B.V. All rights reserved.
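To make the O(n^2) upper bound concrete, the sketch below performs brute-force matching of publication regions against subscription regions by testing every pair for overlap. It is purely illustrative: the box-shaped regions, the function names, and the two-dimensional routing space are assumptions, not the HLA Data Distribution Management API.

```python
# Brute-force DDM-style region matching: every publication region is tested
# against every subscription region, giving the quadratic worst case.
from itertools import product

def overlaps(box_a, box_b):
    """Each box is a list of (lo, hi) extents, one per routing-space dimension."""
    return all(lo_a < hi_b and lo_b < hi_a
               for (lo_a, hi_a), (lo_b, hi_b) in zip(box_a, box_b))

def match(publications, subscriptions):
    # Return index pairs (p, s) whose regions intersect.
    return [(p, s)
            for p, s in product(range(len(publications)),
                                range(len(subscriptions)))
            if overlaps(publications[p], subscriptions[s])]

pubs = [[(0.0, 2.0), (0.0, 1.0)], [(3.0, 4.0), (0.0, 1.0)]]
subs = [[(1.0, 3.5), (0.5, 2.0)]]
print(match(pubs, subs))   # -> [(0, 0), (1, 0)]
```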
We consider Web services defined by orchestrations in the Orc language and two natural quality-of-service measures, the number of outputs and a discrete version of the first response time. We first analyse those subfamilies of finite orchestrations in which the measures are well defined and consider their evaluation in both reliable and probabilistically unreliable environments. On those subfamilies in which the QoS measures are well defined, we consider a set of natural related problems and analyse their computational complexity. In general, our results give a clear picture of the difficulty of computing the proposed QoS measures with respect to the expressiveness of the subfamilies of Orc. Only in a few cases are the problems solvable in polynomial time, which points out the computational difficulty of evaluating QoS measures even in simplified models.
In this paper, we show that the weighted vertex coloring problem can be solved in time polynomial in the sum of the vertex weights for {P5, K2,3, K2,3+}-free graphs. As a corollary, this fact implies polynomial-time solvability of the unweighted vertex coloring problem for {P5, K2,3, K2,3+}-free graphs. As usual, P5 and K2,3 stand, respectively, for the simple path on 5 vertices and for the biclique with parts of 2 and 3 vertices; K2,3+ denotes the graph obtained from K2,3 by joining its two degree-3 vertices with an edge.
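For readers who want to see the forbidden subgraphs concretely, the short networkx sketch below (purely illustrative; it is not part of the paper's algorithm) constructs P5, K2,3, and K2,3+ and checks their degree sequences.

```python
import networkx as nx

P5 = nx.path_graph(5)                          # simple path on 5 vertices
K23 = nx.complete_bipartite_graph(2, 3)        # parts {0, 1} and {2, 3, 4}
K23_plus = K23.copy()
K23_plus.add_edge(0, 1)                        # join the two degree-3 vertices

print(sorted(d for _, d in K23.degree()))       # [2, 2, 2, 3, 3]
print(sorted(d for _, d in K23_plus.degree()))  # [2, 2, 2, 4, 4]
```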
The paper deals with the relative robust shortest path problem in a directed arc-weighted graph, where arc lengths are specified as intervals containing their possible realizations. The complexity status of this problem has been unknown in the literature. We show that the problem is NP-hard. (C) 2003 Elsevier B.V. All rights reserved.
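For context, a common way to formalize robustness under interval arc lengths is the regret criterion sketched below; whether the paper uses the absolute or the normalized ("relative") variant is fixed in the paper itself, so the formulation here is only a reminder of the general shape of the objective, under standard interval-data notation.

\[
\min_{P \in \mathcal{P}} \; \max_{s \in S} \; \bigl( L_s(P) - L^*(s) \bigr),
\qquad
L_s(P) = \sum_{a \in P} \ell_a^s,\quad \ell_a^s \in [\underline{\ell}_a, \overline{\ell}_a],
\]

where \(\mathcal{P}\) is the set of paths from the source to the destination, \(S\) the set of scenarios (each arc length fixed inside its interval), and \(L^*(s)\) the shortest path length under scenario \(s\); the relative variant divides the regret \(L_s(P) - L^*(s)\) by \(L^*(s)\).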
The two most important notions of fractal dimension are Hausdorff dimension, developed by Hausdorff [Math. Ann., 79 (1919), pp. 157-179], and packing dimension, developed independently by Tricot [Math. Proc. Cambridge Philos. Soc., 91 (1982), pp. 57-74] and Sullivan [Acta Math., 153 (1984), pp. 259-277]. Both dimensions have the mathematical advantage of being defined from measures, and both have yielded extensive applications in fractal geometry and dynamical systems. Lutz [Proceedings of the 15th IEEE Conference on Computational Complexity, Florence, Italy, 2000, IEEE Computer Society Press, Piscataway, NJ, 2000, pp. 158-169] has recently proven a simple characterization of Hausdorff dimension in terms of gales, which are betting strategies that generalize martingales. Imposing various computability and complexity constraints on these gales produces a spectrum of effective versions of Hausdorff dimension, including constructive, computable, polynomial-space, polynomial-time, and finite-state dimensions. Work by several investigators has already used these effective dimensions to shed significant new light on a variety of topics in theoretical computer science. In this paper we show that packing dimension can also be characterized in terms of gales. Moreover, even though the usual definition of packing dimension is considerably more complex than that of Hausdorff dimension, our gale characterization of packing dimension is an exact dual of, and every bit as simple as, the gale characterization of Hausdorff dimension. Effectivizing our gale characterization of packing dimension produces a variety of effective strong dimensions, which are exact duals of the effective dimensions mentioned above. In general (and in analogy with the classical fractal dimensions), the effective strong dimension of a set or sequence is at least as great as its effective dimension, with equality for sets or sequences that are sufficiently regular. We develop the basic properties of effective strong dimensions...
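Since the abstract leans on the gale characterization without stating it, the following summary recalls the standard definitions from Lutz's gale framework (paraphrased here rather than quoted from the paper). An s-gale is a function \(d : \{0,1\}^* \to [0,\infty)\) satisfying

\[
d(w) = 2^{-s}\bigl[d(w0) + d(w1)\bigr] \quad \text{for all } w \in \{0,1\}^*.
\]

The two characterizations then read

\[
\dim_{\mathrm{H}}(X) = \inf\bigl\{\, s : \text{some } s\text{-gale } d \text{ has } \limsup_{n\to\infty} d(S\upharpoonright n)=\infty \text{ for every } S\in X \,\bigr\},
\]
\[
\mathrm{Dim}_{\mathrm{P}}(X) = \inf\bigl\{\, s : \text{some } s\text{-gale } d \text{ has } \liminf_{n\to\infty} d(S\upharpoonright n)=\infty \text{ for every } S\in X \,\bigr\},
\]

i.e. the packing (strong) dimension is obtained from the Hausdorff case simply by replacing success (limsup) with strong success (liminf).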
Measurement results (and, more generally, estimates) are never absolutely accurate: there is always an uncertainty, and the actual value x is, in general, different from the estimate x̃. Sometimes, we know the probability of different values of the estimation error Δx (defined as x̃ − x); sometimes, we only know the interval of possible values of Δx; sometimes, we have interval bounds on the cumulative distribution function of Δx. To compare different measuring instruments, it is desirable to know which of them brings more information, i.e., it is desirable to gauge the amount of information. For probabilistic uncertainty, this amount of information is described by Shannon's entropy; similar measures can be developed for interval and other types of uncertainty. In this paper, we analyse the computational complexity of the problem of estimating the information amount under different types of uncertainty.
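For the probabilistic case mentioned above, the amount of information is the standard Shannon entropy; the formulas below are that textbook definition only (the interval and p-box analogues analysed in the paper are not reproduced here).

\[
H = -\sum_i p_i \log_2 p_i \quad \text{(discrete case)},
\qquad
H = -\int \rho(\Delta x)\,\log_2 \rho(\Delta x)\, d(\Delta x) \quad \text{(continuous case)},
\]

where \(p_i\) (respectively, the density \(\rho\)) describes the distribution of the estimation error \(\Delta x\).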
The outcomes of quantum mechanical measurements are inherently random. It is therefore necessary to develop stringent methods for quantifying the degree of statistical uncertainty about the results of quantum experiments. For the particularly relevant task of quantum state tomography, it has been shown that a significant reduction in uncertainty can be achieved by taking the positivity of quantum states into account. However, the large number of partial results and heuristics notwithstanding, no efficient general algorithm is known that produces an optimal uncertainty region from experimental data while making use of the prior constraint of positivity. Here, we provide a precise formulation of this problem and show that the general case is NP-hard. Our result leaves room for the existence of efficient approximate solutions, and therefore does not in itself imply that the practical task of quantum uncertainty quantification is intractable. However, it does show that there exists a non-trivial trade-off between optimality and computational efficiency for error regions. We prove two versions of the result: one for frequentist and one for Bayesian statistics.
This paper aims to interpret and formalize Herbert Simon's cognitive notions of bounded rationality, satisficing and heuristics in terms of computability theory and computational complexity theory. Simon's theory of human problem solving is analyzed in the light of Turing's work on Solvable and Unsolvable Problems. It is suggested here that bounded rationality results from the fact that the deliberations required for searching computationally complex spaces exceed the actual complexity that human beings can handle. The immediate consequence is that satisficing becomes the general criterion of decision makers and heuristics are the procedures used for achieving their goals. In such decision problems, it is demonstrated that bounded rationality and satisficing are more general than orthodox, non-cognitive, Olympian rationality and optimization, respectively, and not the other way about. (C) 2013 Elsevier B. V. All rights reserved.