ISBN (Print): 9781479924523
The High Efficiency Video Coding standard shows improved compression efficiency in comparison to previous standards at the cost of higher computational complexity. In this paper, a complexity control method for HEVC encoders based on the dynamic adjustment of the number of constrained coding treeblocks is proposed. The method limits the maximum tree depth used in the coding structures based on spatio-temporal correlation in order to decrease the number of evaluations performed in the Rate-Distortion Optimization process. Experimental results show that the proposed method is capable of maintaining the encoding time per frame under a pre-defined target, achieving computational complexity reductions of up to 50% at the cost of an average BD-PSNR loss of 0.26 dB in the worst-case scenario.
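To illustrate the kind of control loop the abstract describes, the sketch below (an illustrative assumption, not the authors' exact controller) adjusts how many coding treeblocks per frame are constrained to a reduced maximum tree depth, based on the deviation of the measured frame encoding time from the target; the gain, depth limits, and class/method names are hypothetical.

```python
# Minimal sketch: keep per-frame encoding time near a target by constraining
# the maximum CU tree depth of a varying number of CTUs per frame.
class CtuDepthController:
    def __init__(self, num_ctus, target_time_ms, full_depth=3, limited_depth=1):
        self.num_ctus = num_ctus            # CTUs per frame
        self.target = target_time_ms        # per-frame time budget
        self.full_depth = full_depth        # unconstrained max CU tree depth
        self.limited_depth = limited_depth  # depth used by constrained CTUs
        self.constrained = 0                # CTUs currently forced to the limited depth

    def depth_for_ctu(self, ctu_index, co_located_depth):
        """Maximum splitting depth allowed for a CTU of the current frame.
        Constrained CTUs reuse the co-located depth of the previous frame
        (spatio-temporal correlation) capped at the limited depth."""
        if ctu_index < self.constrained:
            return min(co_located_depth, self.limited_depth)
        return self.full_depth

    def update(self, measured_time_ms):
        """After a frame is encoded, grow or shrink the constrained region
        proportionally to the deviation from the time target (gain assumed)."""
        error = (measured_time_ms - self.target) / self.target
        step = int(round(error * 0.5 * self.num_ctus))
        self.constrained = max(0, min(self.num_ctus, self.constrained + step))
```

In a real encoder, depth_for_ctu would feed the RDO loop's split decisions and update would be called once per encoded frame.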
ISBN (Digital): 9783031433801
ISBN (Print): 9783031433795; 9783031433801
The notion of graph covers (also referred to as locally bijective homomorphisms) plays an important role in topological graph theory and has found computer science applications in models of local computation. For a fixed target graph H, the H-Cover problem asks if an input graph G allows a graph covering projection onto H. Despite the fact that the quest for characterizing the computational complexity of H-Cover started more than 30 years ago, only a handful of general results have been known so far. In this paper, we present a complete characterization of the computational complexity of covering colored graphs for the case that every equivalence class in the degree partition of the target graph has at most two vertices. We prove this result in a very general form. Following the lines of current development of topological graph theory, we study graphs in the most relaxed sense of the definition: the graphs are mixed (they may have both directed and undirected edges) and may have multiple edges, loops, and semi-edges. We show that a strong P/NP-complete dichotomy holds true, in the sense that for each such fixed target graph H, the H-Cover problem is either polynomial-time solvable for arbitrary inputs or NP-complete even for simple input graphs.
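For intuition about what a covering projection is, the brute-force checker below tests local bijectivity directly. It is exponential in |V(G)| and limited to simple undirected graphs, unlike the mixed multigraphs with semi-edges treated in the paper; the graph encoding and the example are illustrative.

```python
# Brute-force search for a covering projection (locally bijective homomorphism)
# from a simple undirected graph G onto a target graph H.
from itertools import product

def is_cover(G, H):
    """G, H: dicts vertex -> set of neighbors (simple, undirected, loop-free).
    Returns a covering map as a dict, or None if no cover exists."""
    gv, hv = list(G), list(H)
    for assignment in product(hv, repeat=len(gv)):
        f = dict(zip(gv, assignment))
        ok = True
        for v in gv:
            image = [f[u] for u in G[v]]
            # Local bijectivity: neighbors of v map one-to-one onto N_H(f(v)).
            if len(set(image)) != len(image) or set(image) != H[f[v]]:
                ok = False
                break
        if ok:
            return f
    return None

# Example: the 6-cycle is a 2-fold cover of the triangle K3.
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
K3 = {i: {(i + 1) % 3, (i + 2) % 3} for i in range(3)}
print(is_cover(C6, K3))
```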
ISBN (Print): 0769506747
Phase transitions in combinatorial problems have recently been shown [2] to be useful in locating "hard" instances of combinatorial problems. The connection between computational complexity and the existence of phase transitions has been addressed in Statistical Mechanics [2] and Artificial Intelligence [3], but not studied rigorously. We take a first step in this direction by investigating the existence of sharp thresholds for the class of generalized satisfiability problems defined by Schaefer [4]. In the case when all constraints have a special clausal form, we completely characterize the generalized satisfiability problems that have a sharp threshold. While NP-completeness does not imply the sharpness of the threshold, our result suggests that the class of counterexamples is rather limited, as all such counterexamples can be predicted, with constant success probability, by a single procedure.
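The sketch below illustrates the underlying phenomenon empirically for ordinary random 3-SAT rather than Schaefer's generalized satisfiability problems: sweeping the clause-to-variable ratio and estimating the probability of satisfiability exposes the transition, which sharpens as the number of variables grows (the well-known empirical 3-SAT threshold sits near ratio 4.26). The instance sizes and trial counts are small illustrative choices.

```python
# Empirically locating the random 3-SAT phase transition by brute force.
import itertools, random

def random_3sat(n, m):
    """m random clauses over n variables; a literal is (variable, sign)."""
    return [[(v, random.choice((True, False)))
             for v in random.sample(range(n), 3)] for _ in range(m)]

def satisfiable(n, clauses):
    # Exhaustive check over all 2^n assignments (small n only).
    for bits in itertools.product((True, False), repeat=n):
        if all(any(bits[v] == sign for v, sign in c) for c in clauses):
            return True
    return False

n, trials = 10, 40
for ratio in (3.0, 3.5, 4.0, 4.26, 4.5, 5.0):
    sat = sum(satisfiable(n, random_3sat(n, int(ratio * n))) for _ in range(trials))
    print(f"m/n = {ratio:4.2f}: P(sat) ~ {sat / trials:.2f}")
```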
A well-known approach to inferring phylogenies involves finding a phylogeny with the largest number of characters that are perfectly compatible with it. Variations of this problem depend on whether characters are: cla...
ISBN (Print): 9781509040452
Unlike most research that focuses on computation reduction, the fast algorithm proposed in this paper aims to keep the coding efficiency as high as possible. The proposed algorithm employs a support vector machine (SVM) that uses three parameters as features for fast CU size decision: variances, low-frequency AC components of the DCT, and the levels of spatially neighboring CUs. In addition, based on the RMD cost, we propose an adaptive mode-candidates method for the subsequent RDO computation. Experimental results demonstrate that an average 22% reduction in computational complexity can be achieved with only a 0.09% BD bit rate increase.
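A minimal sketch of how such an SVM-based split decision could be wired up is shown below; the three features follow the abstract, but the feature extraction details, training data, and hyper-parameters are placeholder assumptions, not the paper's trained model.

```python
# Sketch: SVM classifier deciding whether a CU should be split further,
# using variance, low-frequency DCT AC energy, and neighboring CU depth.
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVC

def cu_features(block, left_depth, top_depth):
    coeffs = dctn(block, norm="ortho")
    low_ac = np.abs(coeffs[:4, :4]).sum() - abs(coeffs[0, 0])  # low-freq AC energy
    return [block.var(), low_ac, 0.5 * (left_depth + top_depth)]

# Hypothetical training set: [variance, low-AC energy, neighbor depth],
# label 1 = "split the CU further", 0 = "do not split".
X_train = np.array([[900., 350., 2.0], [850., 400., 3.0],
                    [ 30.,  12., 0.0], [ 55.,  20., 1.0]])
y_train = np.array([1, 1, 0, 0])

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

block = np.random.default_rng(0).integers(0, 255, (32, 32)).astype(float)
print("split?", bool(clf.predict([cu_features(block, 1, 2)])[0]))
```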
ISBN (Print): 9783038351788
In recent years, there has been growing interest in the design and analysis of algorithms for extremely complex problems. Because of the complexity of problem variants and the difficult nature of the problems they deal with, it is arguably impracticable in most cases to give suitable guarantees about the number of fitness evaluations an algorithm needs to find an optimal solution. In such situations, heuristic algorithms can provide approximate solutions; however, suitable time and space complexity play an important role. At present, all known algorithms for NP-complete problems require time that is exponential in the problem size. The known NP-hardness results imply that for several combinatorial optimization problems there are no efficient algorithms that find an optimal solution, or even a near-optimal solution, on every instance. The computational complexity analysis of the Selective Breeding algorithm studied here therefore involves both an algorithmic issue and a theoretical challenge, as well as an assessment of the quality of the heuristic.
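As a concrete illustration of measuring a heuristic's cost in fitness evaluations, the sketch below runs a generic selective-breeding-style evolutionary loop (truncation selection, one-point crossover, single-bit mutation) on the OneMax toy problem and reports how many evaluations were spent; the operators and parameters are generic assumptions, not the paper's Selective Breeding algorithm.

```python
# Generic "breed the fittest" heuristic on OneMax, instrumented to count
# fitness evaluations as the cost measure.
import random

def onemax(bits, counter):
    counter[0] += 1                      # one fitness evaluation
    return sum(bits)

def breed(n=40, pop_size=20, generations=200, seed=1):
    rng = random.Random(seed)
    evals = [0]
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best_fit = -1
    for _ in range(generations):
        scored = sorted(((onemax(b, evals), b) for b in pop), reverse=True)
        best_fit = max(best_fit, scored[0][0])
        if best_fit == n:
            break
        parents = [b for _, b in scored[: pop_size // 2]]   # keep the fittest half
        pop = []
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]                        # one-point crossover
            child[rng.randrange(n)] ^= 1                     # single-bit mutation
            pop.append(child)
    return best_fit, evals[0]

print("best fitness, fitness evaluations:", breed())
```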
The purpose of this paper is to define a mathematical model for the study of quantitative problems about translations between universal languages and to investigate such problems. The results derived in this paper dea...
ISBN (Print): 9781509053032
In computing systems, an execution entity (job/process/task) may suspend itself when it has to wait for some activities to continue/finish its execution. For real-time embedded systems, such self-suspending behavior has been shown to cause substantial performance/schedulability degradation in the literature. There are two commonly adopted self-suspending sporadic task models in real-time systems: 1) dynamic self-suspension and 2) segmented self-suspension sporadic task models. A dynamic self-suspending sporadic task is specified with an upper bound on the maximum suspension time for a job (task instance), which allows a job to dynamically suspend itself as long as the suspension upper bound is not violated. By contrast, a segmented self-suspending sporadic task has a predefined execution and suspension pattern in an interleaving manner. Even though some seemingly positive results have been reported for self-suspending task systems, the computational complexity and the theoretical quality (with respect to speedup factors) of fixed-priority preemptive scheduling have not been reported. This paper proves that the schedulability analysis for fixed-priority preemptive scheduling even with only one segmented self-suspending task as the lowest-priority task is coNP-hard in the strong sense. For dynamic self-suspending task systems, we show that the speedup factor for any fixed-priority preemptive scheduling, compared to the optimal schedules, is not bounded by a constant or by the number of tasks, if the suspension time cannot be reduced by speeding up. Such a statement of unbounded speedup factors can also be proved for earliest-deadline-first (EDF), least-laxity-first (LLF), and earliest-deadline-zero-laxity (EDZL) scheduling algorithms. However, if the suspension time can be reduced by speeding up coherently or the suspension time of each task is not comparable with (i.e., sufficiently smaller than) its relative deadline, then we successfully show that rate-monotonic s...
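For context, the sketch below implements only the classical suspension-oblivious fixed-priority response-time test, in which each task's maximum suspension time is simply added to its execution demand; it is a sound but pessimistic baseline and not the analysis studied in the paper. The task parameters are hypothetical.

```python
# Suspension-oblivious fixed-priority response-time analysis.
# Each task: (C = WCET, S = max self-suspension, T = period, D = deadline),
# listed in decreasing priority.

def response_time(tasks, i, limit=10_000):
    """Fixed point of R = C_i + S_i + sum_{j < i} ceil(R / T_j) * (C_j + S_j)."""
    C, S, T, D = tasks[i]
    r = C + S
    while r <= limit:
        demand = C + S + sum(-(-r // Tj) * (Cj + Sj)      # ceiling division
                             for Cj, Sj, Tj, _ in tasks[:i])
        if demand == r:
            return r
        r = demand
    return None  # no fixed point found within the limit

tasks = [(1, 1, 10, 10), (2, 2, 20, 20), (4, 3, 50, 50)]
for i, (C, S, T, D) in enumerate(tasks):
    r = response_time(tasks, i)
    ok = r is not None and r <= D
    print(f"task {i}: R = {r}, deadline {'met' if ok else 'missed'}")
```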
ISBN (Print): 9781728191270
This paper aims to analyze computational complexity by calculating the cost, in terms of the total number of primitive operations such as AND, OR, XOR, and SHIFT per block of data, of three major block ciphers: the Data Encryption Standard (DES), Triple Data Encryption Standard (3DES), and Advanced Encryption Standard (AES), which form the backbone of Transport Layer Security (TLS). Further, the proposed analysis can be applied universally, independent of any particular hardware or architecture.
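The counting methodology can be sketched as follows: wrap the primitive word operations in a counter and tally them while a round function processes one block. The toy Feistel round below is purely illustrative (it is not DES, 3DES, or AES); it only demonstrates how a per-block, architecture-independent operation count is obtained.

```python
# Count primitive operations (AND, OR, XOR, SHIFT) used per block by a toy cipher.
class OpCounter:
    def __init__(self):
        self.counts = {"AND": 0, "OR": 0, "XOR": 0, "SHIFT": 0}
    def AND(self, a, b): self.counts["AND"] += 1; return a & b
    def OR(self, a, b):  self.counts["OR"]  += 1; return a | b
    def XOR(self, a, b): self.counts["XOR"] += 1; return a ^ b
    def SHL(self, a, k): self.counts["SHIFT"] += 1; return (a << k) & 0xFFFFFFFF
    def SHR(self, a, k): self.counts["SHIFT"] += 1; return a >> k

def toy_feistel_round(ops, left, right, subkey):
    # f(R, K) = (R rotated left by 3) XOR K; the round outputs (R, L XOR f).
    rotated = ops.OR(ops.SHL(right, 3), ops.SHR(right, 29))   # 32-bit rotate
    f = ops.XOR(rotated, subkey)
    return right, ops.XOR(left, f)

ops = OpCounter()
L, R = 0x01234567, 0x89ABCDEF
for subkey in (0xA5A5A5A5, 0x5A5A5A5A, 0x0F0F0F0F):   # three toy rounds per block
    L, R = toy_feistel_round(ops, L, R, subkey)
print("primitive operations per 64-bit block:", ops.counts)
```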
One of the most important problems in option pricing theory is the valuation and optimal exercise of derivatives with American-style exercise features. These types of derivatives are found in all major financial markets. Simulation is a promising alternative to traditional numerical methods and has many advantages as a framework for valuing American options. Recently, Longstaff and Schwartz presented a simple, yet powerful, least-squares Monte Carlo (LSM) algorithm for approximating the value of American options by simulation. This article provides a computational complexity analysis of the LSM algorithm. Essentially, the technique of computational complexity analysis is to break down a computational algorithm into logical modules and analyze the effect on the algorithm of adding or deleting logical modules. Computational complexity analysis is important in algorithm design because of structural differences in computer and human logic. Algorithms that seem perfectly natural and logical from the human perspective may sometimes be found to contain unnecessary complexity when analysed from the computer's perspective. The results showed that a new algorithm, constructed by removing the least-squares module altogether from the LSM algorithm, not only improves the computational speed but also produces results that are more accurate than the LSM.
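A compact sketch of the LSM algorithm itself is given below, so that the least-squares module (the regression of discounted continuation values on in-the-money paths) is visible as a distinct step; the market parameters, basis functions, and path counts are illustrative assumptions.

```python
# Minimal Longstaff-Schwartz least-squares Monte Carlo pricer for an American put.
import numpy as np

def lsm_american_put(S0=36., K=40., r=0.06, sigma=0.2, T=1., steps=50,
                     paths=20_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)
    # Simulate GBM price paths, shape (paths, steps + 1).
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z,
                              axis=1))
    S = np.hstack([np.full((paths, 1), S0), S])
    cash = np.maximum(K - S[:, -1], 0.0)          # payoff at maturity
    for t in range(steps - 1, 0, -1):
        cash *= disc                              # discount one step back
        itm = K - S[:, t] > 0                     # regress on in-the-money paths only
        if itm.any():
            x = S[itm, t]
            A = np.column_stack([np.ones_like(x), x, x * x])   # basis: 1, S, S^2
            coeffs, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
            continuation = A @ coeffs
            exercise = (K - x) > continuation     # exercise where payoff beats continuation
            cash[itm] = np.where(exercise, K - x, cash[itm])
    return disc * cash.mean()

print(f"LSM American put value ~ {lsm_american_put():.3f}")
```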