A motion estimation algorithm based on the Phase-Only Correlation (POC) function is proposed. Motion estimation is one of the key tasks in several applications such as video encoding. It is the process of finding the vertical and horizontal displacement of a block, and the accuracy of this process depends on the search method and the matching criteria. The proposed method calculates the POC over the entire frame, which gives a good estimate of the displacements that occur within it. The size of each peak depends on the size of the area moving in the direction it represents. We select a set of peaks that can be coded in a lookup table. Then, the Sum of Absolute Differences (SAD) is calculated for each vector in the table, and the vector with the lowest SAD for the block is selected as the motion vector. The search method based on POC provides better results than Full Search.
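The final selection step can be sketched as follows: given a small table of candidate displacements (standing in for the POC peaks), pick the one with the lowest SAD for the block. This is a minimal illustration, not the paper's implementation; the candidate table here is hypothetical.

```python
import numpy as np

def sad(block, ref, dy, dx):
    """Sum of Absolute Differences between a block and a shifted reference region."""
    h, w = block.shape
    cand = ref[dy:dy + h, dx:dx + w]
    return np.abs(block.astype(int) - cand.astype(int)).sum()

def best_vector(block, ref, y, x, candidates):
    """Pick, from a small candidate table, the displacement with the lowest SAD."""
    scored = [(sad(block, ref, y + dy, x + dx), (dy, dx)) for dy, dx in candidates]
    return min(scored)[1]

# toy example: the reference frame contains the block shifted by (1, 2)
rng = np.random.default_rng(0)
ref = rng.integers(0, 255, (16, 16), dtype=np.uint8)
y, x = 4, 4
block = ref[y + 1:y + 1 + 8, x + 2:x + 2 + 8]  # block moved by (1, 2)
table = [(0, 0), (1, 2), (2, 1), (-1, 0)]      # stand-in for the POC peak table
print(best_vector(block, ref, y, x, table))    # → (1, 2)
```

Because POC narrows the search to a handful of vectors, the SAD evaluation touches far fewer positions than a full search over the whole window.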
ISBN:
(Print) 9781424474547
This paper presents the power-performance trade-off of three different cache compression algorithms. Cache compression improves performance, since compressed data increases the effective cache capacity and thereby reduces cache misses. The unused memory cells can be put into sleep mode to save static power. The performance gained and the power saved through cache compression must exceed the delay and power consumption added by the CODEC (compressor and decompressor) block. Among the studied algorithms, the power-delay characteristic of Frequent Pattern Compression (FPC) is found to be the most suitable for cache compression.
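The idea behind FPC-style schemes is that a short prefix selects one of a few frequent word patterns, so common values cost far fewer bits than a full word. The sketch below counts the encoded size of a cache line under a simplified pattern set; the patterns shown are an illustrative subset, not the full FPC table from the paper.

```python
def fpc_word_bits(w):
    """Bits to encode one 32-bit word under a simplified FPC-style scheme:
    a 2-bit prefix selects among four patterns (illustrative subset only)."""
    w &= 0xFFFFFFFF
    signed = w - (1 << 32) if w & 0x80000000 else w
    if w == 0:
        return 2            # zero word: prefix only
    if -128 <= signed < 128:
        return 2 + 8        # sign-extended byte
    if -32768 <= signed < 32768:
        return 2 + 16       # sign-extended halfword
    return 2 + 32           # uncompressible word: prefix plus full 32 bits

# a hypothetical 8-word cache line with typical integer content
line = [0, 0, 5, -1 & 0xFFFFFFFF, 300, 0x12345678, 0, 7]
bits = sum(fpc_word_bits(w) for w in line)
print(bits, "bits vs", 32 * len(line), "uncompressed")   # → 88 bits vs 256 uncompressed
```

Lines dominated by zeros and small integers shrink sharply, which is what frees cache capacity and lets the unused cells sleep.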
The development of space telemetry technology has brought forward the need for large-capacity memory in solid-state recorders; data compression therefore becomes more and more important. The compression feasibility and potential of telemetry data are examined by analyzing the statistical characteristics of actual telemetry data recovered from recorders. To address the disadvantages of present data formats for compressing multi-channel telemetry data, this paper introduces a data packet structure and a real-time compression algorithm suited to low-complexity hardware design. The principles and implementation of data packet compression are described. Simulation results show that this technique meets the requirements of multi-channel real-time data compression with a high compression ratio and fast compression speed, giving it great application value.
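Telemetry channels typically drift slowly, so low-complexity schemes such as delta coding followed by run-length coding compress them well in real time. The sketch below illustrates that general idea only; the paper's actual packet format and algorithm are not reproduced here.

```python
def delta_rle(samples):
    """Delta-encode a channel, then run-length-encode the deltas.
    A minimal sketch of a low-complexity real-time scheme; the first
    entry carries the raw initial sample."""
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    out, run = [], 1
    for prev, cur in zip(deltas, deltas[1:]):
        if cur == prev:
            run += 1
        else:
            out.append((prev, run)); run = 1
    out.append((deltas[-1], run))
    return out

# slowly drifting sensor channel: long runs of identical deltas
chan = [100, 101, 102, 103, 103, 103, 104, 105]
print(delta_rle(chan))   # → [(100, 1), (1, 3), (0, 2), (1, 2)]
```

Eight samples become four (delta, run) pairs; on real telemetry with long stable stretches the gain is much larger, and both passes are trivial to implement in hardware.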
Summary form only given. We implemented parallel algorithms for vector quantization (VQ) compression in a shared-memory parallel environment and evaluated their effectiveness. On such a system, we evaluate two parallel algorithms for codebook generation in VQ compression, parallel LBG and parallel tPNN, and find that parallel tPNN is superior in terms of space complexity, whereas parallel LBG is superior in terms of time complexity and parallelism.
ISBN:
(Print) 9783540958901
This paper considers online compression algorithms that use at most polylogarithmic space (plogon). These algorithms correspond to compressors in the data stream model. We study the performance attained by these algorithms and show they are incomparable with both pushdown compressors and the Lempel-Ziv compression algorithm.
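For concreteness, the Lempel-Ziv compressor the paper compares against can be illustrated with textbook LZ78, which parses the input into phrases and emits (dictionary index, next symbol) pairs. This is a plain reference implementation, not tuned for efficiency.

```python
def lz78(s):
    """Textbook LZ78: parse s into phrases, emitting (index, next char) pairs,
    where index 0 denotes the empty phrase."""
    dict_, out, w = {}, [], ""
    for c in s:
        if w + c in dict_:
            w += c                        # extend the current phrase
        else:
            out.append((dict_.get(w, 0), c))
            dict_[w + c] = len(dict_) + 1  # register the new phrase
            w = ""
    if w:
        out.append((dict_[w], ""))        # flush a trailing known phrase
    return out

print(lz78("abababab"))   # → [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a'), (2, '')]
```

Note that the dictionary grows with the input, so LZ78 is not a polylogarithmic-space algorithm; that gap is exactly what makes the comparison with plogon compressors interesting.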
This paper presents two new algorithms for point compression for elliptic curves defined over F_{2^m}, m odd. The first algorithm works for curves with Tr(a) = 1 and offers computational advantages over previous methods. The second algorithm is based on the λ representation of an elliptic point. The proposed algorithms require m bits to compress an elliptic point and can be used for all random binary curves recommended by NIST.
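The general principle of point compression is to store only the x-coordinate plus one bit that disambiguates the two possible y values. The paper's algorithms do this over binary fields F_{2^m}; the sketch below shows the same idea in its simplest setting, a toy prime-field curve with p ≡ 3 (mod 4), where a modular square root recovers y. The curve parameters are illustrative, not from the paper.

```python
# Toy prime-field curve y^2 = x^3 + 2x + 3 over F_19 (parameters for
# illustration only; the paper's algorithms target binary curves F_{2^m}).
p, a, b = 19, 2, 3

def compress(P):
    """Store x plus one bit (the parity of y) instead of the full (x, y) pair."""
    x, y = P
    return x, y & 1

def decompress(x, bit):
    rhs = (x**3 + a*x + b) % p
    y = pow(rhs, (p + 1) // 4, p)    # square root mod p, valid since p % 4 == 3
    if y & 1 != bit:
        y = p - y                    # the other root has the opposite parity
    return x, y

P = (1, 5)                           # on the curve: 5^2 = 25 ≡ 6 = 1 + 2 + 3 (mod 19)
print(decompress(*compress(P)))      # → (1, 5)
```

Over F_{2^m} the disambiguating bit comes from the trace or λ representation rather than parity, which is where the two proposed algorithms differ from this prime-field picture.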
ISBN:
(Print) 9780769535012
To implement a real-time video compression system, the compression algorithm should be improved, and a reasonable control framework is also needed that adds no extra overhead and makes efficient use of hardware resources. With the aim of improving the parallelism between the on-chip peripherals and the core, this paper first analyzes the disadvantages of the ping-pong-buffer pipeline mechanism, and then introduces an improved control framework based on dual-core DSP polling control and ping-pong buffers that avoids those disadvantages. An H.264-based video encoder using this control framework was implemented on the ADSP-BF561 platform, and the experimental results show that the framework meets the video compression system's real-time and reliability requirements by reducing the coding time, lowering the frame-loss frequency, and improving the system's working efficiency.
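The ping-pong mechanism the paper builds on can be sketched with two buffers cycling between a capture side and an encoder: while one buffer is being encoded, the other is being filled. This is a schematic illustration in threads, not the DSP/DMA implementation; names and sizes are made up.

```python
import queue
import threading

# Two buffers cycle between "free" (ready to fill) and "done" (ready to encode).
free, done = queue.Queue(), queue.Queue()
buf_a, buf_b = bytearray(8), bytearray(8)
for buf in (buf_a, buf_b):
    free.put(buf)

def capture(frames):
    """Stand-in for the DMA/capture side filling whichever buffer is free."""
    for i in range(frames):
        buf = free.get()           # wait until the encoder releases a buffer
        buf[:] = bytes([i]) * 8    # "transfer" one frame into the buffer
        done.put(buf)
    done.put(None)                 # end-of-stream marker

results = []
t = threading.Thread(target=capture, args=(4,))
t.start()
while (buf := done.get()) is not None:
    results.append(buf[0])         # stand-in for encoding the frame
    free.put(buf)                  # hand the buffer back: ping-pong
t.join()
print(results)                     # → [0, 1, 2, 3]
```

The pipeline stalls exactly when one side outruns the other, which is the coupling the paper's dual-core polling framework is designed to loosen.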
Several popular lossless image compression algorithms were evaluated for the application of compressing medical infrared images. Lossless JPEG, JPEG-LS, JPEG2000, PNG, and CALIC were tested on an image dataset of 380+ thermal images. The results show that JPEG-LS is the algorithm with the best performance, both in terms of compression ratio and compression speed.
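A comparison of this kind reduces to compressing the same dataset with each codec and reporting ratio (and timing). The skeleton below shows the measurement loop using stdlib general-purpose codecs on synthetic data; the paper itself tests image codecs such as JPEG-LS and CALIC on real thermal images, which this does not reproduce.

```python
import bz2
import lzma
import zlib

# synthetic stand-in for a test image: 20 KB of mixed content
data = bytes(range(256)) * 64 + b"\x00" * 4096

# measure each codec on the same input and report the compression ratio
for name, fn in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
    out = fn(data)
    print(f"{name}: ratio {len(data) / len(out):.1f}x")
```

Wrapping each `fn(data)` call in a timer would add the compression-speed axis the study also reports.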
An identity test is a hypothesis test defined over the class of stationary and ergodic sources, used to decide whether a sequence of random variables originated from a known source or from an unknown one. For an identity test proposed by Ryabko and Astola in 2005, which makes use of an arbitrary pointwise universal compression algorithm and the null distribution π to define the critical region, we have studied the rate at which the type-2 error goes to zero as the sample size goes to infinity. A formal link is established between this rate and the redundancy rate of the compression algorithm in use, for the class of Markov processes, by an application of the method of types.
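The flavor of such a test can be sketched as follows: compare the codelength a universal compressor achieves on the observed sequence against the ideal codelength −log₂ π(x) under the known source, and reject when the compressor wins by a wide margin. This is only a sketch of the idea with zlib standing in for the universal compressor; the threshold and the exact construction of Ryabko and Astola's critical region are not reproduced.

```python
import math
import zlib

def codelength_bits(seq):
    """Codelength assigned by a general-purpose compressor (zlib stands in
    for the arbitrary pointwise universal compressor of the test)."""
    return 8 * len(zlib.compress(bytes(seq)))

def identity_test(seq, p_one, threshold_bits=64):
    """Reject H0 ('seq is i.i.d. Bernoulli(p_one)') when the compressor beats
    the null source's ideal codelength by more than threshold_bits."""
    null_bits = sum(-math.log2(p_one if s else 1 - p_one) for s in seq)
    return null_bits - codelength_bits(seq) > threshold_bits

alternating = [i % 2 for i in range(2000)]   # highly structured, very compressible
print(identity_test(alternating, 0.5))       # → True: rejected as a fair i.i.d. coin
```

A better compressor (lower redundancy) widens the margin for non-null sequences, which is the intuition behind linking the type-2 error rate to the compressor's redundancy rate.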
Recent processors employ a variety of parallel processing technologies to boost their performance, so it is desirable that multimedia applications can be efficiently parallelized and easily implemented on such processors. We implemented parallel algorithms for VQ compression in a shared-memory parallel environment and evaluated the effectiveness of the parallel algorithms. On such a system, we evaluate two parallel algorithms for codebook generation in VQ compression, parallel LBG and parallel tPNN, and find that parallel tPNN is superior in terms of space complexity, whereas parallel LBG is superior in terms of time complexity and parallelism. For the codeword search, on the other hand, the p-dist approach and the c-dist approach with the aggregation of synchronizations are suitable for a small codebook, while the c-dist approach and the p-dist approach with the ADM or strip-mining method are suitable for a large codebook. However, since the aggregation of synchronizations and the strip-mining method increase the space complexity of the algorithm, the p-dist approach and the c-dist approach are more suitable for a small codebook and a large codebook, respectively.
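The LBG algorithm at the heart of the codebook-generation comparison is the generalized Lloyd iteration: assign each training vector to its nearest codeword, then move each codeword to the mean of its assigned vectors. A plain sequential sketch is below; the parallel variants in the paper split exactly these two loops across threads.

```python
import numpy as np

def lbg(data, k, iters=20, seed=0):
    """Sequential LBG / generalized Lloyd codebook training:
    alternate nearest-codeword assignment and centroid update."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), k, replace=False)].astype(float)
    for _ in range(iters):
        # assignment step: squared distance from every vector to every codeword
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        # update step: move each codeword to the mean of its cell
        for j in range(k):
            members = data[assign == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook, assign

# two well-separated clusters -> the two codewords land on the cluster centers
pts = np.vstack([np.zeros((50, 2)), np.full((50, 2), 10.0)])
cb, _ = lbg(pts, 2)
print(sorted(round(v[0]) for v in cb))   # → [0, 10]
```

The assignment step is embarrassingly parallel over vectors, while the update step requires reductions per codeword; that asymmetry is what drives the p-dist versus c-dist trade-offs discussed above.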