ISBN (print): 9781467351652; 9780769549149
A high-definition (HD) video decoder processes a large amount of data within a bounded time and thus requires high memory bandwidth, which can dominate system performance, especially in embedded systems. In this paper, we propose a lossless hybrid display-frame compression algorithm called HPE (Hybrid Pixel Encoding) for HD video decoders, designed for high compression performance and real-time operation. The algorithm combines dictionary coding and run-length coding to achieve a low compression ratio. We integrate the proposed design into an H.264 decoder. Experimental results show that the proposed algorithm achieves a 67.7% data reduction ratio on average when decoding 1080 HD video, making it considerably more effective than previous works.
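As a rough illustration of the dictionary-plus-run-length combination this abstract describes (the published HPE design is not detailed here, so the byte-level token format, run threshold, and tiny static dictionary below are illustrative assumptions), a minimal Python sketch:

```python
# Illustrative hybrid of run-length and small-dictionary coding for pixel
# bytes. This is NOT the published HPE design; it only sketches the idea of
# choosing between a run token and a dictionary reference per position.

def hybrid_encode(pixels, dictionary):
    """Encode a byte sequence as (kind, payload) tokens.

    kind 'R': run of a repeated byte  -> (value, length)
    kind 'D': dictionary hit          -> index into `dictionary`
    kind 'L': literal byte            -> raw value
    """
    out, i, n = [], 0, len(pixels)
    index = {v: k for k, v in enumerate(dictionary)}
    while i < n:
        run = 1
        while i + run < n and pixels[i + run] == pixels[i]:
            run += 1
        if run >= 3:                       # runs pay off past a threshold
            out.append(('R', (pixels[i], run)))
            i += run
        elif pixels[i] in index:           # otherwise try the dictionary
            out.append(('D', index[pixels[i]]))
            i += 1
        else:
            out.append(('L', pixels[i]))
            i += 1
    return out

def hybrid_decode(tokens, dictionary):
    out = []
    for kind, payload in tokens:
        if kind == 'R':
            value, length = payload
            out.extend([value] * length)
        elif kind == 'D':
            out.append(dictionary[payload])
        else:
            out.append(payload)
    return out

data = [7, 7, 7, 7, 3, 9, 3, 3, 3, 5]
dic = [3, 5, 9]                            # hypothetical static dictionary
tokens = hybrid_encode(data, dic)
assert hybrid_decode(tokens, dic) == data
```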
ISBN (print): 9781467324212
Present-day automobiles have a variety of attractive features. Incorporating these features requires more electronics, i.e., embedded systems, inside the vehicle. The Controller Area Network (CAN) protocol offers a low-cost solution for communication between these embedded systems, which communicate on the CAN bus via message passing. With the limited speed and bandwidth offered by CAN, communication is constrained. One solution to overcome these limitations is the use of Data-Reduction (DR) algorithms, which make it possible to send a smaller amount of data in a given time, thus reducing the bandwidth per message. This paper develops an alternative data-reduction technique called the Quotient Remainder Compression (QRC) algorithm. It provides double, or at least an equal, range of parameter variation compared to the Enhanced Data Reduction (EDR) algorithm, while its compression ratio remains comparable to the earlier algorithms.
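The abstract does not spell out QRC's encoding, so the following is only a minimal sketch of the general quotient/remainder idea, with an assumed divisor and field widths:

```python
# Rough sketch of the quotient/remainder idea behind QRC-style encoding:
# split a sensor value into q = v // D and r = v % D so each part fits in
# fewer bits than v itself. The divisor D and the 16-bit range below are
# illustrative assumptions, not the published QRC parameters.

D = 256  # assumed divisor: one byte of remainder

def qr_encode(value):
    """Split a non-negative integer into (quotient, remainder) fields."""
    q, r = divmod(value, D)
    assert 0 <= q < 256, "value out of this sketch's 16-bit range"
    return q, r

def qr_decode(q, r):
    return q * D + r

rpm = 6543                      # hypothetical engine-speed parameter
q, r = qr_encode(rpm)           # q=25, r=143: two small one-byte fields
assert qr_decode(q, r) == rpm
```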
ISBN (print): 9781424437849
This paper presents a study of transform methods used in lossless text compression to preprocess the text by exploiting the inner redundancy of the source file. The transform methods are derived from the Star (*) Transform. LIPT, ILPT, NIT, and LIT are applied to text files, demonstrating their positive effects on a set of test files drawn from the classical corpora of both English and Romanian texts. Experimental comparisons with universal lossless compressors were performed, and a set of conclusions and recommendations is drawn on that basis.
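For readers unfamiliar with the Star (*) Transform family, a minimal sketch of the basic idea follows; the tiny dictionary and codeword scheme are illustrative assumptions, not the published LIPT/ILPT/NIT/LIT rules:

```python
# Minimal sketch of the Star (*) Transform idea: each word found in a shared
# static dictionary is replaced by a same-length codeword made mostly of '*',
# which raises the redundancy seen by a downstream compressor. Real variants
# also handle case, punctuation, and guaranteed reversibility; this does not.

import re

def build_codewords(words):
    """Map each dictionary word to a same-length, '*'-heavy codeword."""
    codes, counters = {}, {}
    for w in words:
        n = counters.get(len(w), 0)
        counters[len(w)] = n + 1
        # first word of each length -> all stars; later ones vary last char
        code = '*' * (len(w) - 1) + ('*' if n == 0 else chr(ord('a') + n - 1))
        codes[w] = code
    return codes

DICT = ['the', 'and', 'compression', 'text']   # stand-in shared dictionary
CODES = build_codewords(DICT)

def star_encode(text):
    return re.sub(r'[A-Za-z]+',
                  lambda m: CODES.get(m.group(0).lower(), m.group(0)), text)

print(star_encode('the text and the compression'))
# -> '*** **** **a *** ***********'
```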
ISBN (print): 9781538612248
This paper focuses on universal compression of a piecewise stationary source using sequential change detection algorithms. The change detection algorithms considered assume minimal knowledge of the source and make use of universal estimators of entropy. Here, the data in each segment are characterized either by an i.i.d. random process or by a first-order Markov process. A simulation study of the modified sequential change detection test proposed by Jacob and Bansal [1] is carried out. Next, an algorithm is proposed to effectively compress a piecewise stationary sequence using such change detection algorithms. The overall compression efficiency achieved with Page's Cumulative Sum (CUSUM) test and with the modified change detection test of [1] (the JB-Page test) as the change detection scheme is compared. Further, when the JB-Page test is used for change detection, four compression algorithms, namely Lempel-Ziv-Welch (LZW), Lempel-Ziv (LZ78), the Burrows-Wheeler Transform (BWT), and Context Tree Weighting (CTW), are compared based on their impact on overall compression.
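Page's CUSUM test, one of the change detection schemes named in this abstract, can be sketched as follows; for brevity the sketch assumes known Gaussian pre- and post-change densities rather than the universal entropy estimators the paper relies on:

```python
# Minimal sketch of Page's CUSUM test for a change in the mean of i.i.d.
# Gaussian data: accumulate the log-likelihood ratio, clamp at zero, and
# declare a change when the statistic crosses a threshold.

import random

def cusum(samples, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0):
    """Return the index where the CUSUM statistic crosses `threshold`,
    or None if no change is declared."""
    s = 0.0
    for t, x in enumerate(samples):
        # log-likelihood ratio of post-change vs pre-change density
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        s = max(0.0, s + llr)
        if s > threshold:
            return t
    return None

random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + \
       [random.gauss(1, 1) for _ in range(200)]
print(cusum(data))   # alarm index shortly after the true change at t = 200
```

In the paper's setting, each declared change point would restart the compressor's source model for the next segment.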
ISBN (print): 9781612846552
Post-silicon validation and debug have gained importance in recent years for tracking down errors that escaped the pre-silicon phase. The limited observability of internal signals during post-silicon debug necessitates storing signal states in real time; trace buffers are used to store these states. To increase the debug observation window, it is essential to compress the trace signals, so that trace data over a larger number of cycles can be stored in the trace buffer while keeping its size constant. In this paper, we propose several dictionary-based techniques for trace data compression that take into account the fact that the difference between golden and erroneous trace data is small. A static dictionary selected from golden trace data can therefore provide notably better compression than the dynamic dictionaries used in current approaches, and it also significantly reduces the hardware overhead by reducing the dictionary size. Our experimental results demonstrate that our approach provides up to 60% better compression than existing approaches while reducing the architecture overhead by 84%.
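A minimal sketch of the static-dictionary idea, assuming a word-level dictionary built from the most frequent golden-trace entries (the paper's actual dictionary selection and encoding format are not reproduced here):

```python
# Sketch of the static-dictionary idea: build the dictionary once from the
# golden (error-free) trace, then compress observed traces against it. Since
# erroneous traces differ from the golden trace in only a few entries, most
# words hit the dictionary. Word width and dictionary size are illustrative.

from collections import Counter

def build_static_dictionary(golden_trace, size=16):
    """Pick the `size` most frequent trace words from the golden trace."""
    return [w for w, _ in Counter(golden_trace).most_common(size)]

def compress(trace, dictionary):
    """Each word becomes ('D', index) on a hit or ('L', word) on a miss."""
    index = {w: i for i, w in enumerate(dictionary)}
    return [('D', index[w]) if w in index else ('L', w) for w in trace]

golden = [0xA5, 0xA5, 0x3C, 0xA5, 0x3C, 0x7E] * 10
observed = list(golden)
observed[7] ^= 0x01                  # a single-bit error vs the golden trace
dic = build_static_dictionary(golden)
tokens = compress(observed, dic)
misses = sum(1 for kind, _ in tokens if kind == 'L')
print(misses)                        # 1: only the erroneous word misses
```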
Data compression algorithms are used in many fields to manipulate and store large amounts of information. Data exchange between different systems has become crucial nowadays and actively contributes to those systems' efficiency. Besides applications in file archiving, multimedia, and communication, compression algorithms have lately been used in embedded-system applications as well. In hardware environments with limited resources, such as embedded systems, compression algorithms save storage space and improve power consumption by reducing the amount of data that must be handled. This paper proposes a compression algorithm for GPS navigation data applied to extended grid coordinates. In addition, the paper presents a navigation method for mobile robots that uses the compressed coordinates directly. The algorithm is validated by numerical verification of the results in different scenarios.
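The abstract does not detail the published coordinate encoding, so the following sketch only illustrates one generic way to compress integer grid coordinates, by delta-encoding successive points:

```python
# Generic sketch (not the paper's scheme): store the first grid point
# absolutely and every later point as a small (dx, dy) delta, which stays
# tiny for a robot moving between neighbouring grid cells.

def delta_encode(path):
    """path: list of (x, y) integer grid coordinates."""
    origin = path[0]
    deltas = [(x - px, y - py)
              for (px, py), (x, y) in zip(path, path[1:])]
    return origin, deltas

def delta_decode(origin, deltas):
    path, (x, y) = [origin], origin
    for dx, dy in deltas:
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

path = [(1052, 7741), (1053, 7741), (1053, 7742), (1054, 7743)]
origin, deltas = delta_encode(path)
assert delta_decode(origin, deltas) == path
print(deltas)   # [(1, 0), (0, 1), (1, 1)] -- small values, few bits each
```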
Summary form only given. We present a detailed description of a lossless compression algorithm intended for use on files with non-uniform character distributions. This algorithm takes advantage of the relatively small...
ISBN (print): 9780769551401
We present a compression algorithm and a streaming protocol designed for streaming computer-desktop graphics. The encoder has low memory requirements and can be broken into a large number of independent contexts with a high degree of data locality. It also uses only simple arithmetic, which makes it amenable to hardware or highly parallel software implementation. The decoder is trivial and requires no memory, making it suitable for devices with limited computing capabilities. The streaming protocol runs over UDP and has its own error recovery mechanism designed specifically for interactive applications.
ISBN (print): 9780769549118
Encoding images containing mixed content in web-compatible formats such as PNG, GIF, and JPEG poses challenges in maintaining quality along with good compression. In this paper, we describe a method for improving the compression of web-compatible compound images while retaining high quality. The proposed method is based on the principle that the compression ratio of a given image depends on both the image content and the compression algorithm used; therefore, encoding each tiled region of an image with the algorithm appropriate to that region's content improves the overall compression. Further optimizations, such as sharing indexed colors between tiles of the same image and removing duplicate tiles, improve the compression further. The method supports both lossy and lossless compression, and the resulting encoded data can be rendered by a web browser supporting CSS background-cropping or the HTML5 canvas APIs.
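A minimal sketch of the per-tile codec selection principle described above, assuming a simple colour-count heuristic and Pillow's PNG/JPEG encoders (the 64-pixel tile size and 256-colour threshold are illustrative, not the paper's parameters):

```python
# Sketch of per-tile codec choice: tiles with few distinct colours
# (text/graphics) compress well losslessly (PNG), while colour-rich
# photographic tiles suit lossy JPEG coding.

from io import BytesIO
from PIL import Image  # Pillow

TILE = 64

def choose_format(tile):
    colors = tile.getcolors(maxcolors=256)   # None if > 256 distinct colours
    return 'PNG' if colors is not None else 'JPEG'

def encode_tiled(image):
    """Encode each TILE x TILE region with the format suited to its content."""
    encoded = []
    for top in range(0, image.height, TILE):
        for left in range(0, image.width, TILE):
            tile = image.crop((left, top,
                               min(left + TILE, image.width),
                               min(top + TILE, image.height)))
            fmt, buf = choose_format(tile), BytesIO()
            tile.convert('RGB').save(buf, format=fmt)
            encoded.append((left, top, fmt, buf.getvalue()))
    return encoded

img = Image.new('RGB', (128, 128), 'white')   # stand-in compound image
tiles = encode_tiled(img)
print(len(tiles), tiles[0][2])                # 4 tiles, all PNG (flat colour)
```

The duplicate-tile removal the abstract mentions could then hash each encoded payload and store repeated tiles only once.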
Author: Aggoun, A., Brunel Univ, Imaging Technol Grp 3D, Sch Engn & Design, Uxbridge UB8 3PH, Middx, England
ISBN (print): 9781424404681
A compression scheme for Omnidirectional Integral Image (OII) data is described which uses a three-dimensional DCT to exploit the intra-sub-image correlation together with the horizontal and vertical inter-sub-image correlation, resulting in a very efficient de-correlation of the source intensity distribution. The nature of the recorded intensity distribution data, with respect to the redundancies present and the structure of the data representing the image object, is investigated. A three-dimensional scalar quantisation array is applied to the DCT coefficients, which are then entropy coded by a Huffman-based coder. The results obtained after applying the 3D-DCT-based scheme to OII data are presented, discussed, and compared with simulations produced using the JPEG scheme.
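A minimal sketch of the 3D DCT and scalar quantisation steps, using SciPy's separable multidimensional DCT; the block size and uniform quantisation array are illustrative placeholders for the scheme's actual tables, and the Huffman entropy-coding stage is omitted:

```python
# Sketch of a 3D-DCT-plus-scalar-quantisation pipeline: transform a block of
# sub-image data, quantise the coefficients with a per-coefficient step
# array, and invert both steps at the decoder.

import numpy as np
from scipy.fft import dctn, idctn

BLOCK = (8, 8, 8)  # (sub-image index, vertical, horizontal), assumed size

def encode_block(block, qstep):
    coeffs = dctn(block, type=2, norm='ortho')     # 3D DCT
    return np.round(coeffs / qstep).astype(np.int32)

def decode_block(qcoeffs, qstep):
    return idctn(qcoeffs * qstep, type=2, norm='ortho')

rng = np.random.default_rng(0)
block = rng.standard_normal(BLOCK)                 # stand-in OII block
qstep = np.full(BLOCK, 0.5)                        # uniform quantiser array
recon = decode_block(encode_block(block, qstep), qstep)
print(np.max(np.abs(recon - block)))               # small quantisation error
```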