ISBN:
(Print) 9781467325332; 9781467325349
We propose a hierarchical lossy bilevel image compression method that relies on adaptive cutset sampling (along lines of a rectangular grid with variable block size) and Markov Random Field based reconstruction. It is an efficient encoding scheme that preserves image structure by using a coarser grid in smooth areas of the image and a finer grid in areas with more detail. Experimental results demonstrate that the proposed method performs as well as or better than the fixed-grid approach, and outperforms other lossy bilevel compression methods in its rate-distortion performance.
ISBN:
(Print) 0819459763
One key technique for improving the coding efficiency of the H.264 video standard is its entropy coder, the context-adaptive binary arithmetic coder (CABAC). However, the encoding process of CABAC is significantly more complex than table-driven entropy coding schemes such as Huffman coding. CABAC is also bit-serial, and its multi-bit parallelization is extremely difficult. For a high-definition video encoder, multi-gigahertz RISC processors would be needed to implement the CABAC encoder. In this paper, we provide an efficient, pipelined VLSI architecture for CABAC encoding along with an analysis of critical issues. The solution encodes one binary symbol every cycle. An FPGA implementation of the proposed scheme, capable of a 104 Mbps encoding rate, and test results are presented. ASIC synthesis and simulation for a 0.18 µm process technology indicate that the design can encode 190 million binary symbols per second using an area of 0.35 mm².
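The serial nature of binary arithmetic coding that the abstract mentions can be illustrated with a minimal sketch. This is not the H.264 CABAC algorithm (which adds context modeling and table-driven probability estimation); it is a generic integer binary arithmetic encoder with a static symbol probability, showing why each symbol's interval update depends on the renormalized state left by the previous symbol:

```python
def encode_bits(bits, p1=0.5):
    """Encode a list of 0/1 symbols; p1 is a static probability of symbol 1.
    Illustrative sketch only: flush handling is simplified and no decoder
    is included."""
    low, high = 0, (1 << 16) - 1
    pending = 0          # bits whose value awaits carry resolution (E3 case)
    out = []

    def emit(bit):
        nonlocal pending
        out.append(bit)
        out.extend([1 - bit] * pending)
        pending = 0

    for b in bits:
        span = high - low + 1
        split = low + int(span * (1 - p1)) - 1  # boundary of the 0-interval
        if b == 0:
            high = split
        else:
            low = split + 1
        # Renormalization: an inherently serial, state-carrying loop.
        while True:
            if high < 1 << 15:                       # both in lower half
                emit(0)
            elif low >= 1 << 15:                     # both in upper half
                emit(1)
                low -= 1 << 15
                high -= 1 << 15
            elif low >= 1 << 14 and high < 3 << 14:  # interval straddles mid
                pending += 1
                low -= 1 << 14
                high -= 1 << 14
            else:
                break
            low <<= 1
            high = (high << 1) | 1
    emit(1)  # crude flush: one disambiguating bit plus any pending bits
    return out
```

With p1 = 0.5 every symbol triggers exactly one renormalization shift, so the output tracks the input bit-for-bit plus the flush bit; the pipelining problem the paper attacks comes from this loop-carried dependence between consecutive symbols.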
ISBN:
(Print) 9781467399852
EPC tags for RFID were designed to accommodate a wide range of potential uses. The generous allocation for each of the fields in the EPC tag makes these tags inefficient, because their size is too large. To facilitate shorter EPC tags, an effective compression scheme for EPC tags is suggested, with the aim of having RFID equipment transmit fewer bits.
ISBN:
(Print) 9780769530451
JPEG2000 is the latest still-image coding standard. It was designed to overcome the limitations of the original JPEG standard and provide high-quality images at low bit rates. The JPEG2000 algorithm is fundamentally based on the Discrete Wavelet Transform (DWT) and Embedded Block Coding with Optimal Truncation (EBCOT). Both algorithms are computation- and memory-intensive. JPEG2000 uses an adapted version of the Q-coder, called the MQ-coder. Previous research mainly focused on speeding up the DWT and bit-plane coding rather than the renormalization part of the MQ-coder, even though the renormalization procedure accounts for a relatively high percentage of the computation. In this paper, we propose an enhanced renormalization algorithm for arithmetic coding in the EBCOT Tier-1 coding procedure. By analyzing and simplifying the existing algorithm, the proposed algorithm reduces the computation by a significant percentage. Experimental results show that the computation is reduced by approximately 25% compared to the renormalization procedure of regular arithmetic coding.
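To make the renormalization bottleneck concrete, here is a hedged sketch of the MQ-coder renormalization step: the baseline shifts the interval register A and code register C one bit at a time until A ≥ 0x8000, while a single-step variant computes the whole shift amount at once. The byte-out logic normally triggered when the counter CT reaches zero is omitted for clarity, and this simplification is only an illustration of the kind of loop the paper optimizes, not the paper's specific algorithm:

```python
def renorm_iterative(A, C, CT):
    """Baseline MQ-style renormalization: one shift per loop iteration."""
    while A < 0x8000:
        A = (A << 1) & 0xFFFF        # 16-bit interval register
        C = (C << 1) & 0xFFFFFFF     # 28-bit code register
        CT -= 1                      # byte-out at CT == 0 omitted here
    return A, C, CT

def renorm_single_step(A, C, CT):
    """Compute the full shift in one step from the leading zeros of A."""
    if A >= 0x8000:
        return A, C, CT
    shift = 16 - A.bit_length()      # doublings needed (assumes A > 0)
    return (A << shift) & 0xFFFF, (C << shift) & 0xFFFFFFF, CT - shift
```

Because truncation to a fixed width commutes with repeated left shifts, the two variants produce identical register states; the single-step form just removes the data-dependent loop.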
ISBN:
(Print) 0819454990
Past research in the field of cryptography has not given much consideration to arithmetic coding as a feasible encryption technique, with studies proving compression-specific arithmetic coding to be largely unsuitable for encryption. Nevertheless, adaptive modelling, in which a large model of variable structure is as completely as possible a function of the entire text transmitted since the model was initialised, is a suitable candidate for a combined encryption-compression scheme. The focus of the work presented in this paper is to incorporate recent results from chaos theory, proven to be cryptographically secure, into arithmetic coding; to devise a convenient method for making the structure of the model unpredictable and variable in nature; and yet to retain, as far as possible, statistical harmony, so that compression remains possible. A chaos-based adaptive arithmetic coding and encryption technique has been designed, developed, and tested, and its implementation is discussed. For typical text files, the proposed encoder gives compression between 67.5% and 70.5%, with the zero-order compression suffering by about 6% due to encryption, and it is not susceptible to previously published attacks on arithmetic coding algorithms.
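The history-dependent adaptive model the abstract relies on can be sketched as a minimal order-0 frequency model: counts start uniform and are updated after every symbol, so the coding intervals at any point depend on the entire text seen so far. The chaos-based, key-dependent perturbation of the model described in the paper is not reproduced here; this only shows the adaptive-modelling substrate:

```python
from collections import Counter

class AdaptiveModel:
    """Order-0 adaptive model: interval boundaries evolve with the input."""

    def __init__(self, alphabet):
        self.freq = Counter({s: 1 for s in alphabet})  # Laplace-style init
        self.total = len(alphabet)

    def interval(self, symbol):
        """Return (cum_low, cum_high, total) counts for `symbol`."""
        lo = 0
        for s in sorted(self.freq):
            if s == symbol:
                return lo, lo + self.freq[s], self.total
            lo += self.freq[s]
        raise KeyError(symbol)

    def update(self, symbol):
        """Adapt after coding one symbol; this is what makes the model a
        function of the whole transmitted history."""
        self.freq[symbol] += 1
        self.total += 1
```

An attacker who cannot replay the exact model evolution cannot recover the interval boundaries, which is the property the chaos-based variant turns into an encryption mechanism.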
ISBN:
(Print) 9788070439876
The requirements for safety-related software systems are increasing rapidly. To detect arbitrary hardware faults, coding mechanisms that add redundancy to the software can be applied. In this way it is possible to replace conventional multi-channel hardware and thus reduce costs. Arithmetic codes are one form of coded processing and are used in this approach. A further approach to increasing fault tolerance is the multiple execution of critical parts of the software. This kind of time redundancy is easily realized through parallel processing in an operating system, and faults in the program flow can be monitored. No special compilers that insert additional generated code into the existing program are required. The use of multi-core processors would further increase the performance of such multi-channel software systems. In this paper we present an approach that combines program flow monitoring with coded processing, encapsulated in a library of coded data types. The program flow monitoring is realized indirectly by means of an operating system.
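The arithmetic codes referred to here are commonly AN codes: a value x is stored as A*x, arithmetic is performed on the coded values, and a corrupted word is detected because it is no longer divisible by A. A single bit flip changes the word by ±2^k, which an odd A > 1 never divides. The constant below is an arbitrary example, not one taken from the paper:

```python
A = 61  # illustrative constant; real systems choose A for a target fault model

def encode(x):
    return A * x

def decode(cw):
    """Decoding doubles as the fault check: a valid codeword is divisible by A."""
    assert cw % A == 0, "hardware fault detected"
    return cw // A

def coded_add(cx, cy):
    # A*x + A*y == A*(x + y), so addition keeps the result in coded form.
    return cx + cy

cx, cy = encode(7), encode(5)
s = coded_add(cx, cy)
faulty = s ^ (1 << 3)          # simulate a single-bit hardware fault
assert decode(s) == 12         # fault-free path decodes correctly
assert faulty % A != 0         # the flipped bit is detectable
```

Multiplication needs an extra division by A to stay coded, which is one reason such schemes are packaged as a library of coded data types rather than left to application code.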
ISBN:
(Print) 9781424424238
Because of the feedback loops caused by iterative operations, the MQ arithmetic coder is usually the performance bottleneck of hardware architectures for the JPEG2000 algorithm. According to the different features of these loops, this paper adopts targeted optimization methods rather than general concurrent techniques to improve hardware efficiency as well as throughput. Based on careful analysis of data dependencies, circuit-level optimizations such as assistant loops and inverse multi-branch selection (IMBS) are used to improve the clock frequency. To address the low hardware utilization caused by the variable nature of the internal dataflow, the dataflow is reorganized, improving both hardware efficiency and throughput. The implementation results show that the throughput of our design can exceed that of designs based on traditional concurrent techniques. Moreover, the hardware utilization is much higher than that of existing architectures.
ISBN:
(Print) 0819427446
In this paper, we address the problem of lossless and nearly-lossless multispectral compression of remote-sensing data acquired by SPOT satellites. Lossless compression algorithms classically have two stages: transformation of the available data, and coding. The purpose of the first stage is to decorrelate the data in an optimal way. In the second stage, coding is performed by means of an arithmetic coder. We discuss two well-known approaches for spatial as well as multispectral compression of SPOT images: 1) the efficiency of several predictive techniques (MAP, CALIC, 3D predictors) is compared, and the advantages of 2D versus 3D error feedback and context modeling are examined; 2) the use of wavelet transforms for lossless multispectral compression is discussed. Applications of the above-mentioned methods to quincunx sampling are then evaluated. Lastly, some results on how predictive and wavelet techniques behave when nearly-lossless compression is needed are given.
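As an example of the 2D predictive techniques compared in such studies, the median edge detector (MED) of LOCO-I/JPEG-LS picks between horizontal, vertical, and planar prediction from the causal neighbors W (left), N (above), and NW (above-left); the arithmetic coder then compresses the prediction residuals. This is a standard textbook predictor shown for illustration, not the specific MAP predictor evaluated in the paper:

```python
def med_predict(W, N, NW):
    """Median edge detector: predict the current pixel from causal neighbors."""
    if NW >= max(W, N):
        return min(W, N)      # likely edge: follow the smaller neighbor
    if NW <= min(W, N):
        return max(W, N)      # likely edge the other way: larger neighbor
    return W + N - NW         # smooth region: planar (gradient) prediction
```

On a flat patch the residual is zero, and across a sharp vertical or horizontal edge the predictor snaps to the neighbor on the correct side, which is why MED-style predictors shrink the residual entropy that the arithmetic coder sees.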
ISBN:
(Print) 9781424417650
To improve the error resiliency of the JPEG2000 compression standard in wireless environments, the use of ternary MQ arithmetic coders/decoders based on the concept of a forbidden symbol has been proposed. This paper presents two ternary-MQ-based techniques that reduce both the computational complexity and the memory requirements of the decoding process, with little or no degradation in PSNR.
ISBN:
(Print) 9780769531229
This paper proposes an effective lossless information hiding scheme in which a host image is first quantized to generate spare space for hiding secret messages. The proposed scheme applies complexity analysis of neighboring pixels to predict the number of secret message bits that can be concealed in a pixel. In addition, it preserves the differences between the host image and the quantized image so that the host image can be completely restored. According to the experimental results, the information capacity of the proposed scheme is 0.9 bpp for the standard Lena image, whereas that of Maniccam and Bourbakis's scheme is 0.3 bpp.