We address the connection between the multiple-description (MD) problem and Delta-Sigma quantization. The inherent redundancy due to oversampling in Delta-Sigma quantization, and the simple linear-additive noise model resulting from dithered lattice quantization, allow us to construct a symmetric and time-invariant MD coding scheme. We show that the use of a noise-shaping filter makes it possible to trade off central distortion for side distortion. Asymptotically, as the dimension of the lattice vector quantizer and order of the noise-shaping filter approach infinity, the entropy rate of the dithered Delta-Sigma quantization scheme approaches the symmetric two-channel MD rate-distortion function for a memoryless Gaussian source and mean square error (MSE) fidelity criterion, at any side-to-central distortion ratio and any resolution. In the optimal scheme, the infinite-order noise-shaping filter must be minimum phase and have a piecewise flat power spectrum with a single jump discontinuity. An important advantage of the proposed design is that it is symmetric in rate and distortion by construction, so the coding rates of the descriptions are identical and there is therefore no need for source splitting.
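The noise-shaping loop described above can be illustrated with a minimal scalar sketch: a first-order error-feedback quantizer with subtractive dither. The step size, dither, and single-tap feedback filter are illustrative assumptions, not the paper's optimal infinite-order minimum-phase design; the point is only that the quantization error at the output is shaped by (1 - z^-1).

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_sigma_quantize(x, step=0.25):
    # First-order error-feedback (Delta-Sigma) loop: the quantization error of
    # the previous sample is subtracted from the current input, so the error
    # seen at the output is shaped by (1 - z^-1).
    y = np.empty_like(x)
    e_prev = 0.0
    for n in range(len(x)):
        u = x[n] - e_prev                         # noise-shaping feedback
        d = rng.uniform(-step / 2, step / 2)      # subtractive dither
        y[n] = step * np.round((u + d) / step) - d
        e_prev = y[n] - u                         # quantization error q[n]
    return y

x = rng.standard_normal(4096) * 0.1
y = delta_sigma_quantize(x)
err = y - x                                       # equals q[n] - q[n-1]
E = np.abs(np.fft.rfft(err)) ** 2                 # error power spectrum
half = len(E) // 2
```

Because the dithered error q[n] is white, the output error q[n] - q[n-1] has most of its energy at high frequencies, which is what lets the side decoders (which see a lowpass-filtered signal in the MD interpretation) achieve lower distortion.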
We present an end-to-end image compression system based on compressive sensing. The presented system integrates the conventional scheme of compressive sampling (on the entire image) and reconstruction with quantization and entropy coding. The compression performance, in terms of decoded image quality versus data rate, is shown to be comparable with JPEG and significantly better at the low rate range. We study the parameters that influence the system performance, including (i) the choice of sensing matrix, (ii) the trade-off between quantization and compression ratio, and (iii) the reconstruction algorithms. We propose an effective method to select, among all possible combinations of quantization step and compression ratio, the ones that yield the near-best quality at any given bit rate. Furthermore, our proposed image compression system can be directly used in the compressive sensing camera, e.g., the single pixel camera, to construct a hardware compressive sampling system.
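The sensing-quantization-reconstruction pipeline can be sketched on a 1-D sparse signal. The sizes, the Gaussian sensing matrix, the uniform quantizer, and the ISTA solver are all illustrative stand-ins (the paper studies several matrices and reconstruction algorithms); the quantization step is the rate/quality knob the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: n-dim signal with k nonzeros, m < n measurements.
n, m, k = 128, 64, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x                                      # compressive measurements

step = 0.05                                    # quantization step (rate knob)
y_q = step * np.round(y / step)                # uniform scalar quantization

# ISTA reconstruction (a stand-in for the paper's solvers):
# minimize 0.5 * ||A z - y_q||^2 + lam * ||z||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of gradient
z = np.zeros(n)
for _ in range(500):
    g = z - (A.T @ (A @ z - y_q)) / L          # gradient step
    z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

rel_err = np.linalg.norm(z - x) / np.linalg.norm(x)
```

Coarsening `step` reduces the bit rate of the quantized measurements but adds quantization noise that the reconstruction must absorb; the paper's contribution includes choosing the (step, compression ratio) pairs that trace the best rate-quality frontier.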
The Lempel-Ziv-Welch (LZW) technique for text compression has been successfully adapted to lossless image compression, as in GIF. Recently, a new class of text compression based on the Burrows-Wheeler Transform (BWT) has been developed, which gives promising results for text compression. Here, we propose a sub-block interchange lossless compression method that belongs to this block-sorting class. Our compression results outperform GIF in compression ratio and BWT in compression time when tested on 512x512-pixel 8-bit grey-scale images. A comparison of compression ratios and times with GIF, BWT, and other popular LZ-based compression methods is discussed.
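The block-sorting transform underlying this class of methods can be sketched in a few lines. This is the textbook BWT with a '$' sentinel (assumed smaller than all other symbols, which holds for ASCII letters), not the paper's sub-block interchange variant; real implementations use suffix arrays instead of materializing all rotations.

```python
def bwt(s: str) -> str:
    # Burrows-Wheeler Transform: sort all cyclic rotations of s and take the
    # last column. Groups similar contexts together, making the output highly
    # compressible by a simple back-end coder.
    assert s.endswith("$") and s.count("$") == 1
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def inverse_bwt(last: str) -> str:
    # Naive inversion: repeatedly prepend the last column and re-sort; after
    # len(last) rounds the table holds all sorted rotations.
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(c + row for c, row in zip(last, table))
    return next(row for row in table if row.endswith("$"))

# bwt("banana$") -> "annb$aa"
```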
A new neural network architecture is proposed for spatial domain image vector quantization (VQ). The proposed model has a multiple shell structure consisting of binary hypercube feature maps of various dimensions, which are extended forms of Kohonen's self-organizing feature maps (SOFMs). It is trained so that each shell contains similar-feature vectors. A partial search scheme using the neighborhood relationship of hypercube feature maps can reduce the computational complexity drastically with marginal coding efficiency degradation. This feature is especially suitable for vector quantization of large blocks or high dimensions. The proposed scheme can also provide edge-preserving VQ by increasing the number of shells, because shells far from the origin are trained to contain edge block features.
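The partial search idea can be sketched as follows. The indexing of codewords by binary hypercube nodes and the Hamming-distance-1 neighborhood are an illustrative reading of the abstract; the sizes and random codebook are made up.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

dim_bits, vec_dim = 4, 8                 # 2^4 = 16 codewords on a 4-cube
nodes = list(itertools.product((0, 1), repeat=dim_bits))
codebook = {node: rng.standard_normal(vec_dim) for node in nodes}

def full_search(x):
    # Exhaustive search: 2**dim_bits distortion evaluations.
    return min(codebook, key=lambda n: np.sum((codebook[n] - x) ** 2))

def partial_search(x, prev):
    # Search only the previous winner and its Hamming-distance-1 neighbours,
    # exploiting the hypercube neighbourhood structure: dim_bits + 1
    # distortion evaluations instead of 2**dim_bits.
    candidates = [prev] + [
        tuple(b ^ (i == j) for j, b in enumerate(prev)) for i in range(dim_bits)
    ]
    return min(candidates, key=lambda n: np.sum((codebook[n] - x) ** 2))
```

Because SOFM training places similar codewords at neighbouring nodes, restricting the search to neighbours of the previous block's winner loses little coding efficiency while the cost drops from exponential to linear in the map dimension.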
For efficient coding of bilevel sources with some dominant symbols often found in classification label maps of hyperspectral images, we proposed a novel biased run-length (BRL) coding method, which codes the most probable symbols separately from other symbols. To determine the conditions in which the BRL coding method would be effective, we conducted an analysis of the method using statistical models. We first analyzed the effect of 2-D blocking of pixels, which were assumed to have generalized Gaussian distributions. The analysis showed that the resulting symbol blocks tended to have lower entropies than the original source without symbol blocking. We then analyzed the BRL coding method applied on the sequence of block symbols characterized by a first-order Markov model. Information-theoretic analysis showed that the BRL coding method tended to generate codewords that have lower entropies than the conventional run-length coding method. Furthermore, numerical simulations on lossless compression of actual data showed improvement of the state of the art. Specifically, end-to-end implementation integrating symbol blocking, BRL, and Huffman coding achieved up to 4.3% higher compression than the JBIG2 standard method and up to 3.2% higher compression than the conventional run-length coding method on classification label maps of the widely used "Indian Pines" dataset.
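The core idea of coding the most probable symbol separately can be sketched as follows. This is an illustrative stand-in for the paper's BRL scheme (the token names are made up, and the real method pairs this with 2-D symbol blocking and Huffman coding of the tokens).

```python
def brl_encode(symbols, mps):
    # Biased run-length sketch: runs of the most probable symbol (mps) become
    # ('R', run_length) tokens; every other symbol is emitted literally as
    # ('L', symbol). Long mps runs thus collapse into single tokens.
    out, run = [], 0
    for s in symbols:
        if s == mps:
            run += 1
        else:
            if run:
                out.append(("R", run))
                run = 0
            out.append(("L", s))
    if run:
        out.append(("R", run))
    return out

def brl_decode(tokens, mps):
    out = []
    for kind, v in tokens:
        out.extend([mps] * v if kind == "R" else [v])
    return out
```

For a classification label map dominated by one class, the token stream is shorter and more skewed than a conventional run-length stream that must emit (run, symbol) pairs for every symbol, which is consistent with the lower entropies the analysis reports.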
Sensors play an integral part in the technologically advanced real world. Wireless sensors are powered by batteries with limited capacity, so energy efficiency is one of the major issues with wireless sensors. Many techniques have been proposed to improve sensor efficiency. This paper discusses improving the energy efficiency of sensors through data compression. A sequence statistical code based data compression algorithm is proposed to improve the energy efficiency of sensors. SDC and FOST codes are used in this algorithm to achieve a better compression ratio. The simulation results were compared with arithmetic data compression techniques. The computation process of the proposed algorithm is much simpler than that of arithmetic data compression techniques.
In the AVS-P2 video compression standard, as in MPEG-2, entropy coding first serially assembles the two-dimensional coefficients of each block into a sequence of (Run, Level) combinations. Such a serial run-length method is usually undesirable for a hardware accelerator, so this paper proposes an efficient parallel run-length coding algorithm that can determine the (Run, Level) combinations for one row of coefficients from a block in one clock cycle. In addition, a level-based multiple-VLC-table switching mechanism (context-based VLC) is introduced in the AVS-P2 entropy coding module to track the large variation in the probability distribution of (Run, Level) combinations. As a result, table selection for coding the current Level depends on the previously coded coefficients. We therefore propose a parallel table-lookup method that can select the tables for one row of coefficients from a block in one clock cycle. On the other hand, at the rate-distortion optimization (RDO) stage, the calculation of the rate term only requires the number of bits for each coded symbol, not its concrete value. Consequently, in the hardware design, the lookup table used in pre-coding can be mapped into a series of logic operations, saving considerable hardware memory. For the actual entropy coding, we only need to replace the pre-coding logic operations with the actual 2D-VLC tables. For our proposed hardware accelerator of the AVS entropy coder, simulation and synthesis results demonstrate that both the computational complexity and the memory requirements are reduced.
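The per-row (Run, Level) extraction can be expressed as one vectorized pass, which is the software analogue of what the proposed hardware computes in a single clock cycle. NumPy vector operations stand in for the combinational logic; trailing-zero/end-of-block signalling of a real coder is omitted.

```python
import numpy as np

def run_level_row(row):
    # Derive all (Run, Level) pairs for one row of coefficients at once:
    # for each nonzero level, the run is the count of zeros since the
    # previous nonzero coefficient.
    row = np.asarray(row)
    nz = np.flatnonzero(row)                   # positions of nonzero levels
    prev = np.concatenate(([-1], nz[:-1]))     # previous nonzero position
    runs = nz - prev - 1                       # zeros between nonzeros
    return list(zip(runs.tolist(), row[nz].tolist()))

# run_level_row([0, 0, 5, 0, 3, 0, 0, -2]) -> [(2, 5), (1, 3), (2, -2)]
```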
Multispectral images have numerous features and a wide range of applications. However, traditional image compression methods, such as JPEG2000 and 3D-SPIHT, do not make effective use of spectral information. We propose a deep compression framework based on interspectral prediction to take full advantage of spectral correlation when using temporal correlation for interframe prediction in video compression. First, two-dimensional and three-dimensional convolutions were used to obtain spatial and spectral information for predicting the original image. Then, we applied a residual neural network to compress the residual information of the image. Subsequently, a decoder was employed to reconstruct the multispectral image based on the compressed image and residual information. All components were jointly trained by a single loss function that considered the tradeoff between the compression bit rate and decoded image quality. The experimental results showed that our proposed method outperformed other traditional compression algorithms, including JPEG2000, 3D-SPIHT, and PCA+JPEG2000, in terms of peak signal-to-noise ratio and spectral angle and is equivalent to or even better than some image compression algorithms based on deep neural networks. (C) 2022 Society of Photo-Optical Instrumentation Engineers (SPIE)
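The interspectral-prediction idea can be illustrated with a deliberately simplified stand-in: predicting each band from the previous one with a least-squares gain/offset and keeping only the residual, which is the quantity the paper's residual network then compresses. The toy cube, its correlation structure, and the linear predictor (in place of the learned 2-D/3-D convolutional predictor) are all assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "multispectral" cube: 6 correlated bands of 32x32 pixels.
base = rng.standard_normal((32, 32))
bands = np.stack([0.9 * base + 0.1 * rng.standard_normal((32, 32))
                  for _ in range(6)])

def predict_residuals(cube):
    # Predict each band from the previous one by a least-squares gain/offset
    # fit, keeping the first band intact and the prediction residuals for the
    # rest. Strong spectral correlation makes the residuals low-energy.
    residuals = [cube[0]]
    for i in range(1, len(cube)):
        ref = cube[i - 1].ravel()
        tgt = cube[i].ravel()
        gain, offset = np.polyfit(ref, tgt, 1)   # tgt ~ gain * ref + offset
        residuals.append((tgt - (gain * ref + offset)).reshape(cube[i].shape))
    return np.stack(residuals)

res = predict_residuals(bands)
```

The residual bands carry far less energy than the originals, which is exactly the redundancy a spectral predictor removes before the residual coder spends bits; the learned predictor in the paper plays this role with far more modeling power.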
Buffer underflow and overflow problems associated with entropy coding are completely eliminated by effectively imposing reflecting walls at the buffer endpoints. Synchronous operation of the AECQ (adaptive entropy-coded quantizer) encoder and decoder is examined in detail, and it is shown that synchronous operation is easily achieved without side information. A method is developed to explicitly solve for the buffer-state probability distribution and the resulting average distortion when memoryless buffer-state feedback is used as well as when the source is stationary and memoryless. This method is then used as a tool in the design of low-distortion AECQ systems, with particular attention given to developing source scale-invariant distortion performance. It is shown that the introduction of reflecting buffer walls in a properly designed AECQ system results in a very small rate-distortion performance penalty and that the resulting AECQ system can be an extremely simple and effective solution to the stationary memoryless source-coding problem for a wide range of source types. Operation with nonstationary sources is also examined.
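The reflecting-wall mechanism and memoryless buffer-state feedback can be sketched as a simple simulation. The buffer size, thresholds, and per-sample bit counts are made-up stand-ins for entropy-coded codeword lengths; the point is only that clipping the buffer level at its endpoints eliminates underflow and overflow by construction.

```python
import numpy as np

rng = np.random.default_rng(4)

BUF_MAX = 1000          # buffer capacity in bits (illustrative)
CHANNEL_RATE = 4        # bits drained per source sample

def simulate_aecq(n_samples=20000):
    # Buffer-state feedback: the current buffer level selects the quantizer
    # (coarse near full, fine near empty), and clipping at 0 / BUF_MAX plays
    # the role of the reflecting walls.
    buf, levels = BUF_MAX // 2, []
    for _ in range(n_samples):
        bits = 2 if buf > 0.75 * BUF_MAX else 6 if buf < 0.25 * BUF_MAX else 4
        bits += int(rng.integers(-1, 2))       # variable-length fluctuation
        buf = min(max(buf + bits - CHANNEL_RATE, 0), BUF_MAX)  # reflecting walls
        levels.append(buf)
    return np.array(levels)

levels = simulate_aecq()
```

Since the feedback rule depends only on the buffer state, and the decoder can track that state from the received bitstream, encoder and decoder stay synchronized without side information, as the abstract notes.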
An entropy-constrained quantizer Q is optimal if it minimizes the expected distortion D(Q) subject to a constraint on the output entropy H(Q). In this correspondence, we use the Lagrangian formulation to show the existence and study the structure of optimal entropy-constrained quantizers that achieve a point on the lower convex hull of the operational distortion-rate function D_h(R) = inf_Q {D(Q) : H(Q) ≤ R}. In general, an optimal entropy-constrained quantizer may have a countably infinite number of codewords. Our main results show that if the tail of the source distribution is sufficiently light (resp., heavy) with respect to the distortion measure, the Lagrangian-optimal entropy-constrained quantizer has a finite (resp., infinite) number of codewords. In particular, for the squared error distortion measure, if the tail of the source distribution is lighter than the tail of a Gaussian distribution, then the Lagrangian-optimal quantizer has only a finite number of codewords, while if the tail is heavier than that of the Gaussian, the Lagrangian-optimal quantizer has an infinite number of codewords.
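The Lagrangian encoding rule at the heart of this formulation can be sketched directly: each sample is mapped to the codeword minimizing d(x, c) + λ·len(c). The codebook and codeword lengths below are illustrative inputs; in an optimal design the lengths would be the ideal values -log2 of the codeword probabilities.

```python
import numpy as np

def ec_quantize(x, codebook, lengths, lam):
    # Lagrangian entropy-constrained encoding with squared-error distortion:
    # pick, for each sample, the codeword index minimizing
    #     (x - c_i)^2 + lam * length_i.
    x = np.atleast_1d(np.asarray(x, dtype=float))
    cost = (x[:, None] - codebook[None, :]) ** 2 + lam * lengths[None, :]
    return np.argmin(cost, axis=1)

codebook = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
lengths = np.array([4.0, 3.0, 1.0, 3.0, 4.0])   # short code for the likely cell
```

With λ = 0 this reduces to nearest-neighbor quantization; as λ grows, samples are pulled toward cheap (short) codewords, trading distortion for rate. The finite-versus-infinite codeword results concern how many codewords such a rule actually uses when the codebook is allowed to be countably infinite.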