ISBN (print): 9781595936387
In this paper we consider the problem of transmitting quantized data while performing an average consensus algorithm. Average consensus algorithms are protocols that compute the average of all sensor measurements via near-neighbor communications. The main motivation for our work is the observation that consensus algorithms are a prime example of network communication in which the correlation between the exchanged data increases as the computation proceeds. Hence, previously exchanged data and current side information can be exploited to significantly reduce the quantization bit rate needed for a given precision. We analyze a network whose topology is a random geometric graph and whose links are assumed reliable at a constant bit rate. Numerically, we show that increasing the number of consensus iterations does not increase the error variance. We thus conclude that the noisy recursions still reach consensus when the data correlation is exploited in the message source encoders and decoders. We briefly state theoretical results that parallel our numerical experiments.
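As an illustration of the idea of exploiting the growing correlation between iterates, here is a minimal Python sketch (not the paper's coder): average consensus on a random geometric graph where each node differentially quantizes what it transmits. The graph size, quantizer step, and update gain eps are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_geometric_graph(n=50, radius=0.3):
    """Symmetric adjacency matrix of a random geometric graph on the unit square."""
    pts = rng.random((n, 2))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return (d < radius) & ~np.eye(n, dtype=bool)

def quantize(x, step):
    """Uniform quantizer; a fixed step stands in for a fixed per-link bit rate."""
    return step * np.round(x / step)

def quantized_consensus(A, x0, iters=200, eps=0.05, step=0.05):
    x = x0.astype(float)
    last_sent = np.zeros_like(x)              # value each node transmitted previously
    for _ in range(iters):
        # Differential quantization: only the innovation w.r.t. the last
        # transmitted value is encoded, so the required quantizer range (and
        # hence bit rate) shrinks as the iterates become more correlated.
        sent = last_sent + quantize(x - last_sent, step)
        last_sent = sent
        # Consensus update on the quantized values; with a symmetric A this
        # preserves the network average exactly.
        x = x + eps * (A @ sent - A.sum(axis=1) * sent)
    return x

A = random_geometric_graph()                  # assumed connected for this toy run
x0 = rng.normal(size=A.shape[0])
x = quantized_consensus(A, x0)
print(x0.mean(), x.std())                     # spread around the preserved average shrinks
```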
ISBN (print): 9781424414369
We investigate the coding of multiview images obtained from a set of multiple cameras. To exploit the inter-view correlation, two view-prediction tools have been implemented and used in parallel: a block-based motion compensation scheme and a Depth Image Based Rendering (DIBR) technique. Whereas DIBR relies on an accurate depth image, the block-based motion-compensation scheme can be performed without any geometry information. Our encoder adaptively selects the most appropriate prediction scheme using a rate-distortion criterion for optimal prediction-mode selection. The attractiveness of the algorithm is that it is robust against inaccurately estimated depth images and requires only a single reference camera for fast random access to different views. We present experimental results for several multiview sequences, showing a quality improvement of up to 1.4 dB compared to H.264 compression.
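The per-block mode decision can be summarized with a small sketch. This is a generic Lagrangian rate-distortion selection, not the authors' encoder: the distortion measure (SAD), the lambda value, and the bit counts are assumptions, and the motion-compensated and DIBR predictions are supplied by the caller.

```python
import numpy as np

def sad(block, pred):
    """Distortion term: sum of absolute differences against the prediction."""
    return float(np.abs(block.astype(np.int32) - pred.astype(np.int32)).sum())

def choose_mode(block, mc_pred, mc_bits, dibr_pred, dibr_bits, lam=10.0):
    """Pick the predictor with the smaller Lagrangian cost J = D + lam * R."""
    j_mc = sad(block, mc_pred) + lam * mc_bits
    j_dibr = sad(block, dibr_pred) + lam * dibr_bits
    return ("MC", mc_pred) if j_mc <= j_dibr else ("DIBR", dibr_pred)
```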
ISBN (print): 9789728865726
This paper proposes a new compression algorithm for dynamic 3D meshes. In such a sequence of meshes, neighboring vertices tend to behave similarly, and the dependency between their locations in two successive frames is very strong; this can be efficiently exploited using a combination of predictive and DCT coders (PDCT). Our strategy gathers mesh vertices with similar motion into clusters, establishes a local coordinate frame (LCF) for each cluster, and encodes each cluster separately, frame by frame. The vertices of each cluster vary little over time relative to the LCF, so the location of each new vertex is well predicted from its location in the previous frame relative to the LCF of its cluster. The differences between the original and the predicted local coordinates are then transformed into the frequency domain using the DCT. The resulting DCT coefficients are quantized and compressed with entropy coding. The original sequence of meshes can be reconstructed from only a few non-zero DCT coefficients without significant loss in visual quality. Experimental results show that our strategy outperforms or comes close to other coders.
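A minimal sketch of the per-cluster predict-then-DCT pipeline, assuming for simplicity that the local coordinate frame is just the cluster centroid (the paper constructs a full LCF) and leaving the quantizer step and entropy coder abstract.

```python
import numpy as np
from scipy.fft import dct, idct

def encode_cluster_frame(prev_xyz, curr_xyz, step=0.01):
    """prev_xyz, curr_xyz: (V, 3) positions of one cluster's vertices in two frames.
    Returns quantized DCT symbols (for the entropy coder) plus the cluster centroid,
    which is sent as side information."""
    prev_local = prev_xyz - prev_xyz.mean(axis=0)   # coordinates relative to the LCF
    curr_local = curr_xyz - curr_xyz.mean(axis=0)
    residual = curr_local - prev_local              # temporal prediction error
    coeffs = dct(residual, axis=0, norm='ortho')    # decorrelate along the vertex axis
    return np.round(coeffs / step).astype(np.int32), curr_xyz.mean(axis=0)

def decode_cluster_frame(prev_xyz, symbols, centroid, step=0.01):
    """In a real coder prev_xyz would be the previously reconstructed cluster."""
    prev_local = prev_xyz - prev_xyz.mean(axis=0)
    residual = idct(symbols * step, axis=0, norm='ortho')
    return prev_local + residual + centroid         # back to world coordinates
```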
ISBN (print): 0780385543
Theoretical analysis of differential predictive coding (DPC) has almost exclusively focused on scalar quantizers and the high-rate regime for tractability reasons. As a result, the role of noncausal decoding in improving quality has been largely ignored in the literature. In this work we conduct a rigorous performance analysis of DPC-based schemes under a simple independent, vector-Gaussian, AR-1 source model and large-block (as opposed to high-rate) asymptotics. This analysis reveals that noncausal decoding can offer a significant relative improvement in mean squared error (by as much as 3 dB) at medium to low rates (0.1-0.5 bits per sample) for sources with strong temporal correlation. Furthermore, most of this relative improvement can be attained with a modest decoder latency. At very high and very low rates, the gains are negligible.
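For reference, the AR-1 (Gauss-Markov) source model and the causal versus noncausal decoding rules the abstract refers to can be written as follows; the notation here is ours, not the paper's.

```latex
% AR-1 source with temporal correlation rho and stationary variance sigma^2:
x_n = \rho\, x_{n-1} + z_n, \qquad z_n \sim \mathcal{N}\!\bigl(0,\,(1-\rho^2)\sigma^2 I\bigr)

% DPC quantizes the prediction residual e_n = x_n - \rho\,\hat{x}_{n-1}.
% Causal decoding uses only past and present quantized residuals; noncausal
% (smoothed) decoding uses the whole block, which is where the reported
% medium/low-rate MSE gain comes from:
\hat{x}_n^{\mathrm{causal}} = \mathbb{E}\!\left[x_n \mid \hat{e}_1,\dots,\hat{e}_n\right],
\qquad
\hat{x}_n^{\mathrm{noncausal}} = \mathbb{E}\!\left[x_n \mid \hat{e}_1,\dots,\hat{e}_N\right]
```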
Based on the relationships among the peak and valley points of the probability density function (p.d.f.) of a stochastic process (whose p.d.f. may be multimodal), the drift coefficient of its associated diffusion process, the 'shift back to center' property of the Markov chain, and the state-transition values of the chain, this paper introduces an algorithm for constructing the approximating Markov-chain model of an Ito stochastic differential equation (AMMC). Simulation results demonstrate that the prediction-error variance of the AMMC is not only far smaller than that of the Burg lattice predictor but also nearly constant. These properties make the algorithm well suited to prediction and predictive coding.
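For context, a standard way to build an approximating Markov chain for a one-dimensional Ito SDE is the locally consistent birth-death construction sketched below. This is not the paper's AMMC (which is built from the p.d.f.'s peaks and valleys and the 'shift back to center' property), only a generic illustration; the double-well drift in the example is chosen to produce the kind of multimodal p.d.f. the abstract mentions.

```python
import numpy as np

def mc_step(x, b, sigma, h, rng):
    """One transition of a chain on the grid {..., x-h, x, x+h, ...} whose local
    mean and variance match the SDE drift b(x) and diffusion sigma(x)."""
    drift, diff2 = b(x), sigma(x) ** 2
    q = diff2 + h * abs(drift)                      # normalizer; time step dt = h**2 / q
    p_up = (diff2 / 2.0 + h * max(drift, 0.0)) / q  # probability of moving to x + h
    return x + h if rng.random() < p_up else x - h

# Example: a double-well drift b(x) = x - x^3 gives a bimodal stationary p.d.f.
rng = np.random.default_rng(1)
x, h, samples = 0.0, 0.05, []
for _ in range(100_000):
    x = mc_step(x, b=lambda y: y - y**3, sigma=lambda y: 0.6, h=h, rng=rng)
    samples.append(x)
print(np.histogram(samples, bins=40)[0])  # empirical histogram approximates the bimodal p.d.f.
```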
This paper presents an algorithm for lossy compression of hyperspectral images intended for implementation on field-programmable gate arrays (FPGAs). To greatly reduce the bit rate required to code the images, linear prediction is used between the bands to exploit the large amount of inter-band correlation. The prediction residual is compressed using the set partitioning in hierarchical trees (SPIHT) algorithm. To reduce the complexity of the predictive encoder, the paper proposes a bit-plane-synchronized closed-loop predictor that does not require full decompression of the previous band at the encoder. The new technique achieves almost the same compression ratio as standard closed-loop predictive coding while allowing a simpler on-board implementation.
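A minimal sketch of the inter-band prediction step, assuming a least-squares gain/offset predictor and a crude bit-plane truncation as a stand-in for the bit-plane-synchronized reference; the SPIHT coding of the residual is omitted, and all names are illustrative.

```python
import numpy as np

def predict_band(prev_ref, curr_band):
    """Least-squares linear predictor curr ~ a * prev_ref + b; returns the residual
    that would be handed to the transform coder, plus the predictor coefficients."""
    x = prev_ref.ravel().astype(np.float64)
    y = curr_band.ravel().astype(np.float64)
    a, b = np.polyfit(x, y, 1)
    return curr_band - (a * prev_ref + b), a, b

def bitplane_truncate(band, keep_msbs=6):
    """Crude stand-in for the bit-plane-synchronized reference: keep only the top
    bit planes, so the encoder never needs a fully decompressed previous band."""
    band = band.astype(np.int64)
    shift = max(int(band.max()).bit_length() - keep_msbs, 0)
    return (band >> shift) << shift
```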
An efficient scalable predictive coding method is proposed for the Wyner-Ziv problem, using nested lattice quantization followed by multi-layer Slepian-Wolf coders (SWC) with layered side information. The proposed coder supports an embedded representation and high coding efficiency by exploiting the high-quality version of the previous frame in the enhancement-layer coding of the current frame. Specifically, the decoder generates the enhancement-layer side information with an estimation approach that takes into account all the information available to the enhancement layer. At the encoder, a practical switching algorithm simplifies the correlation estimation used in the channel-code design by assuming either the current reconstructed base-layer frame or the prior enhancement-layer reconstruction as side information. Experiments based on a DPCM model show significant gains in enhancement-layer reconstruction. The paper also discusses the possible adaptation of this approach to practical video compression.
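The base-layer idea can be illustrated with the one-dimensional analogue of nested lattice quantization: transmit only the fine-quantizer index modulo a nesting ratio and let the decoder resolve the ambiguity with its side information. The Slepian-Wolf layers are not modeled, and the step size and nesting ratio below are assumptions.

```python
import numpy as np

def nsq_encode(x, step=0.1, nesting=8):
    """Transmit only the fine-quantizer index modulo the nesting ratio."""
    return int(np.round(x / step)) % nesting

def nsq_decode(label, side_info, step=0.1, nesting=8):
    """Pick the reconstruction in the transmitted coset closest to the side info."""
    k = int(np.round((side_info / step - label) / nesting))
    return (label + k * nesting) * step

x, y = 1.234, 1.21            # source sample and correlated side information
label = nsq_encode(x)         # only log2(8) = 3 bits are sent
print(nsq_decode(label, y))   # ~1.2, i.e. x quantized to the fine step
```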
This paper presents a context-based predictive coding method for lossless compression of video. For this method, we propose a model that estimates the level of activity in the prediction context of a pixel. Activity is measured in terms of slope, which is optimally classified into a small number of slope bins. After finding the slope bins, we propose a least-squares (LS) based method to find the switched predictors associated with the various bins. The set of predictors is found on a frame-by-frame basis, and when it is incorporated into the CALIC framework, the proposed method gives, on average, better compression performance than recently published methods such as LOPT and M-CALIC. The proposed codec has higher coding complexity but much lower decoding complexity, which is necessary for real-time video decoding. The proposed coding method, moreover, has much lower complexity than the LOPT method, whose coding and decoding complexities are of the same high order.
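A minimal sketch of the slope-binned, least-squares switched-predictor idea, assuming an ad hoc causal context, activity measure, and bin edges (the paper optimizes the slope classification); these are illustrative choices, not the authors' design.

```python
import numpy as np

def causal_context(frame, i, j):
    """W, N, NW, NE neighbors of pixel (i, j); assumes i >= 1 and 1 <= j < cols - 1."""
    return np.array([frame[i, j-1], frame[i-1, j], frame[i-1, j-1], frame[i-1, j+1]],
                    dtype=np.float64)

def fit_switched_predictors(frame, bin_edges=(4, 16, 64)):
    """Fit one least-squares predictor per activity (slope) bin on a single frame."""
    frame = frame.astype(np.float64)
    rows, cols = frame.shape
    ctxs, targets, bins = [], [], []
    for i in range(1, rows):
        for j in range(1, cols - 1):
            c = causal_context(frame, i, j)
            slope = np.abs(np.diff(c)).mean()            # crude activity measure
            ctxs.append(c)
            targets.append(frame[i, j])
            bins.append(np.digitize(slope, bin_edges))
    ctxs, targets, bins = map(np.asarray, (ctxs, targets, bins))
    predictors = {}
    for b in np.unique(bins):
        A, y = ctxs[bins == b], targets[bins == b]
        predictors[b] = np.linalg.lstsq(A, y, rcond=None)[0]  # per-bin weights
    return predictors
```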
We propose a linear dimensionality reduction algorithm that selectively preserves task-relevant state data for control problems modeled as Markov decision processes. The algorithm works by alternating value function estimation with basis vector adaptation. The approach is demonstrated on two tasks: a toy task designed to illustrate the key concepts, and a more complex three-dimensional navigation task.
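Under a strong simplifying assumption (value "estimation" reduced to least-squares regression of sampled returns onto the projected features), the alternation can be sketched as below; the names X, W, G, the learning rate, and the gradient-based basis update are ours, not the paper's.

```python
import numpy as np

def alternate(X, G, k=3, iters=50, lr=1e-3, rng=np.random.default_rng(0)):
    """X: (n, d) raw state features; G: (n,) sampled returns used as value targets."""
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, k))              # linear dimensionality reduction
    for _ in range(iters):
        Z = X @ W                                       # task-relevant low-dim state
        theta, *_ = np.linalg.lstsq(Z, G, rcond=None)   # value-function estimation
        err = Z @ theta - G                             # value-fit error
        W -= lr * (X.T @ np.outer(err, theta))          # basis-vector adaptation step
    return W, theta
```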
This paper presents a novel depth-image coding algorithm that concentrates on the special characteristics of depth images: smooth regions delineated by sharp edges. The algorithm models the smooth regions using piecewise-linear functions and the sharp edges using straight lines. To define the area of support for each modeling function, we employ a quadtree decomposition that divides the image into blocks of variable size, each block being approximated by one modeling function containing one or two surfaces. The subdivision of the quadtree and the selection of the type of modeling function are optimized such that a global rate-distortion trade-off is realized. Additionally, we present a predictive coding scheme that improves the coding performance of the quadtree decomposition by exploiting the correlation between blocks of the quadtree. Experimental results show that the described technique improves the quality of compressed depth images by 1.5-4 dB compared to a JPEG-2000 encoder.
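A minimal sketch of the quadtree rate-distortion decision: each block is either approximated by one least-squares plane or split into four children, whichever has the smaller Lagrangian cost. The per-node rate model (a fixed bit budget per plane plus a split flag) is a placeholder assumption, not the paper's coder, and the two-surface mode for blocks containing an edge is omitted.

```python
import numpy as np

def fit_plane(block):
    """Least-squares plane z = a*x + b*y + c; returns the squared approximation error."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coef, *_ = np.linalg.lstsq(A, block.ravel().astype(np.float64), rcond=None)
    return float(((A @ coef - block.ravel()) ** 2).sum())

def quadtree_cost(block, lam=50.0, bits_per_plane=48, min_size=4):
    """Best rate-distortion cost J = D + lam * R for this block: leaf model or split."""
    leaf = fit_plane(block) + lam * bits_per_plane
    h, w = block.shape
    if min(h, w) <= min_size:
        return leaf
    h2, w2 = h // 2, w // 2
    split = sum(quadtree_cost(b, lam, bits_per_plane, min_size)
                for b in (block[:h2, :w2], block[:h2, w2:],
                          block[h2:, :w2], block[h2:, w2:])) + lam * 1  # split flag bit
    return min(leaf, split)
```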