We consider the distributional connection between the lossy compressed representation of a high-dimensional signal X using a random spherical code and the observation of X under additive white Gaussian noise (AWGN). We show that the Wasserstein distance between a bitrate-R compressed version of X and its observation under an AWGN channel of signal-to-noise ratio 2^{2R} - 1 is bounded in the problem dimension. We utilize this fact to connect the risk of an estimator based on the compressed version of X to the risk attained by the same estimator when fed the AWGN-corrupted version of X. We demonstrate the usefulness of this connection by deriving various novel results for inference problems under compression constraints, including minimax estimation, sparse regression, compressed sensing, and universality of linear estimation in remote source coding.
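As a rough illustration of how such a connection can be used, the following Monte Carlo sketch evaluates an estimator's risk on the surrogate AWGN channel of signal-to-noise ratio 2^{2R} - 1 instead of on an actual spherical-code compression of X. The i.i.d. Gaussian signal model and the Wiener (linear MMSE) estimator are illustrative assumptions made here, not choices taken from the paper.

```python
# Minimal Monte Carlo sketch: evaluate an estimator's risk on the surrogate AWGN
# channel of SNR 2^{2R} - 1 in place of a bitrate-R spherical-code compression of X.
# Signal model (i.i.d. Gaussian) and estimator (Wiener shrinkage) are assumptions.
import numpy as np

def awgn_surrogate_risk(rate_R, n=10_000, sigma_x=1.0, trials=50, rng=None):
    rng = np.random.default_rng(rng)
    snr = 2.0 ** (2 * rate_R) - 1.0            # SNR of the surrogate AWGN channel
    sigma_z2 = sigma_x ** 2 / snr              # noise variance matching that SNR
    risks = []
    for _ in range(trials):
        x = rng.normal(0.0, sigma_x, size=n)
        y = x + rng.normal(0.0, np.sqrt(sigma_z2), size=n)
        # Wiener (linear MMSE) estimate of x from the AWGN-corrupted observation
        x_hat = (sigma_x ** 2 / (sigma_x ** 2 + sigma_z2)) * y
        risks.append(np.mean((x_hat - x) ** 2))
    return float(np.mean(risks))

if __name__ == "__main__":
    for R in (0.5, 1.0, 2.0):
        print(f"R = {R}: surrogate risk ≈ {awgn_surrogate_risk(R):.4f}, "
              f"Gaussian D(R) = {2.0 ** (-2 * R):.4f}")
```

Under these Gaussian assumptions the surrogate risk reduces to 2^{-2R} times the signal variance, i.e., the Gaussian distortion-rate function, which makes the sketch a convenient sanity check.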
The Shannon lower bound has been the subject of several important contributions by Berger. This paper surveys Shannon bounds on rate-distortion problems under mean-squared error distortion with a particular emphasis on Berger's techniques. Moreover, as a new result, the Gray-Wyner network is added to the canon of settings for which such bounds are known. In the Shannon bounding technique, elegant lower bounds are expressed in terms of the source entropy power. Moreover, there is often a complementary upper bound that involves the source variance in such a way that the bounds coincide in the special case of Gaussian statistics. Such pairs of bounds are sometimes referred to as Shannon bounds. The present paper puts Berger's work on many aspects of this problem in the context of more recent developments, encompassing indirect and remote source coding, such as the CEO problem originally proposed by Berger, as well as the Gray-Wyner network as a new contribution.
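For reference, the standard scalar form of such a pair of bounds under mean-squared error distortion is sketched below; the notation (h(X) for differential entropy, N(X) for entropy power, sigma^2 for the source variance) is chosen here, and the display is the textbook statement rather than a formula quoted from the survey.

```latex
% Shannon lower bound and the complementary upper bound under MSE distortion
% (standard scalar form; N(X) denotes the entropy power of the source X).
\[
  N(X)\, 2^{-2R} \;\le\; D(R) \;\le\; \sigma^2\, 2^{-2R},
  \qquad
  N(X) \;=\; \frac{1}{2\pi e}\, 2^{2 h(X)} .
\]
% In rate form the lower bound reads R(D) >= h(X) - (1/2) log2(2*pi*e*D).
% Both sides coincide exactly for a Gaussian source, where N(X) = sigma^2.
```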
ISBN: 9781538605790 (print)
We consider the problem of estimating a Gaussian random walk from a lossy compression of its decimated version. Hence, the encoder operates on the decimated random walk, and the decoder estimates the original random walk from its encoded version under a mean squared error (MSE) criterion. It is well known that the minimal distortion in this problem is attained by an estimate-and-compress (EC) source coding strategy, in which the encoder first estimates the original random walk and then compresses this estimate subject to the bit constraint. In this work, we derive a closed-form expression for this minimal distortion as a function of the bitrate and the decimation factor. Next, we consider a compress-and-estimate (CE) source coding scheme, in which the encoder first compresses the decimated sequence subject to an MSE criterion (with respect to the decimated sequence), and the original random walk is estimated only at the decoder. We evaluate the distortion under CE in closed form and show that there exists a non-zero gap between the distortion under the two schemes. This difference in performance illustrates the importance of having the decimation factor at the encoder.
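The sketch below covers only the estimation-from-decimation ingredient shared by both schemes: recovering a Gaussian random walk from its decimated samples by conditional-mean (Brownian-bridge) interpolation, which EC performs at the encoder and CE at the decoder. The rate-R compression step is not modeled, and the decimation factor K, the increment variance, and the sequence lengths are illustrative choices, not values from the paper.

```python
# Minimal Monte Carlo sketch of estimating a Gaussian random walk from its
# decimated samples via conditional-mean (Brownian-bridge) interpolation.
# Compression of the decimated sequence is NOT modeled; all parameters are
# illustrative assumptions.
import numpy as np

def interpolation_mse(K=4, sigma=1.0, n_blocks=500, trials=400, rng=None):
    rng = np.random.default_rng(rng)
    n = K * n_blocks
    err2 = np.zeros(n)
    for _ in range(trials):
        w = np.cumsum(rng.normal(0.0, sigma, size=n))   # Gaussian random walk from 0
        samples = w[K - 1::K]                            # decimated observations
        w_hat = np.empty(n)
        prev = 0.0                                       # the walk starts at zero
        for j, cur in enumerate(samples):
            for i in range(1, K + 1):                    # bridge between adjacent samples
                w_hat[j * K + i - 1] = prev + (i / K) * (cur - prev)
            prev = cur
        err2 += (w_hat - w) ** 2
    return err2 / trials

if __name__ == "__main__":
    K, sigma = 4, 1.0
    mse = interpolation_mse(K, sigma)
    # Within each block, the point at offset i past the previous retained sample
    # should have MSE sigma^2 * i * (K - i) / K (zero at the retained samples).
    for i in range(1, K + 1):
        print(f"offset {i}: empirical {mse[i - 1::K].mean():.3f}, "
              f"theory {sigma**2 * i * (K - i) / K:.3f}")
```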
ISBN: 9781509018062 (print)
We consider a multiterminal source coding problem in which a random source signal is estimated from encoded versions of multiple noisy observations. Each encoded version, however, is compressed so as to minimize a local distortion measure, defined only with respect to the distribution of the corresponding noisy observation. The original source is then estimated from these compressed noisy observations. We denote the minimal distortion under this coding scheme as the compress-and-estimate distortion-rate function (CE-DRF). We derive a single-letter expression for the CE-DRF in the case of an i.i.d. source. We evaluate this expression for the case of a Gaussian source observed through multiple parallel AWGN channels under quadratic distortion, and for the case of a non-uniform binary i.i.d. source observed through multiple binary symmetric channels under Hamming distortion. For the case of a Gaussian source, we compare the performance of centralized encoding versus that of distributed encoding. In the centralized encoding scenario, when the code rates are sufficiently small, there is no loss of performance compared to the indirect source coding distortion-rate function, whereas distributed encoding achieves distortion strictly larger than that of the optimal multiterminal source coding scheme. For the case of a binary source, we show that even with a single observation, the CE-DRF is strictly larger than that of indirect source coding.
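As a small numerical companion for the Gaussian part of this comparison, the sketch below treats a single-observation toy case: the closed-form indirect (remote) distortion-rate function alongside a Monte Carlo surrogate of compress-and-estimate in which the rate-R MSE-optimal code for the observation Y is replaced by its Gaussian forward test channel. The parameter values and the test-channel surrogate are assumptions made here for illustration, not the paper's evaluation; in this jointly Gaussian single-observation toy case the two printed values agree.

```python
# Minimal numerical sketch: indirect (remote) DRF for a Gaussian source in AWGN,
# next to a Monte Carlo surrogate of compress-and-estimate where the rate-R code
# for Y is replaced by its Gaussian forward test channel. Parameters are assumptions.
import numpy as np

def indirect_drf(rate_R, var_x, var_n):
    """Remote DRF for X ~ N(0, var_x) observed as Y = X + N, N ~ N(0, var_n)."""
    mmse = var_x * var_n / (var_x + var_n)       # estimation error at infinite rate
    return mmse + (var_x - mmse) * 2.0 ** (-2 * rate_R)

def ce_surrogate_mse(rate_R, var_x=1.0, var_n=0.5, n=200_000, rng=None):
    rng = np.random.default_rng(rng)
    var_y = var_x + var_n
    x = rng.normal(0.0, np.sqrt(var_x), size=n)
    y = x + rng.normal(0.0, np.sqrt(var_n), size=n)
    # Gaussian forward test channel for rate-R MSE compression of Y:
    # Y_hat = a*Y + V, a = 1 - 2^{-2R}, V ~ N(0, a*(1-a)*var_y),
    # so that E[(Y - Y_hat)^2] = var_y * 2^{-2R}.
    a = 1.0 - 2.0 ** (-2 * rate_R)
    y_hat = a * y + rng.normal(0.0, np.sqrt(a * (1 - a) * var_y), size=n)
    x_hat = (var_x / var_y) * y_hat              # decoder-side estimate E[X | Y_hat]
    return float(np.mean((x_hat - x) ** 2))

if __name__ == "__main__":
    for R in (0.5, 1.0, 2.0):
        print(f"R = {R}: indirect DRF = {indirect_drf(R, 1.0, 0.5):.4f}, "
              f"CE surrogate ≈ {ce_surrogate_mse(R):.4f}")
```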