This study deals with three-level scalar quantisation of a Gaussian source, followed by a Huffman encoder, a configuration especially suitable for signal compression purposes. Both the variance-matched and the variance-mismatched versions of the quantiser are considered. Simple approximate closed-form expressions for performance evaluation in terms of signal-to-quantisation-noise ratio (SQNR) and bit rate are derived using the corresponding Q-function approximations. The accuracy of the derived formulas is tested using the relative error as the performance measure. It is shown that the derived formulas, for the variance-matched case (at a reference variance) and the variance-mismatched case (over a wide range of input signal variances), provide higher accuracy than baselines built on other available Q-function approximations or on existing approximate formulas for non-uniform scalar quantisation of a Gaussian source.
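As a rough illustration of the setup described above (not the paper's closed-form derivation), the following Python sketch simulates a symmetric three-level quantiser applied to a zero-mean Gaussian source and estimates the SQNR and the Huffman bit rate numerically. The threshold t and representation level y used here are illustrative assumptions, not the optimised parameters from the study.

import numpy as np
from scipy.stats import norm

def three_level_quantiser(x, t, y):
    """Symmetric three-level quantiser with decision threshold t > 0
    and representation levels {-y, 0, +y}."""
    q = np.zeros_like(x)
    q[x > t] = y
    q[x < -t] = -y
    return q

def simulate(sigma=1.0, t=0.6, y=1.2, n=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma, n)            # Gaussian source samples
    q = three_level_quantiser(x, t, y)

    # Signal-to-quantisation-noise ratio (in dB), estimated empirically.
    sqnr_db = 10 * np.log10(np.mean(x**2) / np.mean((x - q)**2))

    # Output symbol probabilities: the inner cell and the two tails,
    # where the tail probability is the Q-function Q(t/sigma).
    p_tail = 1.0 - norm.cdf(t / sigma)
    p_inner = 1.0 - 2.0 * p_tail

    # A Huffman code for three symbols always assigns 1 bit to the most
    # probable symbol and 2 bits to each of the other two.
    probs = sorted([p_inner, p_tail, p_tail], reverse=True)
    rate = probs[0] * 1 + (probs[1] + probs[2]) * 2   # bits per sample

    return sqnr_db, rate

print(simulate())

The paper replaces the exact Q-function used above with simple approximations so that the SQNR and rate admit closed-form expressions; the simulation only provides a numerical reference point.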
When one of the random summands is Gaussian, we sharpen the entropy power inequality (EPI) in terms of the strong data processing function for Gaussian channels. Among other consequences, this 'strong' EPI generalizes the vector extension of Costa's EPI to non-Gaussian channels in a precise sense. This leads to a new reverse EPI and, as a corollary, sharpens Stam's uncertainty principle relating entropy power and Fisher information (or, equivalently, Gross' logarithmic Sobolev inequality). Applications to network information theory are also given, including a short self-contained proof of the rate region for the two-encoder quadratic Gaussian source coding problem and a new outer bound for the one-sided Gaussian interference channel.
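For reference, the classical results that this abstract refers to (stated here in their standard forms, not the sharpened versions derived in the paper) can be written with the entropy power $N(X) = \frac{1}{2\pi e}\, e^{\frac{2}{n} h(X)}$ of a random vector $X \in \mathbb{R}^n$:
\[
N(X+Y) \;\ge\; N(X) + N(Y) \quad (X \text{ independent of } Y),
\qquad
\frac{d^2}{dt^2}\, N\!\left(X + \sqrt{t}\,Z\right) \;\le\; 0 \quad (Z \sim \mathcal{N}(0, I_n)),
\]
\[
N(X)\, J(X) \;\ge\; n,
\]
where $J(X)$ is the Fisher information. The first display is the EPI, the second is Costa's concavity of entropy power, and the third is Stam's inequality; the paper strengthens the EPI when one summand is Gaussian and, as a corollary, strengthens Stam's inequality.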
We study a generalization of the successive refinement coding problem called the sequential coding of correlated sources. In successive refinement source coding, one first describes the given source using a few bits of information and then improves the description of the same source when more information is supplied. Sequential coding differs from successive refinement in that the second-stage encoding describes a correlated source rather than improving the description of the same source. We introduce the notion of a coupled fidelity criterion to quantify perceived distortion in certain applications of sequential coding. We characterize the achievable rate region for this source coding problem and show that it reduces to the successive refinement rate region when the two sources are the same. We then consider the specific case of a pair of correlated Gaussian sources as an example, giving an explicit characterization that reveals an interesting generalization of a property of successive refinement of a single Gaussian source.
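As background for the single-source case mentioned at the end (a standard known result, not the coupled-fidelity region characterised in the paper), a memoryless Gaussian source with variance $\sigma^2$ under squared-error distortion is successively refinable: for target distortions $D_2 \le D_1 \le \sigma^2$, the two-stage rates
\[
R_1 \;\ge\; \tfrac{1}{2}\log\frac{\sigma^2}{D_1},
\qquad
R_1 + R_2 \;\ge\; \tfrac{1}{2}\log\frac{\sigma^2}{D_2}
\]
are simultaneously achievable, so refining an earlier coarse description costs no extra total rate compared with coding directly at the finer distortion $D_2$. The Gaussian example in the paper generalizes a property of this kind to a pair of correlated sources.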
Caching is a technique that alleviates network load during peak hours by transmitting partial information before a request for any file is made. In a lossy setting of Gaussian databases, we study a single-user model in which good caching strategies minimize the data still needed on average once the user requests a file. The encoder decides on a caching strategy by weighing the benefit from two key parameters: the prior preference for a file and the correlation among the files. With a uniform prior preference but correlated files, caching becomes an application of Wyner's common information and Watanabe's total correlation. We show that this case gives rise to a split: caching Gaussian sources is a non-convex optimization problem unless one spends enough rate to cache all the common information between files. Combining both correlation and user preference, we explicitly characterize the full trade-off when the encoder uses Gaussian codebooks in a database of two files: as the size of the cache increases, the encoder should change strategy and increasingly prioritize user preference over correlation. In this specific case we also address the loss in performance incurred if the encoder has no knowledge of the user's preference and show that this loss is bounded.
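For concreteness, and stated here as standard background rather than as the paper's trade-off characterisation, for a pair of jointly Gaussian sources with correlation coefficient $\rho$, Watanabe's total correlation reduces to the mutual information and Wyner's common information admits the closed form
\[
I(X_1;X_2) \;=\; \tfrac{1}{2}\log\frac{1}{1-\rho^2},
\qquad
C_W(X_1;X_2) \;=\; \min_{X_1 - W - X_2} I(X_1,X_2;W) \;=\; \tfrac{1}{2}\log\frac{1+|\rho|}{1-|\rho|}.
\]
In this reading, spending at least $C_W$ bits per symbol on the cache corresponds to the abstract's condition of caching all the common information between the two files.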