Capacity formulas and random-coding exponents are derived for a generalized family of Gel'fand-Pinsker coding problems. These exponents yield asymptotic upper bounds on the achievable log probability of error. In our model, information is to be reliably transmitted through a noisy channel with finite input and output alphabets and a random state sequence, and the channel is selected by a hypothetical adversary. Partial information about the state sequence is available to the encoder, adversary, and decoder. The design of the transmitter is subject to a cost constraint. Two families of channels are considered: 1) compound discrete memoryless channels (CDMC), and 2) channels with arbitrary memory, subject to an additive cost constraint or, more generally, to a hard constraint on the conditional type of the channel output given the input. The two problems are closely connected. The random-coding exponent is achieved using a stacked binning scheme and a maximum penalized mutual information decoder, which may be thought of as an empirical generalized maximum a posteriori decoder. For channels with arbitrary memory, the random-coding exponents are larger than their CDMC counterparts. Applications of this study include watermarking, data hiding, communication in the presence of partially known interferers, and problems such as broadcast channels, all of which involve the fundamental idea of binning.
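The exponent results build on the single-letter Gel'fand-Pinsker capacity C = max over p(u|s) and x = f(u,s) of [I(U;Y) - I(U;S)]. As a rough numerical illustration (not code from the paper), the sketch below brute-forces this expression for a toy binary example; the binary auxiliary alphabet, the Bernoulli state, the BSC noise level, and the grid resolution are all illustrative assumptions, so the result is only a lower bound on the true capacity of the toy model.

```python
# Toy brute-force evaluation of the Gel'fand-Pinsker expression
# max_{p(u|s), x=f(u,s)} [ I(U;Y) - I(U;S) ]  (illustrative sketch only).
import itertools
import numpy as np

def mutual_information(p_ab):
    """I(A;B) in bits from a joint distribution given as a 2-D array."""
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float(np.sum(p_ab[mask] * np.log2(p_ab[mask] / (p_a @ p_b)[mask])))

def gp_lower_bound_binary(q=0.5, p=0.1, grid=41):
    """State S ~ Bernoulli(q), channel Y = X xor Z with Z ~ Bernoulli(p)."""
    p_s = np.array([1 - q, q])
    p_y_given_x = np.array([[1 - p, p], [p, 1 - p]])      # BSC(p)
    best = 0.0
    # Enumerate all deterministic encoders x = f(u, s).
    for f in itertools.product([0, 1], repeat=4):
        f = np.array(f).reshape(2, 2)                     # f[u, s]
        # Grid search over the auxiliary channel p(u|s).
        for a in np.linspace(0, 1, grid):                 # P(U=1 | S=0)
            for b in np.linspace(0, 1, grid):             # P(U=1 | S=1)
                p_u_given_s = np.array([[1 - a, 1 - b], [a, b]])
                p_us = p_u_given_s * p_s[None, :]         # joint, indexed [u, s]
                i_us = mutual_information(p_us)
                # p(u, y) = sum_s p(u, s) * p(y | x = f(u, s))
                p_uy = np.zeros((2, 2))
                for u in range(2):
                    for s in range(2):
                        p_uy[u] += p_us[u, s] * p_y_given_x[f[u, s]]
                best = max(best, mutual_information(p_uy) - i_us)
    return best

if __name__ == "__main__":
    print(f"Toy Gel'fand-Pinsker rate estimate: {gp_lower_bound_binary():.3f} bits/use")
```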
We explore the information-theoretic duality between source coding with side information at the decoder and channel coding with side information at the encoder. We begin with a mathematical characterization of the functional duality between classical source and channel coding, formulating the precise conditions under which the optimal encoder for one problem is functionally identical to the optimal decoder for the other problem. We then extend this functional duality to the case of coding with side information. By invoking this duality, we are able to generalize the result of Wyner and Ziv [1] on the absence of rate loss for source coding with side information from Gaussian to more general distributions. We consider several examples, covering both discrete- and continuous-valued cases, to illustrate our formulation. For the Gaussian cases of coding with side information, we invoke geometric arguments to provide further insight into their duality. Our geometric treatment inspires the construction and dual use of practical coset codes for a large class of emerging applications of coding with side information, such as distributed sensor networks, watermarking, and information-hiding communication systems.
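The no-rate-loss phenomenon and its channel-coding dual can be seen in two standard Gaussian formulas (a textbook calculation, not code from the paper): the Gaussian Wyner-Ziv rate equals the conditional rate-distortion function with the side information at both ends, and Costa's dirty-paper capacity equals the capacity with the interference known at both ends. The parameter values below are illustrative assumptions.

```python
# Illustrative check of "no rate loss" in the Gaussian case, on both sides
# of the duality (standard formulas, illustrative sketch only).
import math

def wyner_ziv_rate(var_x_given_y, d):
    # Gaussian Wyner-Ziv rate; coincides with the conditional rate-distortion
    # function R_{X|Y}(D) = 0.5 * log2(var(X|Y) / D), hence no rate loss.
    return 0.5 * math.log2(var_x_given_y / d) if d < var_x_given_y else 0.0

def dirty_paper_capacity(p, n):
    # Costa capacity 0.5 * log2(1 + P/N): independent of the interference
    # power, and equal to the capacity with the interference known everywhere.
    return 0.5 * math.log2(1.0 + p / n)

if __name__ == "__main__":
    print(wyner_ziv_rate(var_x_given_y=1.0, d=0.25))  # 1.0 bit/sample
    print(dirty_paper_capacity(p=3.0, n=1.0))         # 1.0 bit/channel use
```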
We consider a decoder with an erasure option and a variable-size list decoder for channels with noncausal side information at the transmitter. First, a universally achievable region of error exponents is offered for decoding with an erasure option, using a parameterized decoder in the spirit of Csiszár and Körner's decoder. Then, the proposed decoding rule is generalized by extending the range of its parameters to allow variable-size list decoding. This extension gives a unified treatment of erasure/list decoding. An achievable region of exponential bounds on the probability of list error and on the average number of incorrect messages on the list is given. Relations to Forney's and to Csiszár and Körner's decoders for discrete memoryless channels are discussed. These results are obtained by studying a random binning code with conditionally constant-composition codewords, proposed by Moulin and Wang, but with a different decoding rule and a modified analysis.
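For intuition about the erasure option, Forney's classical rule decodes a message only when its likelihood dominates the total likelihood of all competitors by a factor e^{nT}, and erases otherwise. The sketch below implements that threshold test for a small random code on a binary symmetric channel; it is an illustration of the threshold idea only, not the universal, side-information-aware decoder analyzed in the paper, and the blocklength, codebook size, crossover probability, and threshold T are assumed values.

```python
# Minimal Forney-style erasure decoder on a BSC (illustrative sketch only):
# decode m iff P(y|x_m) >= e^{nT} * sum_{m' != m} P(y|x_m'), else erase.
import numpy as np

rng = np.random.default_rng(0)
n, num_msgs, p, T = 64, 16, 0.05, 0.5          # blocklength, codebook size, BSC(p), threshold

codebook = rng.integers(0, 2, size=(num_msgs, n))

def likelihoods(y):
    d = np.sum(codebook != y, axis=1)          # Hamming distance to each codeword
    return (p ** d) * ((1 - p) ** (n - d))     # P(y | x_m) for every message m

def decode(y):
    lik = likelihoods(y)
    m = int(np.argmax(lik))
    competitors = lik.sum() - lik[m]
    return m if lik[m] >= np.exp(n * T) * competitors else None   # None = erasure

# Transmit message 0 through the BSC and decode.
x = codebook[0]
y = np.where(rng.random(n) < p, 1 - x, x)
print(decode(y))
```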
The problem of transmitting a Gaussian source with memory to a digital receiver and a linear-analog receiver over an arbitrarily colored, non-degraded Gaussian broadcast channel is studied. The main result of this work is a complete characterization of the set of achievable distortion pairs at the two receivers given a power constraint at the transmitter. Further, a constructive hybrid uncoded-coded scheme, consisting of the cascade of a source coding with side information system and a channel coding with side information system, is shown to achieve the entire power-mean-squared-error (MSE) distortion region associated with the problem. An interesting operating point in this region is one where the digital receiver obtains the classical point-to-point optimal quality and the analog receiver attains the best simultaneously achievable distortion. This problem is motivated by the practical application of the seamless "in-band" digital upgrade of legacy analog transmission systems.
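A much simpler special case helps anchor the quantities involved: for a white Gaussian source sent uncoded over a white Gaussian channel at matched bandwidth, the linear-analog receiver's MMSE already equals the point-to-point Shannon-optimal distortion, which is why an in-band digital upgrade has to be engineered carefully. The sketch below evaluates these two standard formulas; it is an illustrative assumption-laden baseline, not the colored, non-degraded broadcast setting of the paper.

```python
# White-source / white-channel baseline (illustrative sketch only): the
# Shannon-optimal MSE and the MSE of uncoded linear transmission coincide.
import math

def shannon_optimal_mse(var_s, p, n):
    # Point-to-point optimum at matched bandwidth: D = var_s / (1 + P/N).
    return var_s / (1.0 + p / n)

def linear_analog_mse(var_s, p, n):
    # Uncoded transmission x = sqrt(P/var_s) * s, followed by an MMSE estimate.
    return var_s * n / (p + n)

if __name__ == "__main__":
    var_s, p, n = 1.0, 10.0, 1.0
    print(shannon_optimal_mse(var_s, p, n))   # 1/11
    print(linear_analog_mse(var_s, p, n))     # 1/11: uncoded is already optimal here
```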
ISBN (print): 0819444154
It has recently been discovered that many current applications, such as data hiding and watermarking, can be posed as the problem of channel coding with side information. As a result, there has been considerable interest in designing codes (see Chou et al. [4], Kesal et al. [5], and Eggers et al. [6]) that approach the theoretical capacity of the problem. It was shown by Pradhan et al. that in order to achieve capacity, a powerful channel codebook that partitions into powerful source codebooks should be chosen. The data to be embedded index the source codebook partition. The constructions that exist in the literature, however, are typically based on powerful channel codebooks and weak source codebook partitions, and hence remain at a considerable gap from capacity. In this paper, we present several construction methods based on a powerful channel codebook (turbo codes) and powerful source codebook partitions (trellis-coded quantization) to bridge the gap to capacity. For the Gaussian channel coding with side information (CCSI) problem at a transmission rate of 1 bit/channel use, our proposed approach comes within 2.72 dB of the information-theoretic capacity established by Costa [1].
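The 2.72 dB figure can be read as an SNR offset: Costa's capacity C = 0.5 log2(1 + P/N) reaches 1 bit/channel use at P/N = 3, i.e. about 4.77 dB, so a scheme 2.72 dB from capacity needs roughly 7.49 dB. The short sketch below performs that back-of-the-envelope conversion; it is an illustrative calculation, not code from the paper.

```python
# Convert a target rate under Costa's capacity C = 0.5*log2(1 + P/N) into the
# required SNR in dB, and add a capacity gap (illustrative sketch only).
import math

def snr_db_required(rate_bits):
    snr = 2.0 ** (2.0 * rate_bits) - 1.0       # invert C = 0.5 * log2(1 + snr)
    return 10.0 * math.log10(snr)

if __name__ == "__main__":
    ideal = snr_db_required(1.0)               # about 4.77 dB at 1 bit/channel use
    print(ideal, ideal + 2.72)                 # with a 2.72 dB gap: about 7.49 dB
```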