Given a general source $\mathbf{X} = \{X^n\}_{n=1}^{\infty}$, source coding is characterized by a pair $(\varphi_n, \psi_n)$ of encoder $\varphi_n$ and decoder $\psi_n$, together with the probability of error $\varepsilon_n = \Pr\{\psi_n(\varphi_n(X^n)) \neq X^n\}$. If the length of the encoder output $\varphi_n(X^n)$ is fixed, the code is called a fixed-length source code, while if the length of $\varphi_n(X^n)$ is variable, it is called a variable-length source code. Usually, in fixed-length source coding the probability of error $\varepsilon_n$ is required to vanish asymptotically (i.e., $\lim_{n \to \infty} \varepsilon_n = 0$), whereas in variable-length source coding it is required to be exactly zero (i.e., $\varepsilon_n = 0$ for all $n = 1, 2, \ldots$). In contrast to these, we consider in the present paper the problem of variable-length source coding with asymptotically vanishing probability of error (i.e., $\lim_{n \to \infty} \varepsilon_n = 0$), and establish several fundamental theorems on this new subject.
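As a toy illustration of a code with asymptotically vanishing error (our own example, not a construction from the paper): encode a Bernoulli($p$) sequence only when its fraction of ones is close to $p$, and declare an error otherwise. By the law of large numbers, $\varepsilon_n \to 0$. A minimal sketch:

```python
from math import comb

def error_prob(n, p=0.3, delta=0.05):
    """Probability that the empirical fraction of ones in a Bernoulli(p)^n
    sequence falls outside [p - delta, p + delta]; such sequences are left
    unencoded, so this is the code's error probability epsilon(n)."""
    lo, hi = (p - delta) * n, (p + delta) * n
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if not (lo <= k <= hi))

# epsilon(n) shrinks as the block length n grows
eps = [error_prob(n) for n in (50, 200, 800)]
```

Only roughly $2^{nH(p)}$ sequences need codewords here, which is the usual source of the rate savings over lossless fixed-length coding.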
The variable-length source coding problem allowing an error probability up to some constant is considered for general sources. In this problem, the optimum mean codeword length of variable-length codes has already been determined. In this paper, we focus instead on the overflow (or excess codeword length) probability. The infimum of overflow thresholds under the constraint that both the error probability and the overflow probability are smaller than or equal to some constant is called the optimum overflow threshold. We first derive finite-length upper and lower bounds on these probabilities in order to analyze the optimum overflow thresholds. Then, using these bounds, we determine the general formula for the optimum overflow thresholds in both first-order and second-order forms. Next, we consider another expression of the derived general formula in order to reveal its relationship with the optimum coding rate in the fixed-length source coding problem. Finally, we apply the general formula to the case of stationary memoryless sources.
In variable-length coding, the probability of the codeword length per source letter being above (resp. below) a prescribed threshold is called the overflow (resp. underflow) probability. In this paper, we show that the infimum achievable threshold given the overflow probability exponent r always coincides with the infimum achievable fixed-length coding rate given the error exponent r, without any assumptions on the source. For the underflow probability, we show similar results. From these results, we can utilize various theorems and results on fixed-length coding established by Han for the analysis of overflow and underflow probabilities. Moreover, we generalize the above results to the case of overflow and underflow probabilities of codeword cost.
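The central coincidence can be stated compactly; the notation below is ours, chosen for illustration, and need not match the paper's:

```latex
% R_ov(r | X): infimum threshold R such that some variable-length code
%              achieves  Pr{ (1/n) l(varphi_n(X^n)) > R } <= e^{-nr}
% R_f(r | X):  infimum fixed-length coding rate achievable with
%              error probability at most e^{-nr}
R_{\mathrm{ov}}(r \mid \mathbf{X}) \;=\; R_{\mathrm{f}}(r \mid \mathbf{X})
\qquad \text{for every exponent } r > 0,
```

with an analogous identity for the underflow probability against the corresponding fixed-length quantity.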
Suppose we are given a random source and want to use it as a random number generator; at what rate can we generate fair bits from it? We address this question in an information-theoretic setting by allowing for some arbitrarily small but nonzero deviation from "ideal" random bits. We prove our results with three different measures of approximation between the ideal and the obtained probability distributions: the variational distance, the $\bar{d}$ distance, and the normalized divergence. Two different contexts are studied: fixed-length and variable-length random number generation. The fixed-length results of this paper provide an operational characterization of the inf-entropy rate of a source, defined in Han and Verdu [1], and the variable-length results characterize the liminf of the entropy rate, thereby establishing a pleasing duality with the fundamental limits of source coding. A feature of our results is that we do not restrict ourselves to ergodic or stationary sources.
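The classical von Neumann procedure is perhaps the simplest variable-length random number generator of this kind; it is a textbook example, not a construction from the paper. It turns an i.i.d. biased bit stream into exactly fair bits, though at a rate below the entropy that the paper's limits allow:

```python
import random

def von_neumann(bits):
    """Extract fair bits from an i.i.d. biased bit stream: read pairs,
    output the first bit of '01' or '10', discard '00' and '11'.
    The output is unbiased whatever the (unknown) source bias is."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

rng = random.Random(0)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]
fair = von_neumann(biased)
# extraction rate is about p(1-p) = 0.16 output bits per input bit
```

Unbiasedness holds because '01' and '10' are equiprobable for any bias; the cost is that most input pairs are discarded.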
Source distributions that can be encoded with zero redundancy for the case of unequal code symbol costs are examined. These distributions provide a natural generalization of the binary, equal-cost case, for which they are the dyadic distributions. These zero-redundancy codes have the property that the expected proportion of codeword symbols given by a particular letter is an exponential function of that code letter's cost. The converse is not true in general; however, partial converse results hold. Maximum-entropy zero-redundancy distributions are easily identified through their connection with unequal-cost coding for uniform sources.
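The binary, equal-cost special case is easy to check numerically: for a dyadic distribution, the Huffman code has zero redundancy, i.e. its expected length equals the entropy exactly. A small sketch (the helper below is our own, not from the paper):

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code for the given distribution."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)  # merge the two least probable subtrees
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1           # every merged leaf gets one bit deeper
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

dyadic = [1/2, 1/4, 1/8, 1/8]         # all probabilities are powers of 2
L = sum(p * l for p, l in zip(dyadic, huffman_lengths(dyadic)))
H = -sum(p * log2(p) for p in dyadic)
# zero redundancy: L equals H for a dyadic distribution
```

For a non-dyadic distribution, L strictly exceeds H, which is the gap the unequal-cost generalization characterizes.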