A coding theorem and converse are proved for a large class of abstract stationary channels with time structure, including the result by Kadota and Wyner (1972) on continuous-time real-valued channels as a special case. As the main contribution, the coding theorem is proved for a significantly weaker condition on the channel output memory, called total ergodicity with respect to finite-alphabet block-memoryless input sources, and under a crucial relaxation of the measurability requirement for the channel. These improvements are achieved by introducing a suitable characterization of information rate capacity. It is shown that the ψ-mixing output memory condition used by Kadota and Wyner is quite restrictive and excludes important channel models, in particular from the class of Gaussian channels. In fact, it is proved that for Gaussian (e.g., fading or additive noise) channels, the ψ-mixing condition is equivalent to finite output memory. Furthermore, it is demonstrated that the measurability requirement of Kadota and Wyner is not satisfied for relevant continuous-time channel models such as linear filters, whereas the condition used in this paper is satisfied for these models. Moreover, a weak converse is derived for all stationary channels with time structure. Intersymbol interference as well as input constraints are taken into account in a general and flexible way, including amplitude and average power constraints as special cases. Formulated in rigorous mathematical terms, complete, explicit, and transparent proofs are presented. As a side product, a gap in the proof of Kadota and Wyner, exposed by a counterexample, is closed by providing a corrected proof of a lemma on the monotonicity of a sequence of normalized mutual information quantities. An operational perspective is taken, and an abstract framework is established, which allows discrete- and continuous-time channels with (possibly infinite) input and output memory and arbitrary alphabets to be treated simultaneously in a unified way.
ISBN:
(Print) 9781509034017
This paper presents three new ensembles of low-density generator matrix (LDGM) codes, all of which are systematic codes. We prove that, under maximum likelihood (ML) decoding, these codes can achieve the capacity of binary-input output-symmetric (BIOS) memoryless channels in terms of bit-error rate (BER). The proof technique is different from that for totally random block code ensembles. The recently proposed systematic block Markov superposition transmission (BMST) code is treated as a variant of the presented ensembles, exhibiting capacity-approaching performance with an iterative belief propagation decoding algorithm.
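The systematic structure described in the abstract can be illustrated with a toy encoder. The sketch below builds a generator matrix of the form G = [I | P] with a sparse parity part P over GF(2), so every codeword carries the information bits verbatim followed by low-density parity bits. The construction (row weight, parameters) is an illustrative assumption, not one of the paper's three ensembles:

```python
import random

def make_systematic_ldgm(k, n_parity, row_weight, seed=0):
    """Build a sparse parity part of a systematic LDGM code over GF(2).

    Each parity bit is the XOR of `row_weight` randomly chosen information
    bits, i.e. the generator matrix is G = [I | P] with low-density P.
    (Toy construction for illustration only.)
    """
    rng = random.Random(seed)
    # For each parity position, record which info bits feed into it.
    return [rng.sample(range(k), row_weight) for _ in range(n_parity)]

def encode(info_bits, taps):
    """Systematic encoding: codeword = info bits followed by parity bits."""
    parity = []
    for tap in taps:
        p = 0
        for i in tap:
            p ^= info_bits[i]  # XOR over GF(2)
        parity.append(p)
    return info_bits + parity

taps = make_systematic_ldgm(k=8, n_parity=4, row_weight=3)
cw = encode([1, 0, 1, 1, 0, 0, 1, 0], taps)
print(len(cw))  # 12: 8 systematic bits + 4 parity bits
```

Because the code is systematic, decoding failures affect the information bits directly, which is why BER (rather than FER) is the natural figure of merit here.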
ISBN:
(Print) 9781538635995
This paper is concerned with three ensembles of systematic low-density generator matrix (LDGM) codes, all of which were proved to be capacity-achieving in terms of bit error rate (BER). This, however, does not necessarily imply that they achieve the capacity in terms of frame error rate (FER), as seen from a counterexample constructed in this paper. We then show that the first and second ensembles are capacity-achieving under list decoding over binary-input output-symmetric (BIOS) memoryless channels. We point out that, in principle, the equivocation due to list decoding can be removed with negligible rate loss by the use of concatenated codes. Simulation results show that the considered convolutional (spatially coupled) LDGM code is capacity-approaching with an iterative belief propagation decoding algorithm.
A coding theorem is proved for a class of stationary channels with feedback in which the output $Y_n = f(X_{n-m}^n, Z_{n-m}^n)$ is a function of the current and past $m$ symbols from the channel input $X_n$ and the stationary ergodic channel noise $Z_n$. In particular, it is shown that the feedback capacity is equal to
$$C_{\mathrm{FB}} = \lim_{n \to \infty} \sup_{p(x^n \| y^{n-1})} \frac{1}{n} I(X^n \to Y^n),$$
where $I(X^n \to Y^n) = \sum_{i=1}^{n} I(X^i; Y_i \mid Y^{i-1})$ denotes the Massey directed information from the channel input to the output, and the supremum is taken over all causally conditioned distributions $p(x^n \| y^{n-1}) = \prod_{i=1}^{n} p(x_i \mid x^{i-1}, y^{i-1})$. The main ideas of the proof are a classical application of the Shannon strategy for coding with side information and a new elementary coding technique for the given channel model without feedback, which is in a sense dual to Gallager's lossy coding of stationary ergodic sources. A similar approach gives a simple alternative proof of coding theorems for finite-state channels by Yang–Kavčić–Tatikonda, Chen–Berger, and Permuter–Weissman–Goldsmith.
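The directed-information quantity in the abstract can be evaluated by brute force for small alphabets. The sketch below computes $I(X^n \to Y^n) = \sum_i I(X^i; Y_i \mid Y^{i-1})$ from a joint distribution, using the identity $I(A;B\mid C) = \sum p(a,b,c)\log_2\frac{p(a,b,c)\,p(c)}{p(a,c)\,p(b,c)}$. The test case (a binary symmetric channel used twice, i.i.d. uniform input, no feedback, $\varepsilon = 0.1$) is an illustrative choice, not a model from the paper; without feedback the directed information reduces to the ordinary mutual information $n(1 - H(\varepsilon))$:

```python
from itertools import product
from math import log2
from collections import defaultdict

def directed_information(joint, n):
    """Massey directed information I(X^n -> Y^n) = sum_{i=1}^n I(X^i; Y_i | Y^{i-1}),
    where `joint` maps (x_tuple, y_tuple) -> probability."""
    def marginal(fn):
        m = defaultdict(float)
        for (x, y), p in joint.items():
            m[fn(x, y)] += p
        return m
    total = 0.0
    for i in range(1, n + 1):
        pabc = marginal(lambda x, y: (x[:i], y[:i]))      # p(x^i, y^i)
        pac = marginal(lambda x, y: (x[:i], y[:i - 1]))   # p(x^i, y^{i-1})
        pbc = marginal(lambda x, y: y[:i])                # p(y^i)
        pc = marginal(lambda x, y: y[:i - 1])             # p(y^{i-1})
        for (x, y), p in pabc.items():
            if p > 0:
                total += p * log2(p * pc[y[:i - 1]] / (pac[(x, y[:i - 1])] * pbc[y]))
    return total

# Binary symmetric channel used twice, i.i.d. uniform input, no feedback:
eps, n = 0.1, 2
joint = {}
for x in product((0, 1), repeat=n):
    for y in product((0, 1), repeat=n):
        p = 0.5 ** n
        for xi, yi in zip(x, y):
            p *= (1 - eps) if xi == yi else eps
        joint[(x, y)] = p

di = directed_information(joint, n)
# Without feedback this equals I(X^n; Y^n) = n * (1 - H(eps)) ≈ 1.062
print(round(di, 3))
```

With actual feedback the causal conditioning matters and directed information is strictly smaller than mutual information in general, which is why the capacity formula above optimizes over $p(x^n \| y^{n-1})$ rather than over $p(x^n)$.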
A coding theorem is proved for the secret sharing communication system (SSCS) with two Gaussian wiretap channels. This communication system is an extension of both the SSCS with two noiseless channels and the Gaussian wiretap channel (GWC). The admissible region of rates and security levels for the SSCS with two GWCs is described in terms of the capacities and secrecy capacities of the two GWCs.
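Since the admissible region is described via the secrecy capacities of the component channels, it may help to recall the secrecy capacity of a single degraded GWC (Leung-Yan-Cheong and Hellman): the main-channel capacity minus the wiretap-channel capacity, clipped at zero. The sketch below computes it for illustrative parameter values (the model where the wiretapper sees the main-channel output plus extra noise is the standard degraded setup, not a construction from this paper):

```python
from math import log2

def awgn_capacity(snr):
    """Capacity of a real AWGN channel in bits per channel use."""
    return 0.5 * log2(1 + snr)

def gwc_secrecy_capacity(P, N_main, N_wiretap_extra):
    """Secrecy capacity of the degraded Gaussian wiretap channel:
    C_s = [C_main - C_wiretap]^+, where the wiretapper observes the
    main-channel output corrupted by additional noise."""
    c_main = awgn_capacity(P / N_main)
    c_wire = awgn_capacity(P / (N_main + N_wiretap_extra))
    return max(c_main - c_wire, 0.0)

# Illustrative numbers: power 10, main noise 1, extra wiretap noise 4.
print(gwc_secrecy_capacity(P=10.0, N_main=1.0, N_wiretap_extra=4.0))
```

If the wiretapper's extra noise vanishes, both channels coincide and the secrecy capacity drops to zero, matching the intuition that no secrecy is possible against an eavesdropper with the legitimate receiver's observation.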
There are numerous scenarios in source coding where not only the code length but also the importance of each value should be taken into account. Departing from the traditional coding theorems, we add importance weights to the codeword lengths and define the resulting average cost of the weighted codeword length as an importance-aware measure of the codes. This novel information-theoretic measure generalizes the average codeword length by assigning an importance weight to each symbol according to the users' concerns, focusing on the users' selections. With these definitions, coding theorems for the bounds are derived, and the results are shown to be extensions of the traditional coding theorems.
Recently, Schmidhuber proposed a new concept of generalized algorithmic complexity. It allows for the description of both finite and infinite sequences. The resulting distributions are true probabilities rather than semimeasures. We clarify some points for this setting, concentrating on Enumerable Output Machines. As our main result, we prove a strong coding theorem (without logarithmic correction terms), which was left as an open problem. To this purpose, we introduce a more natural definition of generalized complexity. (C) 2004 Elsevier B.V. All rights reserved.
Authors:
Kobayashi, K.
Laboratory of Chromatography, Dep. Fac. Química, Universidad Nacional Autónoma de México, Circuito Interior, Cd. Universitaria, CP 04510, México D.F., México
A function $h(w)$ is said to be useful for the coding theorem if the coding theorem remains true when the lengths $|w|$ of codewords $w$ in it are replaced with $h(w)$. For a codeword $w = a_0 a_1 \cdots a_{m-1}$ of length $m$ and an infinite sequence $Q = (q_0, q_1, q_2, \ldots)$ of real numbers such that $0 < q_n \le 1/2$, let $|w|_Q$ denote the value $\sum_{n=0}^{m-1} (\text{if } a_n = 0 \text{ then } -\log_2 q_n \text{ else } -\log_2(1 - q_n))$, that is, $-\log_2$ (the probability that coin flips generate $w$), assuming that the $(i+1)$-th coin generates a tail (or 0) with probability $q_i$. It is known that if $0 < \liminf_{n \to \infty} q_n$ then $|w|_Q$ is useful for the coding theorem, and if $\lim_{n \to \infty} q_n / (1/2^n) = 0$ then $|w|_Q$ is not useful. We introduce several variations of the coding theorem and show sufficient conditions for $h(w)$ not to be useful for these variations. The usefulness is also defined for the expressions that define the Kolmogorov complexity and the universal distribution. We consider the relation among the usefulness for the coding theorem, that for the Kolmogorov complexity, and that for the universal distribution.
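The quantity $|w|_Q$ defined above is straightforward to compute directly from its definition. The following sketch does so for illustrative choices of $w$ and $Q$; note that with all $q_n = 1/2$ it collapses to the ordinary length $|w|$, since every bit then costs exactly one:

```python
from math import log2

def length_Q(w, Q):
    """|w|_Q = -log2 of the probability that biased coin flips produce w,
    where the (n+1)-th coin gives a 0 with probability q_n (0 < q_n <= 1/2)."""
    total = 0.0
    for n, a in enumerate(w):
        q = Q[n]
        total += -log2(q) if a == 0 else -log2(1 - q)
    return total

# With all q_n = 1/2, |w|_Q is just the ordinary length |w|:
print(length_Q([0, 1, 1, 0], [0.5] * 4))  # 4.0
# Skewed coins make 1s cheap and 0s expensive:
print(length_Q([0, 1, 1, 0], [0.25] * 4))
```

The two conditions quoted in the abstract delimit how fast the $q_n$ may decay before $|w|_Q$ stops behaving like a codeword length for the purposes of the coding theorem.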
Kerridge introduced a measure known as inaccuracy for complete probability distributions, which is a generalisation of Shannon's entropy. In this paper we study a grouping property of the inaccuracy. We also establish a coding theorem for personal codes by considering the inaccuracy of order α and the generalised mean length of order t under a certain condition.
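Kerridge's (order-1) inaccuracy is $H(P;Q) = -\sum_i p_i \log_2 q_i$: the expected codeword cost when the true distribution is $P$ but the code is designed for $Q$. The sketch below shows how it reduces to Shannon's entropy when $Q = P$ and exceeds it otherwise (the order-α variant in the abstract generalises this further; the numbers are illustrative):

```python
from math import log2

def inaccuracy(p, q):
    """Kerridge's inaccuracy H(P; Q) = -sum_i p_i * log2(q_i).
    With Q = P it reduces to Shannon's entropy H(P); in general
    H(P; Q) = H(P) + D(P || Q) >= H(P)."""
    return -sum(pi * log2(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]
print(inaccuracy(p, p))                  # 1.5 = H(P)
print(inaccuracy(p, [1/3, 1/3, 1/3]))    # larger: entropy plus KL divergence
```

The excess of inaccuracy over entropy is exactly the Kullback-Leibler divergence, which is what a coding theorem for mismatched ("personal") codes has to bound.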
ISBN:
(Print) 1424414296
A coding theorem is proved for a class of stationary channels with feedback in which the output $Y_n = f(X_{n-m}^n, Z_{n-m}^n)$ is a function of the current and past $m$ symbols from the channel input $X_n$ and the stationary ergodic channel noise $Z_n$. In particular, it is shown that the feedback capacity is equal to
$$C_{\mathrm{FB}} = \lim_{n \to \infty} \sup_{p(x^n \| y^{n-1})} \frac{1}{n} I(X^n \to Y^n),$$
where $I(X^n \to Y^n) = \sum_{i=1}^{n} I(X^i; Y_i \mid Y^{i-1})$ denotes the Massey directed information from the channel input to the output, and the supremum is taken over all causally conditioned distributions $p(x^n \| y^{n-1}) = \prod_{i=1}^{n} p(x_i \mid x^{i-1}, y^{i-1})$. The main ideas of the proof are a classical application of the Shannon strategy for coding with side information and a new elementary coding technique for the given channel model without feedback, which is in a sense dual to Gallager's lossy coding of stationary ergodic sources. A similar approach gives a simple alternative proof of coding theorems for finite-state channels by Yang–Kavčić–Tatikonda, Chen–Berger, and Permuter–Weissman–Goldsmith.