There are numerous scenarios in source coding where not only the code length but also the importance of each value should be taken into account. Departing from the traditional coding theorems, we add importance weights to the codeword lengths and define the resulting average weighted codeword length as an importance-aware measure of a code. This novel information-theoretic measure generalizes the average codeword length by assigning each symbol an importance weight that reflects the user's concerns and selections. With these definitions, coding theorems bounding the measure are derived, and the results are shown to extend the traditional coding theorems.
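The abstract does not reproduce the exact definition; as a toy illustration, one plausible importance-aware measure scales each symbol's length contribution by a user-assigned weight. The normalization below is an assumption for the sketch, not the paper's definition:

```python
def average_length(probs, lengths):
    """Classical average codeword length: sum_i p_i * l_i."""
    return sum(p * l for p, l in zip(probs, lengths))

def weighted_length(probs, lengths, weights):
    """Importance-weighted length: sum_i w_i p_i l_i / sum_i w_i p_i
    (one plausible normalization; the paper's measure may differ)."""
    num = sum(w * p * l for w, p, l in zip(weights, probs, lengths))
    den = sum(w * p for w, p in zip(weights, probs))
    return num / den

probs   = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]            # lengths of a Huffman code for these probs
weights = [1.0, 1.0, 4.0, 4.0]    # the user cares more about the rare symbols

print(average_length(probs, lengths))                # 1.75 bits (classical value)
print(weighted_length(probs, lengths, weights))      # larger: long rare codewords now dominate
```

Up-weighting the rare symbols pulls the measure toward their longer codewords, which is the kind of user-dependent trade-off the paper's bounds quantify.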
A coding theorem and converse are proved for a large class of abstract stationary channels with time structure, including the result by Kadota and Wyner (1972) on continuous-time real-valued channels as a special case. As the main contribution, the coding theorem is proved under a significantly weaker condition on the channel output memory, called total ergodicity with respect to finite-alphabet block-memoryless input sources, and under a crucial relaxation of the measurability requirement for the channel. These improvements are achieved by introducing a suitable characterization of the information rate capacity. It is shown that the psi-mixing output memory condition used by Kadota and Wyner is quite restrictive and excludes important channel models, in particular from the class of Gaussian channels. In fact, it is proved that for Gaussian (e.g., fading or additive noise) channels, the psi-mixing condition is equivalent to finite output memory. Furthermore, it is demonstrated that the measurability requirement of Kadota and Wyner is not satisfied for relevant continuous-time channel models such as linear filters, whereas the condition used in this paper is satisfied for these models. Moreover, a weak converse is derived for all stationary channels with time structure. Intersymbol interference as well as input constraints are taken into account in a general and flexible way, including amplitude and average power constraints as special cases. Complete, explicit, and transparent proofs are presented in rigorous mathematical terms. As a side product, a gap in the proof of Kadota and Wyner, exposed here by a counterexample, is closed by providing a corrected proof of a lemma on the monotonicity of a sequence of normalized mutual information quantities. An operational perspective is taken, and an abstract framework is established which allows discrete- and continuous-time channels with (possibly infinite) input and output memory and arbitrary alphabets to be treated simultaneously in a unified manner.
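As a schematic reminder (notation simplified; the paper's abstract framework refines this considerably), the information rate capacity of a stationary channel with time structure has the form

```latex
% Information rate capacity: normalized mutual information in the limit of
% long blocks, with the supremum over admissible (constraint-satisfying)
% input processes on the time interval [0, T].
C \;=\; \lim_{T \to \infty} \frac{1}{T}\, \sup_{X^T} \, I\!\left(X^T; Y^T\right).
```

The monotonicity lemma mentioned above concerns the behavior of the normalized mutual information terms inside this limit.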
ISBN:
(Print) 9781538635995
This paper is concerned with three ensembles of systematic low-density generator matrix (LDGM) codes, all of which were previously proved to be capacity-achieving in terms of bit error rate (BER). This, however, does not necessarily imply that they achieve capacity in terms of frame error rate (FER), as a counterexample constructed in this paper shows. We then show that the first and second ensembles are capacity-achieving under list decoding over binary-input output-symmetric (BIOS) memoryless channels. We point out that, in principle, the equivocation due to list decoding can be removed with negligible rate loss by the use of concatenated codes. Simulation results show that the considered convolutional (spatially coupled) LDGM code is capacity-approaching under an iterative belief propagation decoding algorithm.
ISBN:
(Print) 9781509034017
This paper presents three new ensembles of low-density generator matrix (LDGM) codes, all of which are systematic. We prove that, under maximum likelihood (ML) decoding, these codes can achieve the capacity of binary-input output-symmetric (BIOS) memoryless channels in terms of bit-error rate (BER). The proof technique differs from that used for totally random block code ensembles. The recently proposed systematic block Markov superposition transmission (BMST) code is treated as a variant of the presented ensembles, exhibiting capacity-approaching performance under an iterative belief propagation decoding algorithm.
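A systematic LDGM encoder is simple to sketch: the codeword is the information word followed by its product with a sparse random generator part over GF(2). The sizes and density below are illustrative choices, not parameters from the paper:

```python
import random

def bernoulli_matrix(k, m, rho, rng):
    """k x m binary matrix with i.i.d. Bernoulli(rho) entries (sparse when rho is small)."""
    return [[1 if rng.random() < rho else 0 for _ in range(m)] for _ in range(k)]

def ldgm_encode(u, P):
    """Systematic LDGM encoding over GF(2): c = [u | u P]."""
    m = len(P[0])
    parity = [0] * m
    for i, ui in enumerate(u):
        if ui:                       # accumulate row i of P into the parity part
            for j in range(m):
                parity[j] ^= P[i][j]
    return u + parity

rng = random.Random(0)
k, m, rho = 8, 8, 0.25               # illustrative sizes, not from the paper
P = bernoulli_matrix(k, m, rho, rng)
u = [1, 0, 1, 1, 0, 0, 1, 0]
c = ldgm_encode(u, P)
print(c[:k] == u)                    # True: the code is systematic
```

Because encoding is linear, the sum (XOR) of two codewords is again a codeword, which is what the ensemble arguments exploit.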
ISBN:
(Print) 9781665421607; 9781665421591
A new framework for proving coding theorems for linear codes is presented in this paper, in which the systematic bits and the corresponding parity-check bits play different roles. Precisely, the noisy systematic bits are used to limit the list size of typical codewords, while the noisy parity-check bits are used to select the maximum likelihood codeword from the list. This framework allows the systematic bits and the parity-check bits to be transmitted in different ways and over different channels. In particular, it unifies the source coding theorems and the channel coding theorems. With this framework, we prove that Bernoulli generator matrix codes (BGMCs) are capacity-achieving over binary-input output-symmetric (BIOS) channels and entropy-achieving for Bernoulli sources.
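The two-stage role division can be illustrated with a brute-force toy over a binary symmetric channel: the noisy systematic part restricts attention to a small list of information words, and the noisy parity part picks one of them. The code, sizes, and radius below are hypothetical, chosen only to make the mechanism visible:

```python
import itertools

def encode(u, P):
    """Systematic linear encoding over GF(2): c = [u | u P]."""
    m = len(P[0])
    parity = [0] * m
    for i, ui in enumerate(u):
        if ui:
            for j in range(m):
                parity[j] ^= P[i][j]
    return u + parity

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def list_then_select(r, P, radius):
    """Stage 1: list all info words within `radius` of the noisy systematic part
    (the list-limiting role of the systematic bits).
    Stage 2: among those, pick the codeword whose parity section is closest to
    the received parity bits (the selecting role of the parity-check bits).
    Brute force; toy sizes only."""
    k = len(P)
    r_sys, r_par = r[:k], r[k:]
    candidates = [list(u) for u in itertools.product([0, 1], repeat=k)
                  if hamming(u, r_sys) <= radius]
    return min(candidates, key=lambda u: hamming(encode(u, P)[k:], r_par))

P = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]    # hypothetical 3x3 parity pattern
u = [1, 0, 1]
c = encode(u, P)                          # [1, 0, 1, 0, 1, 1]
r = c[:]; r[1] ^= 1                       # flip one systematic bit
print(list_then_select(r, P, radius=1))   # recovers [1, 0, 1]
```

Nothing here requires the two sections to see the same channel, which is the flexibility the framework formalizes.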
ISBN:
(Print) 9781538682098
Recently, the delta-mutual information between uncertain variables has been introduced as a generalization of Nair's non-stochastic mutual information functional [1, 2]. Within this framework, we introduce four different notions of capacity and present corresponding coding theorems. Our definitions include an analogue of Shannon's capacity in a non-stochastic setting, and a generalization of the zero-error capacity. The associated coding theorems hold for stationary, memoryless, non-stochastic uncertain channels. These results establish the relationship between the delta-mutual information and our operational definitions, providing a step towards the development of a complete non-stochastic information theory.
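For orientation, the classical single-use lower bound on zero-error capacity (which the paper generalizes through delta-mutual information) is the logarithm of the maximum independent set of the confusability graph. A brute-force sketch on Shannon's noisy-typewriter example, not the paper's construction:

```python
import itertools, math

def max_independent_set(n, edges):
    """Brute-force maximum independent set size of a graph on vertices 0..n-1."""
    edge_set = {frozenset(e) for e in edges}
    for r in range(n, 0, -1):                      # try the largest sizes first
        for s in itertools.combinations(range(n), r):
            if all(frozenset(p) not in edge_set
                   for p in itertools.combinations(s, 2)):
                return r
    return 0

# Noisy typewriter on 5 symbols: input i may be received as i or i+1 (mod 5),
# so inputs i and i+1 are confusable; the confusability graph is a 5-cycle.
edges = [(i, (i + 1) % 5) for i in range(5)]
k = max_independent_set(5, edges)
print(k, math.log2(k))   # 2 distinguishable inputs -> single-use rate log2(2) = 1
```

Multi-letter use of the channel can beat this single-use bound, which is exactly why zero-error capacity is defined as a limit over blocklengths.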
ISBN:
(Print) 9798331516758; 9798331516741
The coding theorem for Kolmogorov complexity states that any string sampled from a computable distribution has a description length close to its information content. A coding theorem for resource-bounded Kolmogorov complexity is the key to obtaining fundamental results in average-case complexity, yet whether any samplable distribution admits a coding theorem for randomized time-bounded Kolmogorov complexity (rK(poly)) is open and a common bottleneck in the recent literature of meta-complexity. Previous works bypassed this issue by considering probabilistic Kolmogorov complexity (pK(poly)), in which public random bits are assumed to be available. In this paper, we present an efficient coding theorem for randomized Kolmogorov complexity under the non-existence of one-way functions, thereby removing the common bottleneck. This enables us to prove rK(poly) counterparts of virtually all the average-case results that were proved only for pK(poly), and enables the resolution of the following concrete open problems. 1) The existence of a one-way function is characterized by the failure of average-case symmetry of information for randomized time-bounded Kolmogorov complexity, as well as a conditional coding theorem for randomized time-bounded Kolmogorov complexity. This resolves the open problem of Hirahara, Ilango, Lu, Nanashima, and Oliveira (STOC'23). 2) Hirahara, Kabanets, Lu, and Oliveira (CCC'24) showed that randomized time-bounded Kolmogorov complexity admits search-to-decision reductions in the errorless average-case setting over any samplable distribution, and left open whether a similar result holds in the error-prone setting. We resolve this question affirmatively, and as a consequence, characterize the existence of a one-way function by the average-case hardness of computing rK(poly) with respect to an arbitrary samplable distribution, which is an rK(poly) analogue of the pK(poly) characterization of Liu and Pass (CRYPTO'23). The key technical lemma is that any d
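For reference, the classical coding theorem invoked in the first sentence can be stated as follows, for prefix-free Kolmogorov complexity K; the additive constant hides a term depending on a description of the distribution mu:

```latex
% Coding theorem: a string sampled from a computable distribution \mu has a
% description length close to its information content -\log_2 \mu(x).
K(x) \;\le\; \log_2 \frac{1}{\mu(x)} \;+\; c_{\mu},
```

where the constant c_mu depends on mu but not on x. The paper's contribution concerns the much harder time-bounded analogue of this statement.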
ISBN:
(Print) 9781450392648
We show broad equivalences in the average-case complexity of many different meta-complexity problems, including Kolmogorov complexity, time-bounded Kolmogorov complexity, and the Minimum Circuit Size Problem. These results hold for a wide range of parameters (various thresholds, approximation gaps, weak or strong average-case hardness, etc.) and complexity notions, showing that the theory of meta-complexity is very robust in the average-case setting. Our results are shown by establishing new and generic connections between meta-complexity and the theory of pseudorandomness and one-way functions. Using these connections, we give the first unconditional characterization of one-way functions based on the average-case hardness of the Minimum Circuit Size Problem. We also give a surprising and clean characterization of one-way functions based on the average-case hardness of (the worst-case uncomputable) Kolmogorov complexity. Moreover, the latter is the first characterization of one-way functions based on the average-case hardness of a fixed problem on any samplable distribution. We give various applications of these results to the foundations of cryptography and the theory of meta-complexity. For example, we show that the average-case hardness of deciding k-SAT or Clique on any samplable distribution of high enough entropy implies the existence of one-way functions. We also use our results to unconditionally solve various meta-complexity problems in CZK (computational zero-knowledge) on average, and give implications of our results for the classic question of proving NP-hardness for the Minimum Circuit Size Problem.
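The object being characterized throughout is the one-way function, whose standard definition is recalled here for orientation (the abstract's results concern exactly when such an f exists):

```latex
% f is one-way if it is easy to compute but hard to invert on average:
f \text{ is one-way} \iff
\begin{cases}
f \text{ is computable in polynomial time, and}\\[2pt]
\forall \text{ PPT } A:\;
\Pr_{x \sim \{0,1\}^n}\!\bigl[\, A(f(x), 1^n) \in f^{-1}(f(x)) \,\bigr]
\le \mathrm{negl}(n).
\end{cases}
```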
In this paper, we propose a systematic low-density generator matrix (LDGM) code ensemble defined by a Bernoulli process. We prove that, under maximum likelihood (ML) decoding, the proposed ensemble can achieve the capacity of binary-input output-symmetric (BIOS) memoryless channels in terms of bit error rate (BER). The proof technique reveals a new mechanism, different from lowering the frame error rate (FER), by which the BER can be reduced: assigning low-weight codewords to low-weight information vectors. The finite-length performance is analyzed by deriving upper and lower bounds, both of which are shown to be tight in the high signal-to-noise ratio (SNR) region. To improve the waterfall performance, we construct systematic convolutional LDGM (SysConv-LDGM) codes by a random splitting process. The SysConv-LDGM codes are easily configurable in the sense that any rational code rate can be realized without complex optimization. As a universal construction, the main advantage of the SysConv-LDGM codes is their near-capacity performance in the waterfall region and their predictable performance in the error-floor region, which can be lowered to any target by increasing the density of the uncoupled LDGM codes. Numerical results are provided to verify our analysis.
In the present communication, a parametric R-norm fuzzy entropy of a fuzzy variable and, subsequently, an R-norm fuzzy directed divergence measure between fuzzy variables are proposed by taking the credibility distribution into account, together with proofs of their validity. Based on the proposed information measures, several new results, along with their various properties and monotonic behavior, are studied in detail. Further, important results and inequalities related to the corresponding noiseless coding theorems are established. Particular cases relating the proposed fuzzy information measures and the fuzzy mean codeword length to existing results are also discussed. In view of the monotonic behavior of the fuzzy mean codeword length, an application is presented that chooses an appropriate value of the parameter R so as to significantly reduce the fuzzy mean codeword length of an optimal code.
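For context, the classical (crisp) R-norm entropy of Boekee and Van der Lubbe, which the fuzzy credibility-based measure presumably parallels (the paper's exact fuzzy definition is not reproduced here), is:

```latex
H_R(P) \;=\; \frac{R}{R-1}\left[\, 1 - \Bigl(\sum_{i=1}^{n} p_i^{R}\Bigr)^{1/R} \right],
\qquad R > 0,\; R \neq 1,
```

which recovers the Shannon entropy in the limit R -> 1; the parameter R is what the application above tunes to shorten the mean codeword length.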