In this letter, we show that Huffman's source coding method is not optimal for cache-aided networks. To that end, we propose an optimal algorithm for the cache-aided source coding problem. We define cache-aided entropy, which represents a lower bound on the average number of transmitted bits for cache-aided networks. A sub-optimal low-complexity cache-aided coding algorithm is presented. In addition, we propose a novel polynomial-time algorithm that obtains the globally optimal source code for a wide range of cache sizes. Simulation results show a reduction in the average number of transmitted bits by more than 50% over Huffman's method at moderate cache sizes.
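For context, the baseline that this letter improves on is the classical Huffman construction, whose average codeword length is what the cache-aided scheme reduces. A minimal sketch of that baseline (the probabilities are illustrative, and this is not the letter's cache-aided algorithm):

```python
import heapq

def huffman_code_lengths(probs):
    """Return Huffman codeword lengths for a probability distribution."""
    # Min-heap of (probability, unique id, leaf indices in this subtree).
    # The unique id breaks ties so the lists are never compared.
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    uid = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            lengths[s] += 1  # each merge adds one bit to every leaf below it
        heapq.heappush(heap, (p1 + p2, uid, s1 + s2))
        uid += 1
    return lengths

probs = [0.4, 0.3, 0.2, 0.1]          # hypothetical source distribution
lengths = huffman_code_lengths(probs)
avg_bits = sum(p * l for p, l in zip(probs, lengths))
print(lengths, avg_bits)              # average bits per symbol without a cache
```

The cache-aided entropy defined in the letter lower-bounds the transmitted bits below this Huffman average once part of the source is cached.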
This letter describes a practical Slepian-Wolf source coding scheme based on Low Density Parity Check (LDPC) codes. It considers the realistic setup where the parameters of the statistical model between the source and the side information are unknown. A novel Self-Corrected Belief-Propagation (SC-BP) algorithm is proposed in order to make the coding scheme robust to incorrect model parameters by introducing some memory inside the LDPC decoder. A Two-Dimensional Density Evolution (2D-DE) analysis is then developed to predict the theoretical performance of the SC-BP decoder. Both the 2D-DE analysis and Monte-Carlo simulations confirm the robustness of the SC-BP decoder. The proposed solution allows for a significant complexity reduction and performs very close to existing methods that jointly estimate the model parameters and the source sequence.
Deep neural networks have shown incredible performance for inference tasks in a variety of domains, but require significant storage space, which limits scaling and use for on-device intelligence. This paper is concerned with finding universal lossless compressed representations of deep feedforward networks with synaptic weights drawn from discrete sets, and directly performing inference without full decompression. The basic insight that allows less rate than naive approaches is recognizing that the bipartite graph layers of feedforward networks have a kind of permutation invariance to the labeling of nodes, in terms of inferential operation. We provide efficient algorithms to dissipate this irrelevant uncertainty and then use arithmetic coding to nearly achieve the entropy bound in a universal manner. We also provide experimental results of our approach on several standard datasets.
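The rate saving from the permutation invariance described above can be quantified: a layer with n interchangeable hidden nodes has n! functionally equivalent weight assignments, so roughly log2(n!) bits per layer carry no inferential information. A rough back-of-the-envelope sketch (the layer sizes and weight-alphabet size are hypothetical, and this is not the paper's compression algorithm):

```python
import math

def naive_bits(n_edges, alphabet_size):
    """Bits to store each synaptic weight independently from a discrete set."""
    return n_edges * math.log2(alphabet_size)

def label_invariance_saving(n_hidden):
    """Bits made redundant by permuting n interchangeable hidden nodes."""
    return math.log2(math.factorial(n_hidden))

n_in, n_hidden, K = 784, 256, 16  # hypothetical layer shape and weight alphabet
naive = naive_bits(n_in * n_hidden, K)
saving = label_invariance_saving(n_hidden)
print(f"naive: {naive:.0f} bits, label-invariance saving: {saving:.0f} bits")
```

The paper's contribution is an efficient way to dissipate exactly this labeling uncertainty and then approach the entropy bound with arithmetic coding.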
ISBN: (Print) 9781728193656
The continuous growth of traffic in telecommunication networks has motivated the search for optimal source codes that can achieve high compression of the information to be transmitted. However, the achievable compression rates are limited in practice by the type of messages to be encoded. For this reason, new techniques have been developed to improve the compression rates of traditional algorithms. In particular, source coding techniques based on computational intelligence algorithms have recently been studied. Hence, this paper proposes a new source coding technique for text compression based on two stages: the first stage uses a deep neural network, called the Text Embedding Neural Network, and the second stage uses a Canonical Huffman Code. The deep neural network increases the compression rate by controlling the level of syntax loss allowed in each message through a single adjustable parameter. This combination reduces the size of the transmitted messages by up to 30% relative to using traditional source coding algorithms alone.
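The second stage mentioned above, a Canonical Huffman Code, differs from plain Huffman in that codewords are assigned in a fixed rule from the lengths alone, so only the lengths need to be transmitted. A minimal sketch of the canonical assignment (the length vector is illustrative; the paper's neural first stage is not reproduced here):

```python
def canonical_codes(lengths):
    """Assign canonical Huffman codewords from codeword lengths alone."""
    # Symbols are ordered by (length, symbol index); codewords are
    # consecutive integers, left-shifted whenever the length increases.
    order = sorted(range(len(lengths)), key=lambda s: (lengths[s], s))
    codes = {}
    code = 0
    prev_len = 0
    for s in order:
        code <<= (lengths[s] - prev_len)   # extend to the new length
        codes[s] = format(code, f"0{lengths[s]}b")
        code += 1
        prev_len = lengths[s]
    return codes

print(canonical_codes([2, 1, 3, 3]))  # a valid prefix code, e.g. symbol 1 -> "0"
```

Because the decoder can rebuild the same table from the lengths, the code description overhead is small, which matters when many short messages are sent.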
A geometric formulation is presented for source coding and vector quantizer design. Motivated by the asymptotic equipartition principle, the authors consider two broad classes of source codes and vector quantizers: elliptical codes and quantizers based on the Gaussian density function, and pyramid codes and quantizers based on the Laplacian density function. Elliptical and weighted pyramid vector quantizers are developed by selecting codewords as points in a lattice that lie on (or near) a specified ellipse or pyramid. The combination of geometric structure and lattice basis allows simple encoding and decoding algorithms.
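For pyramid codes of the kind described above, the codebook consists of integer lattice points on an L1 "pyramid" surface, and its size fixes the code rate. A common counting recursion from the pyramid-VQ literature is sketched below (an illustrative aid, not the authors' weighted-pyramid construction):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def pyramid_points(dim, radius):
    """Count integer lattice points x with sum(|x_i|) == radius in dim dimensions."""
    if radius == 0:
        return 1          # only the origin
    if dim == 1:
        return 2          # +radius and -radius
    # Split on the last coordinate being 0, +/-nonzero boundary, or interior:
    return (pyramid_points(dim - 1, radius)
            + pyramid_points(dim - 1, radius - 1)
            + pyramid_points(dim, radius - 1))

# Rate of a pyramid code is log2 of the codebook size.
n = pyramid_points(2, 2)
print(n, math.log2(n))
```

Selecting codewords as lattice points on (or near) such a surface is what gives these quantizers their simple encoding and decoding algorithms.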
We introduce fundamental bounds on achievable cumulative rate distribution functions (CRDF) to characterize a sequential encoding process that ensures lossless or lossy reconstruction subject to an average distortion criterion using a non-causal decoder. The CRDF describes the rate resources spent sequentially to compress the sequence. We also include a security constraint that affects the set of achievable CRDF. The information leakage is defined sequentially based on the mutual information between the source and its compressed representation, as it evolves. To characterize the security constraints, we introduce the concept of cumulative leakage distribution functions (CLF), which determines the allowed information leakage as distributed over encoded sub-blocks. Utilizing tools from majorization theory, we derive necessary and sufficient conditions on the achievable CRDF for a given independent and identically distributed (IID) source and CLF. One primary result of this article is that the concave-hull of the CRDF characterizes the optimal achievable rate distribution.
Renewal theory provides a way to derive fundamental results about source coding and is useful in the analysis and design of many lossless data compression algorithms. We consider two very different applications of renewal theory to source coding. The first results in a variable-length counterpart to the asymptotic equipartition property for unifilar Markov sources. The second application leads to the first analysis of variable-to-fixed-length codes with plurally parsable dictionaries.
Waveform coding of audio signals at low bit rates generally results in coding errors. In high-quality applications these must remain inaudible. The bit rate required to code audio signals without audible errors depends on both the signal's power spectral density function and masking properties of the human ear. It is shown how rate distortion theory and psychoacoustic models of hearing can be used to compute lower bounds to the bit rate of audio signals with inaudible distortion. Subband coding applications to magnetic recording and transmission are discussed in some detail. Performance bounds for this type of subband coding systems are derived.
Probabilistic models of noisy discrete source coding and object classification are studied. For these models, the appropriate minimal information amounts are defined as functions of a given admissible error probability, and strictly decreasing lower bounds to these functions are constructed. The defined functions are similar to the rate-distortion function known in information theory, and the lower bounds to these functions yield a minimal error probability subject to a given value of the processed information amount. The obtained bounds thus serve as bifactor fidelity criteria in source coding and object classification tasks.
A recent paper (Chapeau-Blondeau et al., 2011) has presented a source coding theorem with a lower bound achieved by the Tsallis entropy. The present author would like to point out a relevant reference (Bercher, 2009), which includes some similar material. The author complements Chapeau-Blondeau with some further comments and, in particular, gives a practical scheme for computing the optimum codes. In Chapeau-Blondeau (2011), the authors recalled the result by Campbell (1968), where the classical measure of average length ... is replaced by a β-exponential mean, with β > 0: ... where p_i, i = 1, ..., N are the probabilities of the N symbols to be encoded using an alphabet of size D. In this case, the bound that emerges in the extended source coding theorem is the Rényi entropy of order q, H_q(p), with q = 1/(β + 1), thus conferring on it an operational role: C_β ≥ H_q(p). (ProQuest: ... denotes formulae/symbols omitted.)
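The bound C_β ≥ H_q(p) quoted above can be checked numerically. The sketch below uses Campbell's β-exponential mean length, C_β = (1/β) log_D Σ_i p_i D^(β l_i), and the Rényi entropy H_q(p) = (1/(1-q)) log_D Σ_i p_i^q; the dyadic source and its prefix-code lengths are a hypothetical example, not taken from either paper:

```python
import math

def exp_mean_length(probs, lengths, beta, D=2):
    """Campbell's beta-exponential mean codeword length, base D."""
    return (1.0 / beta) * math.log(
        sum(p * D ** (beta * l) for p, l in zip(probs, lengths)), D)

def renyi_entropy(probs, q, D=2):
    """Renyi entropy of order q, base D."""
    return math.log(sum(p ** q for p in probs), D) / (1.0 - q)

probs = [0.5, 0.25, 0.125, 0.125]  # hypothetical dyadic source
lengths = [1, 2, 3, 3]             # a valid binary prefix code for it
beta = 1.0
q = 1.0 / (beta + 1.0)             # the order relation from the theorem
C = exp_mean_length(probs, lengths, beta)
H = renyi_entropy(probs, q)
print(C, H)                        # C is never below H, per the bound
```

As β → 0 the exponential mean reduces to the ordinary average length and H_q tends to the Shannon entropy, recovering the classical source coding theorem.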