We derive bounds for optimal rate allocation between source and channel coding for linear channel codes that meet the Gilbert-Varshamov or Tsfasman-Vladut-Zink bounds. Formulas giving the high-resolution vector quantizer distortion of these systems are also derived. In addition, we give bounds on how far below channel capacity the transmission rate should be for a given delay constraint. The bounds obtained depend on the relationship between the channel code rate and the relative minimum distance guaranteed by the Gilbert-Varshamov bound, and do not require sophisticated decoding beyond the error correction limit. We demonstrate that the end-to-end mean-squared error decays exponentially fast as a function of the overall transmission rate, which need not be the case for certain well-known structured codes such as Hamming codes.
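The dependence on the Gilbert-Varshamov bound can be made concrete: for binary codes the bound guarantees a rate of at least 1 - h(delta) at relative minimum distance delta, where h is the binary entropy function. A minimal sketch of that relationship (my own illustration, not the paper's derivation):

```python
import math

def h2(p: float) -> float:
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def gv_rate(delta: float) -> float:
    """Rate guaranteed by the Gilbert-Varshamov bound for a binary code
    of relative minimum distance delta (0 <= delta < 1/2)."""
    return 1.0 - h2(delta)
```

For example, guaranteeing a relative distance of 0.11 costs roughly half the rate: `gv_rate(0.11)` is approximately 0.5.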
We consider the problem of recovering an N-dimensional sparse vector x from its linear transformation y = Dx of M (<N) dimensions. Minimization of the l(1)-norm of x under the constraint y = Dx is a standard approa...
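The l(1)-norm minimization under the constraint y = Dx can be posed as a linear program by introducing auxiliary variables t with |x_i| <= t_i. A small sketch using `scipy.optimize.linprog` (illustrative problem sizes and random data, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(D: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Solve min ||x||_1 subject to D x = y as an LP in [x; t]."""
    M, N = D.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])   # minimise sum(t)
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])            # encodes |x_i| <= t_i
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([D, np.zeros((M, N))])         # encodes D x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N]

# demo: recover a 2-sparse vector from M = 12 < N = 20 measurements
rng = np.random.default_rng(0)
N, M = 20, 12
x_true = np.zeros(N)
x_true[:2] = [1.5, -2.0]
D = rng.standard_normal((M, N))
x_hat = l1_recover(D, D @ x_true)
```

The LP solution satisfies the measurement constraint exactly and has l(1)-norm no larger than that of the true sparse vector.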
We present rigorous results on some open questions on NSRPS, the non-sequential recursive pair substitution method. In particular, starting from the action of NSRPS on finite strings we define a corresponding natural action on measures, and we prove that the iterated measure becomes asymptotically Markov. This certifies the effectiveness of NSRPS as a tool for data compression and entropy estimation.
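A single NSRPS step on a finite string — replacing the most frequent adjacent pair by a fresh symbol — can be sketched as follows (a minimal illustration; the tie-breaking rule and symbol naming are my own choices):

```python
from collections import Counter

def nsrps_step(seq: list, new_symbol) -> list:
    """One pair-substitution step: find the most frequent adjacent pair
    and replace its non-overlapping occurrences with new_symbol."""
    pairs = Counter(zip(seq, seq[1:]))
    if not pairs:
        return list(seq)
    pair = max(pairs, key=pairs.get)   # ties broken by first appearance
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(new_symbol)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out
```

Iterating this map (with a fresh symbol each round) is exactly the process whose induced action on measures the abstract refers to. For example, `nsrps_step(list("abababa"), "X")` substitutes the pair ('a', 'b').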
Code division multiple access (CDMA) in which the spreading code assignment to users contains a random element has recently become a cornerstone of CDMA research. The random element in the construction is particularly attractive as it provides robustness and flexibility in utilizing multi-access channels, whilst not making significant sacrifices in terms of transmission power. Random codes are generated from some ensemble; here we consider the possibility of combining two standard paradigms, sparsely and densely spread codes, in a single composite code ensemble. The composite code analysis includes a replica symmetric calculation of performance in the large system limit, and investigation of finite systems through a composite belief propagation algorithm. A variety of codes are examined with a focus on the high multi-access interference regime. We demonstrate scenarios, both in the large size limit and for finite systems, in which the composite code has typical performance exceeding that of sparse and dense codes at equivalent signal to noise ratio.
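The two spreading paradigms, and a composite mixture of them, can be sketched for a synchronous AWGN multi-access channel (the mixing weight `gamma`, sparsity density, and normalizations below are my own illustrative assumptions, not the ensemble definition used in the paper's analysis):

```python
import numpy as np

def spreading_matrix(N, K, kind, rng, density=0.1):
    """N-chip spreading sequences for K users, unit power per user.
    'dense': all chips are +-1/sqrt(N); 'sparse': few nonzero chips."""
    if kind == "dense":
        return rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)
    S = np.zeros((N, K))
    nnz = max(1, int(density * N))
    for k in range(K):
        rows = rng.choice(N, size=nnz, replace=False)
        S[rows, k] = rng.choice([-1.0, 1.0], size=nnz) / np.sqrt(nnz)
    return S

def composite_received(b, S_sparse, S_dense, gamma, sigma, rng):
    """Composite ensemble: each user's code mixes a sparse and a dense
    part with weight gamma; AWGN channel output y = S b + sigma * noise."""
    S = np.sqrt(gamma) * S_sparse + np.sqrt(1.0 - gamma) * S_dense
    return S @ b + sigma * rng.standard_normal(S.shape[0])
```

Setting `gamma` to 0 or 1 recovers the pure dense and pure sparse ensembles as special cases.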
We discuss a strategy of sparse approximation that is based on the use of an overcomplete basis, and evaluate its performance when a random matrix is used as this basis. A small combination of basis vectors is chosen from a given overcomplete basis, according to a given compression rate, such that they compactly represent the target data with as small a distortion as possible. As a selection method, we study the l(0)- and l(1)-based methods, which employ the exhaustive search and l(1)-norm regularization techniques, respectively. The performance is assessed in terms of the trade-off relation between the distortion and the compression rate. First, we evaluate the performance analytically in the case that the methods are carried out ideally, using methods of statistical mechanics. The analytical result is then confirmed by performing numerical experiments on finite size systems, and extrapolating the results to the infinite-size limit. Our result clarifies the fact that the l(0)-based method greatly outperforms the l(1)-based one. An interesting outcome of our analysis is that any small value of distortion is achievable for any fixed compression rate r in the large-size limit of the overcomplete basis, for both the l(0)- and l(1)-based methods. The difference between these two methods is manifested in the size of the overcomplete basis that is required in order to achieve the desired value for the distortion. As the desired distortion decreases, the required size grows polynomially and exponentially for the l(0)- and l(1)-based methods, respectively. Second, we examine the practical performances of two well-known algorithms, orthogonal matching pursuit and approximate message passing, when they are used to execute the l(0)- and l(1)-based methods, respectively. Our examination shows that orthogonal matching pursuit achieves a much better performance than the exact execution of the l(1)-based method, as well as approximate message passing. However, regardin...
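Orthogonal matching pursuit, one of the two algorithms examined, admits a compact reference implementation (a generic textbook version, assuming roughly unit-norm columns; not necessarily the exact variant benchmarked in the paper):

```python
import numpy as np

def omp(D: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Orthogonal matching pursuit: greedily select k columns of D by
    correlation with the residual, re-fitting the coefficients by least
    squares after every selection."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

The least-squares re-fit at each step is what distinguishes OMP from plain matching pursuit: the residual stays orthogonal to all previously selected columns.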
Compressed sensing is a framework that makes it possible to recover an N-dimensional sparse vector x ∈ R^N from its linear transformation y ∈ R^M of lower dimensionality M < N. A scheme further reducing the data size of the compressed expression by using only the sign of each entry of y to recover x was recently proposed. This is often termed 1-bit compressed sensing. Here, we analyze the typical performance of an l(1)-norm-based signal recovery scheme for 1-bit compressed sensing using statistical mechanics methods. We show that the signal recovery performance predicted by the replica method under the replica symmetric ansatz, which turns out to be locally unstable for modes breaking the replica symmetry, is in good agreement with experimental results of an approximate recovery algorithm developed earlier. This suggests that the l(1)-based recovery problem typically has many local optima of similar recovery accuracy, which can be achieved by the approximate algorithm. We also develop another approximate recovery algorithm inspired by the cavity method. Numerical experiments show that when the density of nonzero entries in the original signal is relatively large the new algorithm offers better performance than the abovementioned scheme and does so with a lower computational cost.
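A common approximate recovery scheme for 1-bit measurements y = sign(Dx) is binary iterative hard thresholding (BIHT); the sketch below is this generic algorithm, not the cavity-inspired algorithm developed in the paper (step size, iteration count, and problem sizes are illustrative):

```python
import numpy as np

def biht(D: np.ndarray, y_sign: np.ndarray, k: int,
         tau: float = 0.01, iters: int = 200) -> np.ndarray:
    """Binary iterative hard thresholding: take a gradient step toward
    sign consistency, keep the k largest-magnitude entries, repeat.
    Since 1-bit measurements lose amplitude, the output is normalized."""
    M, N = D.shape
    x = np.zeros(N)
    for _ in range(iters):
        x = x + tau * D.T @ (y_sign - np.sign(D @ x))
        idx = np.argsort(np.abs(x))[:N - k]   # indices to zero out
        x[idx] = 0.0
    n = np.linalg.norm(x)
    return x / n if n > 0 else x

# demo: 3-sparse unit-norm signal, 100 one-bit measurements
rng = np.random.default_rng(0)
N, k = 50, 3
x_true = np.zeros(N)
x_true[[3, 17, 40]] = [1.0, -1.0, 0.5]
x_true /= np.linalg.norm(x_true)
D = rng.standard_normal((100, N))
x_hat = biht(D, np.sign(D @ x_true), k)
```

Only the direction of x is recoverable from sign information, which is why both the true signal and the estimate live on the unit sphere.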
A basic task of information processing is information transfer (flow). Here we study a pair of Brownian particles each coupled to a thermal bath at temperatures T_1 and T_2. The information flow in such a system is defined via the time-shifted mutual information. The information flow vanishes at equilibrium, and its efficiency is defined as the ratio of the flow to the total entropy production in the system. For a stationary state the information flows from higher to lower temperatures, and its efficiency is bounded from above by max[T_1, T_2]/|T_1 - T_2|. This upper bound is imposed by the second law and it quantifies the thermodynamic cost for information flow in the present class of systems. It can be reached in the adiabatic situation, where the particles have widely different characteristic times. The efficiency of heat flow, defined as the heat flow over the total amount of dissipated heat, is limited from above by the same factor. There is a complementarity between heat and information flow: the set-up which is most efficient for the former is the least efficient for the latter and vice versa. The above bound for the efficiency can be (transiently) overcome in certain non-stationary situations, but the efficiency is still limited from above. We study yet another measure of information processing (transfer entropy) proposed in the literature. Though this measure does not require any thermodynamic cost, the information flow and transfer entropy are shown to be intimately related for stationary states.
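For jointly Gaussian signals the time-shifted mutual information reduces to -(1/2) ln(1 - rho^2) in terms of the lagged correlation rho, which gives a quick way to probe such a two-bath system numerically (the discretization and all parameters below are my own illustrative choices, not the paper's):

```python
import numpy as np

def gaussian_mi(a: np.ndarray, b: np.ndarray) -> float:
    """Mutual information (nats) between two scalar time series under a
    joint-Gaussian assumption: I = -0.5 * ln(1 - rho^2)."""
    rho = np.corrcoef(a, b)[0, 1]
    return -0.5 * np.log(1.0 - rho ** 2)

def simulate_pair(T1, T2, g=0.5, dt=0.01, steps=20_000, seed=0):
    """Two overdamped, harmonically coupled Brownian particles, each
    attached to its own bath at T1 and T2 (Euler-Maruyama scheme)."""
    rng = np.random.default_rng(seed)
    x1, x2 = np.zeros(steps), np.zeros(steps)
    s1, s2 = np.sqrt(2 * T1 * dt), np.sqrt(2 * T2 * dt)
    for t in range(steps - 1):
        x1[t+1] = x1[t] + dt * (-x1[t] + g * x2[t]) + s1 * rng.standard_normal()
        x2[t+1] = x2[t] + dt * (-x2[t] + g * x1[t]) + s2 * rng.standard_normal()
    return x1, x2

# time-shifted mutual information I(x1(t); x2(t + lag))
x1, x2 = simulate_pair(T1=1.0, T2=2.0)
lag = 10
mi = gaussian_mi(x1[:-lag], x2[lag:])
```

With nonzero coupling the lagged mutual information is well above the finite-sample noise floor, while for independent series it is close to zero.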
We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical-mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics.
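A concrete instance of the quantities the analogy maps (the textbook case of a memoryless unbiased binary source under Hamming distortion, given here only for illustration): the rate-distortion function is

```latex
R(D) = 1 - h(D), \qquad h(D) = -D \log_2 D - (1-D)\log_2(1-D), \qquad 0 \le D \le \tfrac{1}{2},
```

so the quantity playing the role of the contracting force is proportional to its derivative,

```latex
\frac{dR}{dD} = \log_2 \frac{D}{1-D} \le 0,
```

which vanishes at D = 1/2 (no compression constraint) and diverges as D -> 0 (lossless limit).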
The Parity Source Coder is a protocol for data compression which is based on a set of parity checks organized in a sparse random network. We consider here the case of memoryless unbiased binary sources. We show that the theoretical capacity saturates the Shannon limit at large K. We also find that the first corrections to the leading behaviour are exponentially small, with the result that the behaviour at finite K is very close to the optimal one.
In this paper we combine the determinism of chaotic maps with stochastic components to propose a hybrid algorithm for cryptography. This makes the cipher probabilistic in the sense that each plaintext corresponds to many distinct encoded texts, raising the security against known-plaintext attacks. In particular, the proposed cipher allows an efficient encryption even if a low entropy key is chosen. Also, the algorithm can be efficiently used as a 'many-times pad' cipher.
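The probabilistic-cipher idea can be illustrated generically with a logistic-map keystream seeded by the key plus a per-message random nonce; this is a toy sketch of the concept, not the authors' algorithm, and not secure for real use:

```python
import os

def keystream(key: float, nonce: float, n: int, r: float = 3.99) -> list:
    """Chaotic keystream from the logistic map x -> r*x*(1-x); the
    random nonce is the stochastic component that makes repeated
    encryptions of the same plaintext differ."""
    x = (key + nonce) % 1.0 or 0.5   # keep the seed inside (0, 1)
    for _ in range(100):             # discard the transient
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def encrypt(plain: bytes, key: float) -> bytes:
    nonce_bytes = os.urandom(8)      # fresh randomness per message
    nonce = int.from_bytes(nonce_bytes, "big") / 2**64
    ks = keystream(key, nonce, len(plain))
    return nonce_bytes + bytes(p ^ k for p, k in zip(plain, ks))

def decrypt(cipher: bytes, key: float) -> bytes:
    nonce = int.from_bytes(cipher[:8], "big") / 2**64
    ks = keystream(key, nonce, len(cipher) - 8)
    return bytes(c ^ k for c, k in zip(cipher[8:], ks))
```

Because the nonce is drawn fresh each time and shipped with the ciphertext, the same plaintext under the same key yields different ciphertexts, while decryption remains deterministic.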