The autoencoder concept has fostered the reinterpretation and the design of modern communication systems. It consists of an encoder, a channel, and a decoder block that modify their internal neural structure in an end-to-end learning fashion. However, the current approach to training an autoencoder relies on the cross-entropy loss function. This approach can be prone to overfitting and often fails to learn an optimal system and signal representation (code). In addition, little is known about the autoencoder's ability to design channel capacity-approaching codes, i.e., codes that maximize the input-output mutual information under a certain power constraint. The task is even more formidable for an unknown channel, whose capacity is unknown and must therefore be learned. In this paper, we address the challenge of designing capacity-approaching codes by incorporating the presence of the communication channel into a novel loss function for autoencoder training. In particular, we exploit the mutual information between the transmitted and received signals as a regularization term in the cross-entropy loss function, with the aim of controlling the amount of information stored. By jointly maximizing the mutual information and minimizing the cross-entropy, we propose a theoretical approach that a) computes an estimate of the channel capacity and b) constructs an optimal coded signal approaching it. Theoretical considerations are made on the choice of the cost function and on the ability of the proposed architecture to mitigate overfitting. Simulation results offer initial evidence of the potential of the proposed method.
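To make the loss concrete, here is a minimal toy sketch, not the paper's estimator, that combines cross-entropy with a plug-in mutual-information term over discrete transmitted/received symbols; the function names and the weighting parameter `lam` are illustrative assumptions:

```python
import math
from collections import Counter

def cross_entropy(labels, preds):
    # average -log q(true label) over the batch; preds are dicts label -> prob
    return -sum(math.log(q[y]) for y, q in zip(labels, preds)) / len(labels)

def mutual_information(xs, ys):
    # plug-in mutual information (in nats) between paired discrete samples
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def regularized_loss(labels, preds, tx, rx, lam=0.1):
    # cross-entropy minus a mutual-information reward: minimizing this loss
    # jointly minimizes CE and maximizes the MI between channel input tx
    # and channel output rx
    return cross_entropy(labels, preds) - lam * mutual_information(tx, rx)
```

In a real system the MI term would be estimated over continuous channel outputs (e.g., with a variational bound) rather than with this discrete plug-in estimate.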
We support some evidence that a long additive MDS code over a finite field must be equivalent to a linear code. More precisely, let C be an F_q-linear (n, q^(hk), n - k + 1)_(q^h) MDS code over F_(q^h). If k = 3, h ∈ {2, 3}, n > max{q^(h-1), hq - 1} + 3, and C has three coordinates from which its projections are equivalent to F_(q^h)-linear codes, we prove that C itself is equivalent to an F_(q^h)-linear code. If k > 3, n > q + k, and there are two disjoint subsets of coordinates whose combined size is at most k - 2 from which the projections of C are equivalent to F_(q^h)-linear codes, we prove that C is equivalent to a code which is linear over a larger field than F_q. (C) 2023 Elsevier Inc. All rights reserved.
Studies investigating neural information processing often implicitly ask both which processing strategy out of several alternatives is used and how this strategy is implemented in neural dynamics. A prime example is the study of predictive coding. Such studies often ask whether confirmed predictions about inputs, or prediction errors between internal predictions and inputs, are passed on in a hierarchical neural system, while at the same time looking for the neural correlates of coding for errors and predictions. If we do not know exactly what a neural system predicts at any given moment, this results in a circular analysis, as has been correctly criticized. To circumvent such circular analysis, we propose to express information processing strategies (such as predictive coding) by local information-theoretic quantities, such that they can be estimated directly from neural data. We demonstrate our approach by investigating two opposing accounts of predictive-coding-like processing strategies, where we quantify the building blocks of predictive coding, namely predictability of inputs and transfer of information, by local active information storage and local transfer entropy. We define testable hypotheses on the relationship between both quantities, allowing us to identify which of the assumed strategies was used. We demonstrate our approach on spiking data collected from the retinogeniculate synapse of the cat (N = 16). Applying our local information dynamics framework, we are able to show that the synapse codes for predictable rather than surprising input. To support our findings, we estimate quantities from the partial information decomposition framework, which allow us to differentiate whether the transferred information is primarily bottom-up sensory input or information transferred conditionally on the current state of the synapse. Supporting our local information-theoretic results, we find that the synapse preferentially transfers bottom-up information.
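As an illustration of the kind of quantity involved, the following toy sketch (history length 1, plug-in probability estimates, hypothetical function name, not the authors' estimator) computes local transfer entropy values for a pair of binary spike trains:

```python
import math
import random
from collections import Counter

def local_transfer_entropy(src, tgt):
    # plug-in local transfer entropy (history length 1), in bits:
    # lte(t) = log2[ p(tgt[t+1] | tgt[t], src[t]) / p(tgt[t+1] | tgt[t]) ]
    n = len(tgt) - 1
    c_joint, c_cond, c_self, c_past = Counter(), Counter(), Counter(), Counter()
    for t in range(n):
        c_joint[(tgt[t], src[t], tgt[t + 1])] += 1
        c_cond[(tgt[t], src[t])] += 1
        c_self[(tgt[t], tgt[t + 1])] += 1
        c_past[tgt[t]] += 1
    return [
        math.log2(
            (c_joint[(tgt[t], src[t], tgt[t + 1])] / c_cond[(tgt[t], src[t])])
            / (c_self[(tgt[t], tgt[t + 1])] / c_past[tgt[t]])
        )
        for t in range(n)
    ]

# usage: a target that copies its source with a one-step delay receives
# roughly one bit of transferred information per step
random.seed(0)
src = [random.randint(0, 1) for _ in range(1000)]
tgt = [0] + src[:-1]          # tgt[t + 1] == src[t]
lte = local_transfer_entropy(src, tgt)
```

Averaging the local values recovers the usual (average) transfer entropy; the local values themselves can be negative for individual, misleading samples, which is exactly what the local framework exploits.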
ISBN (print): 9781538682098
We present a deterministic algorithm for computing the entire weight distribution of polar codes. As the first step, we derive an efficient recursive procedure to compute the weight distributions that arise in successive cancellation decoding of polar codes along any decoding path. This solves the open problem recently posed by Polyanskaya, Davletshin, and Polyanskii. Using this recursive procedure, we can compute the entire weight distribution of certain polar cosets in time O(n^2). Any polar code can be represented as a disjoint union of such cosets; moreover, this representation extends to polar codes with dynamically frozen bits. This implies that our methods can also be used to compute the weight distribution of polar codes with CRC precoding, of polarization-adjusted convolutional (PAC) codes, and, in fact, of general linear codes. However, the number of polar cosets in such a representation scales exponentially with a parameter introduced herein, which we call the mixing factor. To reduce the exponential complexity of our algorithm, we make use of the fact that polar codes have a large automorphism group, which includes the lower-triangular affine group LTA(m, 2). We prove that LTA(m, 2) acts transitively on certain subsets of polar codes, thereby drastically reducing the number of polar cosets we need to evaluate. This complexity reduction makes it possible to compute the weight distribution of any polar code of length up to n = 128.
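The paper's recursive procedure is far more efficient, but for intuition on what a weight distribution is, a brute-force sketch that enumerates all messages of a tiny polar code suffices (the frozen set {0} below is a hypothetical example, not taken from the paper):

```python
import itertools
from collections import Counter

def kron(a, b):
    # Kronecker product of binary matrices given as lists of rows
    return [[x & y for x in ra for y in rb] for ra in a for rb in b]

def weight_distribution(gen):
    # brute-force weight enumerator: A_w = number of codewords of weight w
    k, n = len(gen), len(gen[0])
    dist = Counter()
    for msg in itertools.product((0, 1), repeat=k):
        cw = [0] * n
        for bit, row in zip(msg, gen):
            if bit:
                cw = [c ^ r for c, r in zip(cw, row)]
        dist[sum(cw)] += 1
    return dict(dist)

# polar transform for n = 4 from the kernel F = [[1, 0], [1, 1]]
F = [[1, 0], [1, 1]]
G4 = kron(F, F)
# hypothetical frozen set {0}: the generator rows are rows 1, 2, 3 of G4
dist = weight_distribution([G4[i] for i in (1, 2, 3)])
```

This enumeration costs O(2^k) and is only viable for toy lengths, which is precisely why a deterministic polynomial-style procedure per coset is significant.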
ISBN (print): 9781538682098
Successive-cancellation list (SCL) decoding is a widely used and studied decoding algorithm for polar codes. For short blocklengths, empirical evidence shows that SCL decoding with moderate list sizes (say, L <= 32) closely matches the performance of maximum-likelihood (ML) decoding. Hashemi et al. proved that on the binary erasure channel (BEC), SCL decoding actually coincides with ML decoding for list sizes L >= 2^gamma, where gamma is a new parameter we call the mixing factor. Loosely speaking, the mixing factor counts the number of information bits mixed in among the frozen bits; more precisely, gamma = |{i in F^c : i <= max{F}}|, where F ⊆ [n] denotes the set of frozen indices. Herein, we extend the aforementioned result of Hashemi et al. from the BEC to arbitrary binary-input memoryless symmetric channels. Our proof is based on capturing all 2^gamma decoding paths that correspond to the gamma information bits appearing before the last frozen bit, and then finding the most likely extension of each of these paths efficiently using a nearest coset decoding algorithm introduced herein. Furthermore, we present a hybrid successive-cancellation list (H-SCL) decoding algorithm, which is a hybrid between conventional SCL decoding and nearest coset decoding. We believe that the hybrid algorithm can outperform the conventional SCL decoder with lower decoding complexity.
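The mixing factor itself is straightforward to compute from the frozen set; a small sketch following the definition above (0-indexed coordinates, hypothetical function name):

```python
def mixing_factor(frozen, n):
    # gamma = |{ i in complement(frozen) : i <= max(frozen) }|, i.e. the
    # number of information indices preceding the last frozen index
    if not frozen:
        return 0
    last = max(frozen)
    return sum(1 for i in range(n) if i not in frozen and i <= last)
```

For example, with frozen set {0, 1, 2, 5} and n = 8, information indices 3 and 4 precede the last frozen index, so gamma = 2 and, on the BEC, SCL with L >= 2^2 = 4 already achieves ML performance by the result above.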
ISBN (print): 9783981926354
Since Finite State Machines (FSMs) regulate the overall operations in the majority of digital systems, the security of an entire system can be jeopardized if the FSM is vulnerable to physical attacks. By injecting faults into an FSM, an attacker can gain unauthorized access to sensitive states, resulting in information leakage and privilege escalation. One of the most powerful fault injection techniques is laser-based fault injection (LFI), which enables an adversary to alter the states of individual flip-flops. While standard error correction/detection techniques have been used to protect FSMs from such fault attacks, their significant overhead makes them unattractive to designers. To keep the overhead minimal, we propose PATRON, a novel FSM encoding scheme based on decision diagrams that utilizes the don't-care states of the FSM. We demonstrate that PATRON outperforms conventional encoding schemes in terms of both security and scalability on popular benchmarks. Finally, we introduce a vulnerability metric to aid the security analysis, which precisely captures the susceptibility of FSM designs.
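PATRON's actual metric is not reproduced here; as a purely hypothetical illustration of what such a vulnerability analysis can look for, this sketch flags state encodings that a single flip-flop fault would carry into a protected state:

```python
def single_flip_reachable(enc, protected):
    # flag pairs (s, p): flipping exactly ONE flip-flop in the encoding of
    # state s lands in the encoding of protected state p
    return [(s, p)
            for s, code in enc.items()
            for p in protected
            if s != p and bin(code ^ enc[p]).count("1") == 1]

# toy 2-flip-flop FSM: "load" is one laser-induced bit flip away from "secure"
enc = {"idle": 0b00, "load": 0b01, "secure": 0b11}
risky = single_flip_reachable(enc, {"secure"})
```

An encoding scheme would then try to maximize the Hamming distance between ordinary and sensitive state encodings, using don't-care states as extra room.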
Because of recent applications to distributed storage systems, researchers have introduced a new class of block codes, namely locally recoverable (LRC) codes. LRC codes can recover information from erasure(s) by accessing a small number of erasure-free code symbols, increasing the efficiency of repair processes in large-scale distributed storage systems. In this context, Tamo and Barg made a breakthrough by introducing the notion of good polynomials. Constructing good polynomials for locally recoverable codes achieving the Singleton-type bound (called optimal codes) is challenging and has attracted significant attention in recent years. This article aims to increase our knowledge of good polynomials for optimal LRC codes. Using tools from algebraic function fields and Galois theory, we continue investigating these polynomials, developing the Galois-theoretic approach initiated by Micheli in 2019. Specifically, we push further the study of a crucial parameter G(f) (of a given polynomial f), which measures how "good" a polynomial is in the sense of LRC codes. We provide some characterizations of polynomials with minimal Galois groups and prove some properties of finite fields where polynomials exist with a specific size of Galois group. We also present some explicit shapes of polynomials with small Galois groups. For some particular polynomials f, we give the exact formula for G(f). (C) 2022 Elsevier Inc. All rights reserved.
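For intuition on what "good" means here: in the classic Tamo-Barg construction, f(x) = x^r is good over GF(q) whenever r divides q - 1, because f is constant on each multiplicative coset of the subgroup of order r. A quick self-contained check for r = 3, q = 13 (an illustrative example, not one of the article's new constructions):

```python
# f(x) = x^3 over GF(13): H = {1, 3, 9} is the multiplicative subgroup of
# order 3 (since 3 divides 13 - 1), and f must be constant on each coset a*H
q, r, H = 13, 3, {1, 3, 9}
cosets = {frozenset(a * h % q for h in H) for a in range(1, q)}
values_per_coset = [{pow(x, r, q) for x in coset} for coset in cosets]
```

Each coset serves as a local repair group: a symbol evaluated on a coset can be recovered from the other r - 1 symbols of the same coset.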
ISBN (print): 9781728176055
We propose a novel reinforcement learning approach to learning lattice codes for MIMO channels. We use the block error rate as a loss function to be minimized and compare the learnt lattices with those obtained from algebraic design methods for different SNR ranges. Our results indicate that our learnt lattices achieve close to optimal performance in some cases.
ISBN (print): 9781728176055
One of the most common operations in signal processing is matrix multiplication. However, it presents a major computational bottleneck when the matrix dimension is high, as can occur for large data size or feature dimension. Two different approaches to overcoming this bottleneck are: 1) low rank approximation of the matrix product; and 2) distributed computation. We propose a scheme that combines these two approaches. To enable distributed low rank approximation, we generalize the approximate matrix CR-multiplication to accommodate weighted block sampling, and we introduce a weighted coded matrix multiplication method. This results in novel approximate weighted CR coded matrix multiplication schemes, which achieve improved performance for distributed matrix multiplication and are robust to stragglers.
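For the unweighted starting point, classic CR-multiplication approximates a product by sampling column-row outer products with probability proportional to their norms; a pure-Python sketch of that baseline (the paper's weighted, coded generalization is not reproduced here):

```python
import random

def cr_multiply(A, B, c):
    # approximate A @ B as a sum of c sampled outer products: column t of A
    # times row t of B, drawn with probability p[t] proportional to
    # ||A[:, t]|| * ||B[t, :]||, each sample rescaled by 1 / (c * p[t])
    n = len(A[0])                     # inner (summation) dimension
    norms = [(sum(A[i][t] ** 2 for i in range(len(A))) ** 0.5)
             * (sum(x * x for x in B[t]) ** 0.5) for t in range(n)]
    total = sum(norms)
    p = [w / total for w in norms]
    m, k = len(A), len(B[0])
    out = [[0.0] * k for _ in range(m)]
    for _ in range(c):
        t = random.choices(range(n), weights=p)[0]
        scale = 1.0 / (c * p[t])
        for i in range(m):
            for j in range(k):
                out[i][j] += scale * A[i][t] * B[t][j]
    return out
```

The rescaling by 1/(c * p[t]) makes the estimator unbiased; distributing the sampled outer products across workers, with coding to absorb stragglers, is the direction the paper pursues.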
We consider fault-tolerant boolean formulas in which the output of a faulty gate is short-circuited to one of the gate's inputs. A recent result by Kalai et al. [FOCS 2012] converts any boolean formula into a resilient formula of polynomial size that works correctly if less than 1/6 of the gates (on every input-to-output path) are faulty. We improve the result of Kalai et al., and show how to efficiently fortify any boolean formula against a fraction of 1/5 of short-circuit gates per path, with only a polynomial blowup in size. We additionally show that it is impossible to obtain formulas with higher resilience and sub-exponential growth in size. Towards our results, we consider interactive coding schemes when noiseless feedback is present; these produce resilient boolean formulas via a Karchmer-Wigderson relation. We develop a coding scheme that resists corruptions in up to a fraction of 1/5 of the transmissions in each direction of the interactive channel. We further show that such a level of noise is maximal for coding schemes whose communication blowup is sub-exponential. Our coding scheme takes a surprising inspiration from Blockchain technology.