A probabilistic algorithm is used to construct large constraint length trellis codes at high spectral efficiencies for use with sequential decoding. Linear trellis codes for two- and four-dimensional constellations with constraint lengths up to 19 are obtained. These codes achieve 180-degree rotational invariance. To achieve full 90-degree rotational invariance, nonlinear trellis codes for four-dimensional constellations with constraint lengths up to 19 are obtained. In both cases it is shown that the channel cutoff rate bound can be achieved using constraint lengths between 16 and 19 with sequential decoding at a bit-error rate of 10^-5 to 10^-6, and that real coding gains of 4.9-5.8 dB can be achieved over uncoded systems with the same spectral efficiency.
Sequential decoding of the channel code in a trellis-coded-modulation system with trellis shaping can be used to reduce system complexity and to achieve high coding gain with large constraint-length codes. It is shown that almost all of the shaping gain achievable when Viterbi decoding is used for the channel code can also be achieved when sequential decoding is used. It is also shown that the real shaping gain is a function of both the SNR and the spectral efficiency.
The performance of sequential decoding of long constraint length convolutional codes is evaluated for Rayleigh fading channels. Sequential decoding is not practical below a certain theoretical signal-to-noise ratio, and these theoretical limits are calculated for a number of modulation methods and code rates. As an example, with BPSK modulation, soft decisions, and code rate 1/2, the theoretical signal-to-noise ratio per information bit is 5.7 dB. Above this limit the bit error rate can be made arbitrarily small by increasing the constraint length at no significant complexity cost. Furthermore, it is shown that, with carefully chosen quantization steps, 8-level uniform quantization gives a negligible loss for sequential decoding on a Rayleigh fading channel as well. Simulation results using 8-level quantization correspond well with the theoretical performance bounds. The performance on a correlated channel with finite interleaving has also been obtained: with an interleaver depth of 50 x 50 and a normalized Doppler frequency of 0.01, the loss is only 0.5 dB relative to perfect interleaving. Finally, bit error rate results show this scheme to compete well with turbo codes.
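The 8-level soft-decision quantization described above can be sketched as follows. This is a minimal illustration only: the quantizer step size, the CSI weighting of the matched-filter output, and all channel parameters here are assumptions for the example, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize8(y, step):
    # 8-level (3-bit) uniform quantizer: decision thresholds at
    # -3*step, -2*step, ..., +3*step; outputs are indices 0..7.
    return np.clip(np.floor(y / step), -4, 3).astype(int) + 4

# BPSK over a flat Rayleigh fading channel with perfect CSI (illustrative):
n = 1000
bits = rng.integers(0, 2, n)
x = 1 - 2 * bits                                # map 0 -> +1, 1 -> -1
a = rng.rayleigh(scale=np.sqrt(0.5), size=n)    # fading amplitude, E[a^2] = 1
y = a * x + rng.normal(0.0, 0.5, n)             # received matched-filter output
q = quantize8(a * y, step=0.3)                  # CSI-weighted 3-bit soft decisions
```

The 3-bit indices in `q` are what a sequential decoder's branch-metric table would be indexed by; choosing `step` to match the operating SNR is what the abstract means by "carefully chosen quantization steps".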
A new analysis of sequential decoding is presented that is based on the requirement that a set probability of error P(e) be achieved. The error criterion implies a bounded tree or trellis search region; the shape of this region is calculated for the case of a binary symmetric channel with crossover probability p and random tree codes of rate R. Since the search region is finite at all combinations of p and R below capacity, there is no cutoff rate phenomenon for any P(e) > 0. We calculate the decoder delay (search depth), path storage size, and the number of algorithm steps for several methods of tree search. These include searches without backtracking and backtracking searches that are depth-first and metric-first. The search depth of the non-backtracking decoders satisfies the Gallager reliability exponent for block codes. In average paths searched, the backtracking decoders are much more efficient, but all types require the same peak storage allocation. Comparisons are made to well-known algorithms; in order of efficiency from best to worst these are Fano-like algorithms, the stack algorithm, algorithms like the M-algorithm, and the Viterbi algorithm.
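The stack algorithm compared above can be sketched in a few lines: a minimal metric-first sequential decoder for a toy rate-1/2 convolutional code over a BSC, using the classical Fano metric. The code parameters (constraint length 3, generators 7 and 5 octal, crossover probability 0.05) are illustrative assumptions, not taken from the paper.

```python
import heapq
from math import log2

# Toy rate-1/2 convolutional encoder, constraint length 3,
# generator polynomials (7, 5) octal (assumed for illustration).
G = (0b111, 0b101)

def encode(bits):
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out += [bin(state & g).count("1") & 1 for g in G]
    return out

def branch_metric(rx, tx, p, R):
    # Fano metric on a BSC with crossover probability p:
    # log2 P(r|t) - log2 P(r) - R per coded bit, with P(r) = 1/2.
    return sum(log2(2 * (1 - p)) - R if r == t else log2(2 * p) - R
               for r, t in zip(rx, tx))

def stack_decode(received, n_info, p=0.05, R=0.5):
    # Stack (Zigangirov-Jelinek) algorithm: repeatedly pop and extend
    # the partial path with the largest accumulated Fano metric.
    heap = [(0.0, 0, ())]                   # (-metric, depth, info bits)
    while heap:
        neg_m, depth, bits = heapq.heappop(heap)
        if depth == n_info:                 # first full-length path popped
            return list(bits)
        for b in (0, 1):
            path = bits + (b,)
            tx = encode(path)[2 * depth:]   # coded bits of the new branch
            rx = received[2 * depth:2 * depth + 2]
            m = -neg_m + branch_metric(rx, tx, p, R)
            heapq.heappush(heap, (-m, depth + 1, path))

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg)
rx[2] ^= 1                                  # one channel error
decoded = stack_decode(rx, len(msg))
```

Because wrong paths accumulate the large mismatch penalty log2(2p) - R, their metrics decay quickly and the decoder rarely extends them, which is exactly the backtracking efficiency the paper quantifies.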
Sequential decoding, commonly applied to substitution channels, is a sub-optimal alternative to Viterbi decoding with significantly reduced memory costs. This work describes and analyzes a sequential decoder for convolutional codes over channels prone to insertion, deletion, and substitution errors. Our decoder expands the code trellis with a new channel-state variable, called the drift state, as proposed by Davey and MacKay. A suitable decoding metric on that trellis for sequential decoding is derived, generalizing the original Fano metric. The decoder is also extended to facilitate the simultaneous decoding of multiple received sequences that arise from a single transmitted sequence. In low-noise environments, our decoding approach reduces the decoding complexity by multiple orders of magnitude compared to Viterbi's algorithm, albeit at slightly higher bit error rates. An analytical method to determine the computational cutoff rate is also suggested. This analysis is supported by numerical evaluations of bit error rates and computational complexity, compared against optimal Viterbi decoding.
Sequential decoding of short-length binary codes for the additive white Gaussian noise channel is considered. A variant of the variable-bias-term (VBT) metric is introduced, producing useful trade-offs between performance and computational complexity. Comparisons are made with tail-biting convolutional codes decoded with the wrap-around Viterbi algorithm (WAVA) and with polar codes under successive-cancellation list (SCL) decoding. It is found that sequential decoding with the improved VBT metric has a better performance-complexity trade-off than tail-biting codes under WAVA decoding (except at low complexities) but a worse performance-complexity trade-off than polar codes under SCL decoding (except at high complexities).
Decoding algorithms are investigated in which unpruned codeword trees are generated from an ordered list of parity checks. The order is computed from the received message, and low-density parity-check codes are used to help control the growth of the tree. Simulation results are given for the binary erasure channel; they suggest that for small erasure probability the method is computationally feasible at rates above the computational cutoff rate.
The problem of efficient decoding of polar codes is considered. A low-complexity sequential soft-decision decoding algorithm is proposed. It is based on the successive cancellation approach and employs most-likely-codeword probability estimates to select which path within the code tree to extend.
Despite their excellent error-correction performance, the amount of computation required for sequential decoding of polarization-adjusted convolutional (PAC) codes is random. In sequential decoding of convolutional codes, the cutoff rate separates the rates for which the average computational complexity of decoding is finite from those for which it is infinite. In this paper, benefiting from polarization and guessing techniques, we prove that the computation required in sequential decoding of pre-transformed polar codes polarizes, and that this polarization determines which bit positions within the rate profile may cause high computational complexity. Based on this, we propose a technique for taming the Reed-Muller (RM) rate-profile construction, and the performance results demonstrate that PAC codes with the tamed-RM rate-profile construction can achieve the theoretical error-correction bounds while requiring significantly lower computational complexity than the RM rate-profile construction.
Although sequential decoding of convolutional codes gives a very small decoding error probability, the overall reliability is limited by the probability P_G of deficient decoding, the term introduced by Jelinek to refer to decoding failures caused mainly by buffer overflow. The number of computations in sequential decoding has a Pareto distribution, and it is this heavy-tailed distribution that characterizes P_G. Heavy-tailed distributions appear in many fields, and buffer overflow is a typical example of behavior in which the heavy tail plays an important role. In this paper, we give a new bound on the tail probability of the heavy-tailed distribution and, using the bound, prove the long-standing conjecture on P_G, namely that P_G ≈ constant × 1/(σ^ρ N^(ρ-1)) for a large speed factor σ of the decoder and a large receive buffer size N, whenever the coding rate R and ρ satisfy E(ρ) = ρR for 0 ≤ ρ ≤ 1.
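The Pareto computation distribution at the heart of this result is easy to illustrate numerically: sampling a variable C with tail P(C > N) = N^(-ρ) via the inverse CDF and checking the empirical tail frequency. The unit scale and the choice ρ = 1 are assumptions for the illustration, not parameters from the paper.

```python
import random

random.seed(1)
rho = 1.0
# Inverse-CDF sampling of a unit-scale Pareto variable:
# if U ~ Uniform(0, 1), then C = U**(-1/rho) satisfies
# P(C > N) = N**(-rho) for N >= 1.
samples = [random.random() ** (-1.0 / rho) for _ in range(100_000)]

N = 10.0
tail = sum(c > N for c in samples) / len(samples)   # should be near N**-rho
```

With ρ ≤ 1 this distribution has infinite mean, which is why buffer overflow, rather than decoding error, dominates the reliability of a sequential decoder operated near the cutoff rate.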