In this letter, we show that a concatenated zigzag code can be viewed as a low-density parity-check (LDPC) code. Based on the bipartite graph representation of such a parallel-concatenated code, various sum-product-based decoding algorithms are introduced and compared. The results show that the improved versions of the sum-product algorithm exhibit a faster convergence rate while maintaining the essentially parallel form.
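To make the bipartite-graph view concrete, below is a minimal sketch of one LLR-domain sum-product iteration on a small Tanner graph, assuming BPSK over AWGN; the parity-check matrix and channel LLR values are illustrative, not taken from the letter.

```python
import numpy as np

# Toy parity-check matrix; rows are check nodes, columns are variable nodes.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def sum_product_iteration(H, chan_llr, v2c):
    m, n = H.shape
    c2v = np.zeros((m, n))
    # Check-node update: tanh rule over all incoming edges except the target.
    for i in range(m):
        cols = np.flatnonzero(H[i])
        for j in cols:
            others = [k for k in cols if k != j]
            t = np.prod(np.tanh(v2c[i, others] / 2.0))
            c2v[i, j] = 2.0 * np.arctanh(np.clip(t, -0.999999, 0.999999))
    # Variable-node update: channel LLR plus extrinsic check messages.
    new_v2c = np.zeros((m, n))
    for j in range(n):
        rows = np.flatnonzero(H[:, j])
        for i in rows:
            others = [k for k in rows if k != i]
            new_v2c[i, j] = chan_llr[j] + c2v[others, j].sum()
    posterior = chan_llr + np.array([c2v[np.flatnonzero(H[:, j]), j].sum()
                                     for j in range(n)])
    return new_v2c, posterior

chan_llr = np.array([2.1, -0.4, 1.3, 0.9, -1.7, 0.2])  # toy received LLRs
v2c = H * chan_llr                                      # init with channel LLRs
v2c, post = sum_product_iteration(H, chan_llr, v2c)
print(np.where(post < 0, 1, 0))                         # tentative hard decision
```

All message updates at one iteration are independent of each other, which is the parallel structure the letter's improved schedules aim to preserve.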
A novel method for approximating the optimal max* operator used in Log-MAP decoding of turbo and turbo trellis-coded modulation (TTCM) codes is proposed; it is derived from a well-known inequality and has not been published before. The max* operation is generalized, for the first time, so that it is performed on n > 2 arguments rather than on n = 2 arguments, as in the conventional approach. Complexity comparisons reveal a significant reduction in the number of operations required per decoding step for the proposed approximation, compared with the optimal Log-MAP algorithm. Performance evaluation results are presented for both turbo and TTCM codes, showing the near-optimal performance of the novel approximation method in both additive white Gaussian noise (AWGN) and uncorrelated, i.e., fully interleaved, Rayleigh fading channels.
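For reference, the exact n-input max* is the Jacobian logarithm ln(sum of e^{a_i}), conventionally evaluated by folding the two-input rule max*(a,b) = max(a,b) + ln(1 + e^{-|a-b|}). The sketch below compares the exact value, the pairwise recursion, and one simple non-recursive shortcut (correcting only the two largest arguments); that shortcut is an illustration of the idea, not the paper's actual inequality-based method.

```python
import numpy as np

def max_star2(a, b):
    # Exact two-input Jacobian logarithm: ln(e^a + e^b).
    return max(a, b) + np.log1p(np.exp(-abs(a - b)))

def max_star_recursive(vals):
    # Conventional approach: fold the two-input max* over the list.
    acc = vals[0]
    for v in vals[1:]:
        acc = max_star2(acc, v)
    return acc

def max_star_two_largest(vals):
    # Illustrative shortcut (an assumption, not the paper's method):
    # apply the correction term only to the two largest arguments.
    s = sorted(vals, reverse=True)
    return s[0] + np.log1p(np.exp(-(s[0] - s[1])))

vals = [0.3, -1.2, 2.5, 1.9]
exact = np.log(np.sum(np.exp(vals)))   # ln(sum e^{a_i}), the n-input max*
print(exact, max_star_recursive(vals), max_star_two_largest(vals))
```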
We present a new Reed-Solomon decoding algorithm, which embodies several refinements of an earlier algorithm. Some portions of this new decoding algorithm operate on symbols of length lg q bits; other portions operate on somewhat longer symbols. In the worst case, the total number of calculations required by the new decoding algorithm is proportional to nr, where n is the code's block length and r is its redundancy. This worst-case workload is very similar to that of prior algorithms, but in many applications the average-case workload and the error-correcting performance are both much better. The input to the new algorithm consists of n received symbols from GF(q) and n nonnegative real numbers, each of which is the reliability of the corresponding received symbol. Any conceivable errata pattern has a "score" equal to the sum of the reliabilities of its locations with nonzero errata values. A maximum-likelihood decoder would find the minimum score over all possible errata patterns. Our new decoding algorithm finds the minimum score only over a subset of these possible errata patterns. The errata within any candidate errata pattern may be partitioned into "errors" and "erasures," depending on whether the corresponding reliabilities are above or below an "erasure threshold." Different candidate errata patterns may have different thresholds, each chosen to minimize its corresponding ERRATA COUNT, which is defined as 2·(number of errors) + (number of erasures). The new algorithm finds an errata pattern with minimum score among all errata patterns for which ERRATA COUNT ≤ r + 1, where r is the redundancy of the RS code. This is one check symbol better than conventional RS decoding algorithms. Conventional algorithms also require that the erasure threshold be set a priori; the new algorithm obtains the best answer over all possible settings of the erasure threshold. Conventional cyclic RS codes have length n = q − 1, and their locations correspond to the nonzero elements of GF(q).
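The scoring rule in the abstract is easy to state in code. The toy values below are illustrative only; this computes a pattern's score and scans candidate erasure thresholds for the minimum ERRATA COUNT, it is not the decoder itself.

```python
# Score = sum of reliabilities at the nonzero errata locations.
# Relative to a threshold, locations split into "errors" (reliability above
# the threshold) and "erasures" (below), and ERRATA COUNT = 2*errors + erasures.

def errata_count(reliabilities, support, threshold):
    errors = sum(1 for i in support if reliabilities[i] > threshold)
    erasures = sum(1 for i in support if reliabilities[i] <= threshold)
    return 2 * errors + erasures

def score(reliabilities, support):
    return sum(reliabilities[i] for i in support)

rel = [0.9, 0.1, 0.4, 0.8, 0.05, 0.6]  # per-symbol reliabilities (toy)
support = [1, 2, 4]                    # nonzero positions of a candidate pattern
r = 4                                  # code redundancy

# The algorithm effectively optimizes over all thresholds; only thresholds
# equal to some reliability value (or zero) can change the count.
best = min(errata_count(rel, support, t) for t in set(rel) | {0.0})

print(score(rel, support), best, best <= r + 1)
```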
A new decoder is proposed to decode the (23, 12, 7) binary Golay code up to five errors. It is based on the algorithm of Lin et al. that corrects up to four errors for the (24, 12, 8) extended Golay code, thereby achieving soft decoding in the real sense for the Golay code. For a weight-2 or weight-3 error pattern produced by the hard decoder that corrects up to three errors, one can find the corresponding 21 weight-4 or weight-5 error patterns and choose, as the ultimate answer, the one with the maximum emblematic probability value, defined as the product of the individual bit-error probabilities at the nonzero locations of the error pattern. Finally, simulation results for this decoder over additive white Gaussian noise (AWGN) channels show that the proposed method provides a 0.9 dB coding gain over Lin et al.'s algorithm at a bit-error rate of 10^-5.
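The selection step reduces to computing the emblematic probability value for each candidate pattern and keeping the maximum. A minimal sketch, with toy bit-error probabilities and candidate patterns (not the 21 patterns derived by the actual decoder):

```python
import math

def emblematic_probability(bit_error_probs, error_pattern):
    # Product of per-bit error probabilities over the nonzero locations
    # of the candidate error pattern (definition from the abstract).
    return math.prod(p for p, e in zip(bit_error_probs, error_pattern) if e)

# Toy example: choose among candidate weight-4 patterns (values illustrative).
probs = [0.02, 0.4, 0.1, 0.35, 0.3, 0.05, 0.45]
candidates = [
    [0, 1, 0, 1, 1, 0, 1],
    [1, 1, 0, 1, 0, 0, 1],
]
best = max(candidates, key=lambda e: emblematic_probability(probs, e))
print(best)
```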
We consider the problem of list decoding from erasures. We establish lower and upper bounds on the rate of a (binary linear) code that can be list decoded with list size L when up to a fraction p of its symbols are adversarially erased. Such bounds already exist in the literature, albeit under the label of generalized Hamming weights, and we make their connection to list decoding from erasures explicit. Our bounds show that in the limit of large L, the rate of such a code approaches the "capacity" (1 − p) of the erasure channel. Such nicely list-decodable codes are then used as inner codes in a suitable concatenation scheme to give a uniformly constructive family of asymptotically good binary linear codes of rate Omega(epsilon^2 / log(1/epsilon)) that can be efficiently list decoded using lists of size O(1/epsilon) when an adversarially chosen (1 − epsilon) fraction of symbols is erased, for arbitrary epsilon > 0. This improves previous results in this vein, which achieved a rate of Omega(epsilon^3 log(1/epsilon)).
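A quick numeric check makes the improvement tangible: the new rate epsilon^2 / log(1/epsilon) dominates the earlier epsilon^3 · log(1/epsilon) by a factor that grows as epsilon shrinks.

```python
import math

# Compare the new rate bound with the earlier one for small epsilon.
for eps in (0.1, 0.01, 0.001):
    new = eps**2 / math.log(1 / eps)
    old = eps**3 * math.log(1 / eps)
    print(f"eps={eps}: new={new:.3e}, old={old:.3e}, ratio={new/old:.1f}")
```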
In this study, the authors deal with the problem of how to effectively approximate the max* operator when having n > 2 input values, with the aim of reducing the implementation complexity of conventional Log-MAP turbo decoders. They show that, contrary to previous approaches, it is not necessary to apply the max* operator recursively over pairs of values. Instead, a simple yet effective solution for the max* operator is revealed that has the advantage of being in non-recursive form and thus requires less computational effort. Hardware synthesis results for practical turbo decoders show implementation savings for the proposed method against the most recently published efficient turbo decoding algorithms, while providing near-optimal bit error rate (BER) performance.
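The structural difference between the two evaluation orders is what drives the savings: the recursion needs n − 1 sequential correction-term evaluations, whereas a non-recursive form needs one n-way maximum plus a single correction. The non-recursive body below is a stand-in (a numerically stable log-sum-exp), since the study's exact formula is not given in the abstract.

```python
import numpy as np

def max_star_recursive_ops(vals):
    # Conventional recursion: n-1 two-input max* stages, each needing one
    # comparison and one correction-term evaluation (sequential dependency).
    acc, ops = vals[0], 0
    for v in vals[1:]:
        acc = max(acc, v) + np.log1p(np.exp(-abs(acc - v)))
        ops += 1
    return acc, ops

def max_star_nonrecursive(vals):
    # Illustrative non-recursive form (an assumption, not the study's exact
    # method): one n-way max plus a single correction over all arguments.
    m = max(vals)
    return m + np.log(sum(np.exp(v - m) for v in vals)), 1

vals = [0.3, -1.2, 2.5, 1.9]
print(max_star_recursive_ops(vals))   # (value, n-1 correction evaluations)
print(max_star_nonrecursive(vals))    # (value, 1 correction evaluation)
```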
A prototype of isolated-word recognition software based on the phonetic decoding method with the Kullback-Leibler divergence is presented. The architecture and basic algorithms of the software are described. Finally, an example application to the problem of isolated-word recognition is provided.
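As a generic illustration of the divergence at the heart of the method (the prototype's actual phoneme models are not described in the abstract), one can classify an observed distribution by the reference word model with minimum Kullback-Leibler divergence:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # Discrete Kullback-Leibler divergence D(p || q), smoothed to avoid log 0.
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy reference distributions per word (illustrative assumption).
references = {
    "yes": [0.6, 0.3, 0.1],
    "no":  [0.1, 0.2, 0.7],
}
observed = [0.5, 0.35, 0.15]
word = min(references, key=lambda w: kl_divergence(observed, references[w]))
print(word)  # -> "yes"
```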
In this study, the authors investigate the performance of soft-decision decoding of convolutional codes in receivers that employ square-law detection. Traditionally, soft-decision decoding has been considered only in coherent or differentially coherent receivers. Over the past few years, the emergence of ultra-wideband (UWB) communication has brought energy detectors to prominence. In this study, the authors derive low-complexity approximations for the log-likelihood ratio (LLR) with a class of square-law detectors in UWB radios. The authors then show that performance improvements, similar to those achievable in coherent detectors, can be obtained even with energy detectors when soft decisions are employed in a maximum-likelihood decoding algorithm. The authors also investigate the complexity and accuracy of the proposed approximations when the LLR is computed using fixed-point arithmetic. An expression for the bit error probability with soft-decision decoding is derived. Several simulation results, including the error rate performance of hard- and soft-decision decoding schemes with the exact and approximate LLR values, are presented.
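To give a feel for why approximations help here, the sketch below computes an exact energy-detector LLR under a common assumed model (central chi-square energy statistic for bit 0, noncentral chi-square for bit 1) against a crude linear stand-in; the model, parameters, and the linear fit are all illustrative, not the study's actual derivation.

```python
import numpy as np
from scipy.stats import chi2, ncx2

N = 8        # degrees of freedom (2 * time-bandwidth product, assumed)
snr = 2.0    # per-dimension signal-to-noise ratio (assumed)

def llr_exact(y):
    # Log-ratio of the two hypothesis densities of the energy statistic y.
    return ncx2.logpdf(y, N, snr * N) - chi2.logpdf(y, N)

def llr_linear(y, a=0.35, b=-6.0):
    # Low-complexity stand-in: a linear fit llr ~ a*y + b (coefficients are
    # illustrative; the study derives its own approximations).
    return a * y + b

ys = np.linspace(1.0, 40.0, 5)
print(np.round(llr_exact(ys), 2))
print(np.round(llr_linear(ys), 2))
```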
Upper and lower bounds are derived for the decoding complexity of a general lattice L. The bounds are in terms of the dimension n and the coding gain gamma of L, and are obtained based on a decoding algorithm which is an improved version of Kannan's method, currently the fastest known method for the decoding of a general lattice. For the decoding of a point x, the proposed algorithm recursively searches inside an n-dimensional rectangular parallelepiped (cube), centered at x, with its edges along the Gram-Schmidt vectors of a proper basis of L. We call algorithms of this type recursive cube search (RCS) algorithms. It is shown that Kannan's algorithm also belongs to this category. The complexity of RCS algorithms is measured in terms of the number of lattice points that need to be examined before a decision is made. To tighten the upper bound on the complexity, we select a lattice basis which is reduced in the sense of Korkin-Zolotarev. It is shown that for any selected basis, the decoding complexity (using RCS algorithms) of any sequence of lattices with possible application in communications (gamma >= 1) grows at least exponentially with n and gamma. It is observed that the densest lattices, and almost all of the lattices used in communications, e.g., Barnes-Wall lattices and the Leech lattice, have equal successive minima (ESM). For the decoding complexity of ESM lattices, a tighter upper bound and a stronger lower bound are derived.
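A minimal sketch of the RCS idea follows: a depth-first enumeration of the lattice points u·B lying in a rectangular parallelepiped centered at x with half-edge d along each Gram-Schmidt direction. This is an educational simplification; a practical decoder would use a Korkin-Zolotarev reduced basis and carefully chosen region sizes, as the paper discusses.

```python
import numpy as np

def gram_schmidt(B):
    # Classical Gram-Schmidt: Bs holds the orthogonalized vectors,
    # mu the projection coefficients of B onto them.
    n = len(B)
    Bs = np.array(B, dtype=float)
    mu = np.eye(n)
    for i in range(n):
        for j in range(i):
            mu[i, j] = (B[i] @ Bs[j]) / (Bs[j] @ Bs[j])
            Bs[i] = Bs[i] - mu[i, j] * Bs[j]
    return Bs, mu

def rcs_decode(B, x, d):
    """Closest lattice point u @ B to x inside the search parallelepiped."""
    n = len(B)
    Bs, mu = gram_schmidt(B)
    norms = np.array([bs @ bs for bs in Bs])
    c = np.array([(x @ Bs[k]) / norms[k] for k in range(n)])
    u = np.zeros(n, dtype=int)
    best = {"point": None, "dist": np.inf}

    def search(k):
        if k < 0:
            v = u @ B
            dist = np.linalg.norm(v - x)
            if dist < best["dist"]:
                best["point"], best["dist"] = v.copy(), dist
            return
        # Coefficients already fixed above level k shift the center along Bs[k].
        shift = sum(u[i] * mu[i, k] for i in range(k + 1, n))
        w = d / np.sqrt(norms[k])   # cube half-edge in coefficient units
        for cand in range(int(np.ceil(c[k] - shift - w)),
                          int(np.floor(c[k] - shift + w)) + 1):
            u[k] = cand
            search(k - 1)

    search(n - 1)
    return best["point"]

B = np.array([[2, 0], [1, 2]])      # toy basis (rows are basis vectors)
print(rcs_decode(B, np.array([3.4, 1.2]), d=3.0))   # -> [3. 2.]
```

The complexity measure in the paper corresponds to the number of leaf visits in this recursion, i.e., lattice points examined before a decision.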
Sequential decoding can achieve high-throughput convolutional decoding with much lower computational complexity than the Viterbi algorithm (VA) at a relatively high signal-to-noise ratio (SNR). A parallel bidirectional Fano algorithm (BFA) decoding architecture is investigated in this paper. In order to increase the utilisation of the parallel BFA decoders, and thus improve the decoding throughput, a state estimation method is proposed that can effectively partition a long codeword into multiple short sub-codewords. The parallel BFA decoding with state estimation architecture is shown to achieve a 30-55% decoding throughput improvement over the parallel BFA decoding scheme without state estimation. Compared with the VA, the parallel BFA decoding requires only 3-30% of the computational complexity of the VA, with a similar error rate performance.
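The move/backtrack decisions in the Fano algorithm are driven by the Fano branch metric; for a binary symmetric channel with crossover probability p and code rate R, the standard textbook form is log2 P(y|x) − log2 P(y) − R per coded bit, with P(y) = 1/2. A minimal sketch (the architecture and state estimation method of the paper are not reproduced here):

```python
import math

def fano_branch_metric(rx_bits, tx_bits, p, rate):
    # Per-branch Fano metric on a BSC: sum over coded bits of
    # log2 P(y|x) + 1 - rate, since -log2 P(y) = -log2(1/2) = +1.
    m = 0.0
    for y, x in zip(rx_bits, tx_bits):
        py_x = p if y != x else 1.0 - p
        m += math.log2(py_x) + 1.0 - rate
    return m

# A branch agreeing with the received bits gains metric; a mismatch is
# penalized, which triggers the algorithm's backtracking.
print(fano_branch_metric([0, 1], [0, 1], p=0.05, rate=0.5))  # agree: positive
print(fano_branch_metric([0, 1], [1, 1], p=0.05, rate=0.5))  # one error: negative
```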