In this semi-tutorial paper, reliability-based decoding approaches that reprocess the most reliable information set are investigated. The paper homogenizes and compares previously separate studies, improving their overall transparency and completing each one with techniques provided by the others. A few sensible improvements are also suggested. The main goal, however, remains to integrate and compare recent works based on a similar general approach, which until now have been pursued in parallel with little effort at comparison. Their respective (dis)advantages, especially in terms of average or maximum complexity, are elaborated. We focus on suboptimum decoding, while some of the works to which we refer were developed for maximum-likelihood decoding (MLD). No quantitative error-performance analysis is provided, although we benefit from some qualitative considerations and compare different strategies in terms of higher or lower expected error performance for the same complexity. Simulations show, however, that all considered approaches perform very close to one another, which was not obvious at first sight. The simplest strategy also proves the fastest in terms of CPU time, but we indicate ways to implement the others so that they come very close in this respect as well. Besides relying on the same intuitive principle, the studied algorithms are thus also unified from the point of view of their error performance and computational cost.
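The reprocessing idea this abstract refers to can be made concrete. Below is a minimal sketch of order-p ordered-statistics reprocessing, intended as a generic illustration rather than any one of the algorithms the paper compares: positions are ranked by reliability, the generator matrix is reduced to expose the most reliable basis, and every flip pattern of weight up to p in that basis is re-encoded, keeping the candidate with the best correlation metric. The function name, the BPSK sign convention (positive sample means bit 0), and the numpy representation are illustrative assumptions.

```python
import itertools
import numpy as np

def osd_decode(G, y, order=1):
    """Order-`order` reprocessing of the most reliable basis (sketch).

    G : k x n binary generator matrix (0/1 numpy array, assumed full rank)
    y : n real channel outputs; sign gives the hard decision,
        magnitude gives the reliability
    """
    k, n = G.shape
    # 1. Rank positions from most to least reliable.
    perm = np.argsort(-np.abs(y))
    Gp = G[:, perm].copy()
    # 2. Gaussian elimination over GF(2): the first k independent
    #    columns form the most reliable basis (MRB).
    basis, row = [], 0
    for col in range(n):
        pivot = next((r for r in range(row, k) if Gp[r, col]), None)
        if pivot is None:
            continue
        Gp[[row, pivot]] = Gp[[pivot, row]]
        for r in range(k):
            if r != row and Gp[r, col]:
                Gp[r] ^= Gp[row]
        basis.append(col)
        row += 1
        if row == k:
            break
    # 3. Hard-decide the MRB bits (order-0 candidate after re-encoding).
    hard = (y[perm] < 0).astype(np.uint8)
    base_info = hard[basis]
    best_cw, best_metric = None, -np.inf
    # 4. Reprocess: re-encode every flip pattern of weight <= order.
    for w in range(order + 1):
        for flips in itertools.combinations(range(k), w):
            info = base_info.copy()
            info[list(flips)] ^= 1
            cw = (info @ Gp) % 2                       # permuted codeword
            metric = np.sum((1 - 2.0 * cw) * y[perm])  # correlation with y
            if metric > best_metric:
                best_metric, best_cw = metric, cw
    # 5. Undo the reliability permutation.
    out = np.empty(n, dtype=np.uint8)
    out[perm] = best_cw
    return out
```

The number of re-encodings grows as the sum of binomial coefficients C(k, 0) + ... + C(k, order), which is why the average versus maximum complexity of the competing reprocessing orders and stopping rules is the axis along which the paper compares them.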
ISBN: 0780344081 (Print)
Binary Low Density Parity Check (LDPC) codes have been shown to have near-Shannon-limit performance when decoded using a probabilistic decoding algorithm. The analogous codes defined over finite fields GF(q) of order q > 2 show significantly improved performance. We present the results of Monte Carlo simulations of the decoding of infinite LDPC codes, which can be used to obtain good constructions for finite codes. We also present empirical results for the Gaussian channel, including a rate-1/4 code with bit error probability of 10^-4 at Eb/N0 = -0.05 dB.
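For illustration only, here is a minimal binary message-passing decoder in the min-sum approximation. The paper's decoder is the sum-product algorithm over GF(q), in which the scalar log-likelihood messages below become q-ary probability vectors and the check update becomes a convolution over the field; this sketch shows only the binary skeleton of the iteration, and all names and conventions are assumptions.

```python
import numpy as np

def minsum_decode(H, llr, iters=50):
    """Binary min-sum message-passing decoder (illustrative sketch).

    H   : m x n parity-check matrix of 0/1 entries (numpy array)
    llr : length-n channel log-likelihood ratios, positive = bit 0
    """
    m, n = H.shape
    E = np.zeros((m, n))              # check-to-variable messages
    M = H * llr                       # variable-to-check messages (channel init)
    hard = (llr < 0).astype(np.uint8)
    for _ in range(iters):
        # Check-node update: on each edge, take the sign product and the
        # minimum magnitude of the *other* incoming messages.
        for i in range(m):
            idx = np.flatnonzero(H[i])
            msgs = M[i, idx]
            for t, j in enumerate(idx):
                others = np.delete(msgs, t)
                E[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
        # Variable-node update and tentative hard decision.
        total = llr + E.sum(axis=0)
        hard = (total < 0).astype(np.uint8)
        if not np.any((H @ hard) % 2):  # all parity checks satisfied
            break
        M = H * (total - E)           # extrinsic: exclude each edge's own message
    return hard
```

The early exit on a zero syndrome is what gives such decoders their low average complexity at high SNR, while the GF(q) generalization trades larger per-node work for the improved performance the abstract reports.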
The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels. They show that performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance is almost as close to the Shannon limit as that of turbo codes.
We present a new Reed-Solomon decoding algorithm, which embodies several refinements of an earlier algorithm. Some portions of this new decoding algorithm operate on symbols of length lg q bits; other portions operate on somewhat longer symbols. In the worst case, the total number of calculations required by the new decoding algorithm is proportional to nr, where n is the code's block length and r is its redundancy. This worst-case workload is very similar to that of prior algorithms, but in many applications the average-case workload and the error-correcting performance are both much better. The input to the new algorithm consists of n received symbols from GF(q) and n nonnegative real numbers, each of which is the reliability of the corresponding received symbol. Any conceivable errata pattern has a "score" equal to the sum of the reliabilities of its locations with nonzero errata values. A maximum-likelihood decoder would find the minimum score over all possible errata patterns; our new decoding algorithm finds the minimum score only over a subset of these possible errata patterns. The errata within any candidate errata pattern may be partitioned into "errors" and "erasures," depending on whether the corresponding reliabilities are above or below an "erasure threshold." Different candidate errata patterns may have different thresholds, each chosen to minimize its corresponding ERRATA COUNT, which is defined as 2·(number of errors) + (number of erasures). The new algorithm finds an errata pattern with minimum score among all errata patterns for which ERRATA COUNT ≤ r + 1, where r is the redundancy of the RS code. This is one check symbol better than conventional RS decoding algorithms. Conventional algorithms also require that the erasure threshold be set a priori; the new algorithm obtains the best answer over all possible settings of the erasure threshold. Conventional cyclic RS codes have length n = q - 1, and their locations correspond to the nonzero elements of GF(q).
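The score and ERRATA COUNT bookkeeping defined in this abstract can be written out directly. The sketch below only evaluates those two quantities for a candidate errata pattern; it performs none of the algebraic decoding itself, and the function names and data layout are illustrative assumptions.

```python
def score(reliab, errata_locs):
    """Score of an errata pattern: the sum of the reliabilities of the
    locations holding nonzero errata values."""
    return sum(reliab[j] for j in errata_locs)

def best_errata_count(reliab, errata_locs):
    """ERRATA COUNT of a pattern under the best erasure threshold.

    A location counts as an 'error' if its reliability is above the
    threshold and as an 'erasure' otherwise; the count is
    2*(#errors) + (#erasures), minimized over all thresholds (only the
    pattern's own reliability values need to be tried as candidates).
    """
    candidates = sorted({reliab[j] for j in errata_locs}) + [float("inf")]
    best = None
    for th in candidates:
        errors = sum(1 for j in errata_locs if reliab[j] > th)
        erasures = sum(1 for j in errata_locs if reliab[j] <= th)
        count = 2 * errors + erasures
        best = count if best is None else min(best, count)
    return best

# A decoder in this framework accepts a candidate pattern only when
# best_errata_count(reliab, locs) <= r + 1, and among the accepted
# patterns returns one minimizing score(reliab, locs).
```

Setting the threshold below every reliability makes all errata count as errors (the conventional errors-only decoder), while setting it above every reliability makes them all erasures; scanning the thresholds between these extremes is what lets the algorithm recover the extra check symbol over a fixed a priori threshold.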