ISBN:
(Print) 9781424415632
In this paper, we propose a unified decoding algorithm for linear codes, named the doubly extended sum-product algorithm (DESPA). The DESPA is described as a belief propagation algorithm over a generalized normal graph that represents the code based on a partitioned parity-check matrix. In one extreme case, the DESPA minimizes the frame error rate; in the other extreme case, it minimizes the bit error rate. In practice, the DESPA can be implemented to trade off decoding complexity against decoding performance.
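The abstract does not spell out the DESPA message updates. As background, here is a minimal sketch of the standard sum-product check-node update (the tanh rule) that any belief-propagation decoder over such a graph builds on; the function name and clipping constant are illustrative, not the paper's:

```python
import math

def check_node_update(incoming_llrs):
    """Extrinsic LLR messages out of one parity-check node (tanh rule),
    the core update of sum-product / belief-propagation decoding."""
    outgoing = []
    for i in range(len(incoming_llrs)):
        prod = 1.0
        for j, llr in enumerate(incoming_llrs):
            if j != i:
                prod *= math.tanh(llr / 2.0)
        prod = max(-0.999999, min(0.999999, prod))  # clip for atanh stability
        outgoing.append(2.0 * math.atanh(prod))
    return outgoing
```

Each outgoing message excludes the recipient's own incoming message, so reliability shrinks with every extra edge on the check and the sign follows the parity of the other signs.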
ISBN:
(Print) 9781457705380
In this paper, we propose a new method for computing and applying language model look-ahead in a dynamic network decoder, exploiting the sparseness of backing-off n-gram language models. Only partial (sparse) look-ahead tables are computed, with a size that depends on the number of words that have an n-gram score in the language model for a specific context, rather than a constant, vocabulary-dependent size. Since high-order backing-off language models are inherently sparse, this mechanism reduces the runtime and memory cost of computing the look-ahead tables by orders of magnitude. A modified decoding algorithm is required to apply these sparse LM look-ahead tables efficiently. We show that sparse LM look-ahead is much more efficient than the classical method, and that full n-gram look-ahead becomes favorable over lower-order look-ahead even when many distinct LM contexts appear during decoding.
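The exact table construction is not given in the abstract. A toy sketch of the idea, assuming a bigram model stored as a dict (all names and the scoring layout are hypothetical): only words with an explicit score for the current context enter the table, and every other word shares a single backed-off default, so the table size tracks the model's sparsity rather than the vocabulary size.

```python
def build_sparse_lookahead(context, bigram_scores, backoff_weight, best_unigram):
    """Sparse LM look-ahead table for one context: its size depends on how
    many words have an explicit bigram score, not on the vocabulary size."""
    table = {w: s for (c, w), s in bigram_scores.items() if c == context}
    # all remaining words share one backed-off score
    default = backoff_weight + best_unigram
    return table, default

def lookahead_score(word, table, default):
    return table.get(word, default)
```

A lookup for any word outside the table costs one dict miss plus the precomputed default, which is what makes applying the sparse tables cheap during decoding.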
In this paper, three algebraic decoding algorithms are proposed for the binary quadratic residue (QR) codes generated by irreducible polynomials. The polynomial relations among the syndromes and the coefficients of the error-locator polynomials are computed with the Lagrange interpolation formula (LIF). Unlike some previous QR decoders, which may take several iterations to decode a corrupted word, the first two algorithms require at most one iteration. The first algorithm consists of the calculation of consecutive syndromes, the inverse-free Berlekamp-Massey algorithm (IFBMA), and the Chien search. One of Orsini and Sala's results on the structure of general error-locator polynomials is generalized and applied to derive the second (respectively, third) algorithm, which consists of the determination of the general error-locator polynomial (respectively, the classical error-locator polynomials) followed by the Chien search. Finally, the (17, 9, 5), (23, 12, 7), and (41, 21, 9) QR decoders are illustrated and their complexity analyses are given.
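The Chien search step shared by all three algorithms is simple enough to sketch. Here it is over GF(2^4) with primitive polynomial x^4 + x + 1 for brevity; the paper's QR decoders work over larger fields, so this is only an illustration of the step, not of the decoders themselves:

```python
# GF(2^4) log/antilog tables, primitive polynomial x^4 + x + 1 (0x13)
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x10:
        x ^= 0x13
for i in range(15, 30):
    EXP[i] = EXP[i - 15]          # wraparound to skip a mod in gf_mul

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_eval(poly, x):
    """Horner evaluation; poly[0] is the constant term."""
    acc = 0
    for c in reversed(poly):
        acc = gf_mul(acc, x) ^ c
    return acc

def chien_search(sigma):
    """Return exponents i with sigma(alpha^i) = 0, i.e. the error locations."""
    return [i for i in range(15) if gf_eval(sigma, EXP[i]) == 0]
```

For example, sigma(x) = (x + alpha^2)(x + alpha^5) has coefficient list `[gf_mul(EXP[2], EXP[5]), EXP[2] ^ EXP[5], 1]`, and the search recovers the exponents 2 and 5.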
In this paper, we study nonbinary regular LDPC cycle codes, whose parity-check matrix H has fixed column weight j = 2 and fixed row weight d. Through graph analysis, we show that the parity-check matrix H of a regular cycle code can be put into an equivalent structure, a concatenation of row-permuted block-diagonal matrices, if d is even, or if d is odd and the code's associated graph contains at least one spanning subgraph that consists of disjoint edges. This equivalent structure of H enables: i) parallel processing in linear-time encoding; ii) considerable reduction of the storage required for encoding and decoding; and iii) parallel processing in sequential belief-propagation decoding, which increases throughput without compromising performance or complexity. For the code structure design, we propose a novel design methodology based on the equivalent structure of H. Finally, we present various numerical results on the code performance and the decoding complexity.
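A toy numeric sketch of the equivalent structure, with hypothetical sizes (d = 4, two all-ones 2x2 blocks per factor): concatenating two row-permuted block-diagonal matrices automatically yields column weight 2 and row weight d, which is what enables the block-parallel processing claimed above.

```python
def permuted_block_diagonal(blocks, perm):
    """Square blocks along the diagonal, then a row permutation:
    one factor of the equivalent form H = [P1*B1 | P2*B2]."""
    n = sum(len(b) for b in blocks)
    D = [[0] * n for _ in range(n)]
    r = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, v in enumerate(row):
                D[r + i][r + j] = v
        r += len(b)
    return [D[p] for p in perm]

# toy instance for row weight d = 4: two all-ones 2x2 blocks per factor
B = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]
H = [r1 + r2 for r1, r2 in zip(permuted_block_diagonal(B, [0, 1, 2, 3]),
                               permuted_block_diagonal(B, [2, 0, 3, 1]))]
```

Each column lives in exactly one block of one factor (weight 2), and each row collects one block row from every factor (weight d), independently of the row permutations.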
To address the existing problems of genetic algorithms (GAs) for solving the flexible job-shop scheduling problem (FJSP), such as inconsistent description models and complicated coding and decoding methods, this paper proposes a GA-based FJSP solution method, in which the job-shop scheduling problem (JSP) with partial flexibility and JIT (just-in-time) requirements is transformed into a general FJSP. Moreover, a unified mathematical model is given. Through improvements to the coding rules, the decoding algorithm, and the crossover and mutation operators, the modified GA's convergence and search efficiency are enhanced. The example analysis shows that the proposed method makes the FJSP converge to the optimal solution steadily, exactly, and efficiently.
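The abstract does not reproduce the paper's coding and decoding rules, so the following is a sketch of one common FJSP decoding scheme only: operation-based encoding with greedy earliest-finish machine selection. This is an assumption for illustration, not necessarily the authors' rule.

```python
def decode(chromosome, ops):
    """Decode an operation-based chromosome into a schedule.
    chromosome: a list of job indices; the k-th occurrence of job j
    denotes operation k of job j (flexibility: ops[j][k] maps each
    eligible machine to its processing time).  Each gene greedily
    assigns the machine with the earliest finish time."""
    next_op, job_free, mach_free, schedule = {}, {}, {}, []
    for j in chromosome:
        k = next_op.get(j, 0)
        next_op[j] = k + 1
        times = ops[j][k]
        finish = lambda m: max(job_free.get(j, 0), mach_free.get(m, 0)) + times[m]
        m = min(times, key=finish)
        start = max(job_free.get(j, 0), mach_free.get(m, 0))
        job_free[j] = mach_free[m] = start + times[m]
        schedule.append((j, k, m, start, start + times[m]))
    return schedule
```

On a two-job toy instance, job 1's operation avoids the machine already claimed by job 0, so the two operations run in parallel and the makespan stays at 2.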
In this paper, the characteristics of irregular quasi-cyclic low-density parity-check (QC-LDPC) codes are examined when they are applied to a highly impulsive noise channel, such as the power-line-communications (PLC) channel. We study two decoding algorithms, 1) the sum-product algorithm and 2) the bit-flipping algorithm, and how they affect the system's performance. LDPC codes are introduced in combination with other coding schemes, such as Reed-Solomon and convolutional codes. Owing to their decoding characteristics, we propose irregular QC-LDPC codes as outer codes for the PLC channel in combination with Reed-Solomon codes. In addition, various code rates are used for each coding scenario. We also test how common Reed-Solomon codes, such as the RS(63, 53), RS(511, 431), RS(127, 107), and RS(255, 239) codes, affect the system's performance. Furthermore, we propose an altered version of the sum-product decoding algorithm to enable its operation when QC-LDPC codes are used as the outer coding scheme in combination with Reed-Solomon codes. Regarding the system's design, the orthogonal frequency-division multiplexing transmission technique is utilized, and we adopt Zimmermann's model for the PLC channel and Middleton's noise model.
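For reference, a minimal hard-decision bit-flipping decoder, the second of the two studied algorithms, sketched on an arbitrary small parity-check matrix; the paper's actual QC-LDPC matrices and the modified sum-product variant are beyond a toy example:

```python
def bit_flip_decode(H, y, max_iter=20):
    """Hard-decision bit-flipping: each iteration flips the single bit
    that participates in the most unsatisfied parity checks.
    H: list of parity-check rows (0/1), y: received hard bits."""
    x = list(y)
    for _ in range(max_iter):
        syndrome = [sum(h * b for h, b in zip(row, x)) % 2 for row in H]
        if not any(syndrome):
            return x, True                 # all checks satisfied
        counts = [sum(s * row[j] for s, row in zip(syndrome, H))
                  for j in range(len(x))]  # failed checks touching each bit
        x[counts.index(max(counts))] ^= 1  # flip the most suspect bit
    return x, False
```

On the (7, 4) Hamming parity-check matrix with a single bit error, the erroneous bit is the unique one covered by all unsatisfied checks, so one flip restores the codeword.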
ISBN:
(Print) 9781424441242
In nuclear medicine, images of planar scintigraphy and single photon emission computerized tomography (SPECT) obtained through a gamma camera (GC) appear blurred. Alternatively, coded aperture imaging (CAI) can surpass the quality of GC images, but it is still not extensively used because of the decoding complexity of some images and the difficulty in controlling noise. In short, GC images are of low quality, and the CAI technique remains difficult to implement. Here we present a full aperture imaging (FAI) technique that overcomes the problems of ordinary CAI systems. The gamma radiation transmitted through a large single aperture is edge-encoded, taking advantage of the fact that nuclear radiation is spatially incoherent. The novel technique is tested by means of the Monte Carlo method with simple and complex sources. Spatial resolution and parallax tests of GC versus FAI were made, and the three-dimensional capabilities of GC versus FAI were analyzed. The simulations allow both techniques to be compared under ideal, identical conditions. The results show that the FAI technique has greater sensitivity (roughly 100 times) and greater spatial resolution (more than 2.6 times at 40 cm source-detector distance) than GC. The FAI technique obtains images with the resolution typical of GC at short source-detector distance, but at longer source-detector distances. The FAI decoding algorithm simultaneously reconstructs four different projections, while GC produces only one projection per acquisition. Our results show that it is possible to apply an extremely simple encoded imaging technique and obtain three-dimensional radioactivity information. Thus GC-based systems could be replaced, given that the FAI technique is simple and produces four images that may feed stereoscopic systems, substituting in some cases for tomographic reconstructions.
ISBN:
(Print) 9781424458516
A quantization scheme with negligible degradation for an LDPC decoder is proposed. It is based on the decoding algorithm derived from the belief propagation algorithm in the log-likelihood-ratio (LLR) domain. The scheme offers a quantization criterion for the iterative decoder, based on a study of the channel outputs, and is adaptable to different channel conditions and quantization scales. Static step lengths for hardware implementation are selected according to the proposed criterion. The performance of the proposed scheme is illustrated on both structured and random codes. In simulations, the performance loss with 6-bit quantization is negligible; the scheme produces less than 0.01 dB degradation for 5-bit quantization above a BER of 10^-5, and about 0.1 dB for 4-bit quantization. There is no BER error floor even around 10^-7. The simulations show that the quantization coverage plays an important role in the quantization process, and the results with static step lengths are as good as or even better than those with variable step lengths.
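The criterion itself is not reproduced in the abstract. As a sketch of the static-step uniform quantization it selects, with illustrative (not the paper's) step lengths and bit widths:

```python
def quantize_llr(llr, bits, step):
    """Uniform quantization of an LLR with a static step length,
    saturating at the signed `bits`-bit two's-complement range."""
    qmax = (1 << (bits - 1)) - 1
    q = int(round(llr / step))
    return max(-qmax - 1, min(qmax, q))   # clamp to [-2^(bits-1), 2^(bits-1)-1]

def dequantize(q, step):
    return q * step
```

The trade-off the abstract measures is exactly here: a larger step widens the coverage (less saturation of strong LLRs) but coarsens the resolution near zero, and the selected static step balances the two for a given bit width.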
ISBN:
(Print) 9781617821233
This paper describes a novel technique to exploit duration information in low-resource speech recognition systems. Using explicit duration models significantly increases the computational cost due to a large search space. To avoid this problem, most techniques using duration information adopt two-pass, N-best rescoring approaches. In contrast, we propose an algorithm that uses word duration models with incremental speech-rate normalization in a one-pass decoding approach. In the proposed technique, penalties are added only to the scores of words with outlier durations, and not all words need to have duration models. Experimental results show that the proposed technique reduces errors by up to 17% on in-car digit string tasks without a significant increase in computational cost.
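A sketch of the scoring rule as described, with hypothetical normalization constants and weights: word durations are normalized by a running speech-rate estimate, and a penalty is added only when the normalized duration is an outlier, so typical words are scored unchanged.

```python
def duration_penalty(duration, speech_rate, mean, std, weight=1.0, threshold=2.0):
    """Penalty added to a word's score only when its speech-rate-normalized
    duration falls outside +/- `threshold` standard deviations of the
    word's duration model; words with typical durations get zero penalty."""
    z = (duration / speech_rate - mean) / std
    excess = abs(z) - threshold
    return weight * excess if excess > 0 else 0.0

def update_speech_rate(rate, observed, expected, alpha=0.9):
    """Incremental speech-rate estimate as an exponential moving average
    of the ratio of observed to expected word duration."""
    return alpha * rate + (1 - alpha) * (observed / expected)
```

Because the penalty is zero inside the threshold band, words without a reliable duration model can simply be skipped, matching the claim that not all words need duration models.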