Statistical machine translation is a relatively new approach to the long-standing problem of translating human languages by computer. Current statistical techniques uncover translation rules from bilingual training texts and use those rules to translate new texts. The general architecture is the source-channel model: an English string is statistically generated (source), then statistically transformed into French (channel). In order to translate (or "decode") a French string, we look for the most likely English source. We show that for the simplest form of statistical models, this problem is NP-complete, i.e., probably exponential in the length of the observed sentence. We trace this complexity to factors not present in other decoding problems.
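The source-channel decoding described above can be sketched with a toy word-for-word model; the vocabulary and probability tables below are invented for illustration, and the exhaustive search over English candidates hints at why exact decoding over unrestricted models scales badly.

```python
# Toy illustration of source-channel ("noisy channel") decoding:
# pick the English candidate e maximizing P(e) * P(f | e).
# All probability tables here are invented for demonstration only.

import math

# Hypothetical source model P(e) over tiny English word sequences.
source_prob = {
    ("the", "dog"): 0.4,
    ("a", "dog"): 0.3,
    ("the", "cat"): 0.3,
}

# Hypothetical word-for-word channel model P(f_word | e_word).
channel_prob = {
    ("le", "the"): 0.7, ("un", "the"): 0.1,
    ("le", "a"): 0.2,   ("un", "a"): 0.8,
    ("chien", "dog"): 0.9, ("chat", "dog"): 0.05,
    ("chien", "cat"): 0.1, ("chat", "cat"): 0.9,
}

def decode(french):
    """Return argmax_e P(e) * prod_i P(f_i | e_i) by brute-force search."""
    best_e, best_score = None, -math.inf
    for e, p_e in source_prob.items():
        if len(e) != len(french):
            continue
        score = math.log(p_e)
        for f_w, e_w in zip(french, e):
            score += math.log(channel_prob.get((f_w, e_w), 1e-12))
        if score > best_score:
            best_e, best_score = e, score
    return best_e

print(decode(("le", "chien")))  # → ('the', 'dog')
```

The brute-force enumeration is tractable only because the candidate set is tiny; for a real language model the search space grows exponentially with sentence length, which is exactly the hardness the abstract establishes.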
In this letter, we present a design method to find good protograph-based low-density parity-check codes with the numbers of nonzero elements in the parity-check matrix and decoding iterations both strictly limited. We utilize the protograph-based extrinsic information transfer (PEXIT) analysis to evaluate the error performance and employ the differential evolution algorithm to devise a well-performing coding scheme. In particular, the number of iterations in the PEXIT analysis is set to a small value due to the limited iterations in the decoding process. Both numerical analyses and simulations verify that the resulting codes exhibit better performance than conventional code designs under the same limitation on decoding complexity.
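The differential evolution search mentioned above can be sketched generically; the sketch below uses the standard DE/rand/1/bin scheme with a toy objective function standing in for the PEXIT threshold evaluation, which is far more involved in practice.

```python
# Minimal differential evolution (DE/rand/1/bin) sketch, as might drive a
# protograph code search. The objective here is a stand-in toy function,
# not a PEXIT analysis.

import random

def differential_evolution(objective, dim, bounds, pop_size=20,
                           F=0.5, CR=0.9, generations=100, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            # Mutation v = a + F*(b - c), then binomial crossover with pop[i].
            j_rand = rng.randrange(dim)
            trial = [
                a[j] + F * (b[j] - c[j])
                if (rng.random() < CR or j == j_rand) else pop[i][j]
                for j in range(dim)
            ]
            trial = [min(max(x, lo), hi) for x in trial]  # clip to bounds
            # Greedy selection: keep whichever vector scores better.
            if objective(trial) <= objective(pop[i]):
                pop[i] = trial
    return min(pop, key=objective)

# Stand-in objective: squared distance to a target vector (in a real code
# search this would be, e.g., the PEXIT decoding threshold of the protograph).
target = [1.0, -2.0, 0.5]
best = differential_evolution(
    lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target)),
    dim=3, bounds=(-5.0, 5.0))
print(best)  # a vector close to the target
```

The same loop applies to protograph design by encoding candidate base-matrix entries as the optimization vector and scoring each candidate with a (here omitted) iteration-limited PEXIT evaluation.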
This letter investigates space-time block codes from quasi-orthogonal designs as a tradeoff between high transmission rate and low decoding complexity. By studying the role orthogonality plays in space-time block codes, an upper bound on the transmission rate and a lower bound on the decoding complexity of quasi-orthogonal designs are established. From this point of view, novel algorithms are developed to construct specific quasi-orthogonal designs that achieve these bounds.
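For context, the classic rate-1 quasi-orthogonal design for four transmit antennas (the "ABBA" construction from two Alamouti blocks) illustrates the rate/complexity tradeoff; this is a standard textbook example, not necessarily the specific construction of this letter.

```python
# The "ABBA" quasi-orthogonal STBC: two 2x2 Alamouti blocks A, B arranged
# as [[A, B], [B, A]]. Its Gram matrix C^H C is not diagonal, so the design
# is only quasi-orthogonal: ML detection splits into two symbol pairs
# instead of a joint search over all four symbols.

def alamouti(x1, x2):
    """2x2 orthogonal (Alamouti) block."""
    return [[x1, x2],
            [-x2.conjugate(), x1.conjugate()]]

def abba(x1, x2, x3, x4):
    """4x4 quasi-orthogonal code matrix [[A, B], [B, A]]."""
    A, B = alamouti(x1, x2), alamouti(x3, x4)
    return [A[0] + B[0], A[1] + B[1], B[0] + A[0], B[1] + A[1]]

def gram(C):
    """Hermitian Gram matrix C^H C."""
    cols = range(len(C[0]))
    return [[sum(row[i].conjugate() * row[j] for row in C) for j in cols]
            for i in cols]

C = abba(1 + 1j, 1 - 1j, 2j, 3 + 0j)
G = gram(C)
# Columns 1-2 and 1-4 are orthogonal, but 1-3 couples: symbols decouple into
# the pairs {x1, x3} and {x2, x4}, halving the joint-detection dimension.
print([abs(G[0][j]) for j in range(4)])  # → [17.0, 0.0, 10.0, 0.0]
```

The surviving off-diagonal Gram entries are exactly what raises the decoding complexity above that of a fully orthogonal design while permitting rate 1 with complex symbols.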
Tail-biting trellis codes and block concatenated codes are discussed from random coding arguments. An error exponent and the decoding complexity of tail-biting random trellis codes are shown. We then propose a block concatenated code constructed with a tail-biting trellis inner code and derive an error exponent and decoding complexity for the proposed code. The results show that the proposed code attains a larger error exponent at all but low rates with the same decoding complexity as the original concatenated code.
ISBN:
(Print) 9781457705953
The standard algebraic decoding algorithm for cyclic codes [n, k, d] up to the BCH bound δ = 2t + 1 is very efficient and practical for relatively small n, but it becomes impractical for large n as its computational complexity is O(nt). The aim of this paper is to show how to make this algebraic decoding computationally more efficient: in the case of binary codes, for example, the complexity of the syndrome computation drops from O(nt) to O(t√n), while the average complexity of the error location drops from O(nt) to max{O(t√n), O(t log²(t) log log(t) log(n))}.
ISBN:
(Print) 9783030191566; 9783030191559
The typical method adopted in the decoding of linear network codes is Gaussian elimination (GE), which enjoys extremely low policy complexity in determining actions, i.e., the XOR operations executed upon the decoding matrix and the coded packets. However, the total number of required actions is quite large, which makes the overall decoding complexity high. In this paper, we consider the problem of minimizing the decoding complexity of binary linear network codes. We formulate the decoding problem as a special shortest-path problem where the weight of each edge consists of: (1) a constant weight due to the execution of the action; (2) a variable weight due to the policy adopted in determining the action. The policy is formulated as an optimization problem that minimizes a particular objective function by enumerating over a certain action set. Since finding the optimal policy is intractable, we optimize the policy in two directions. On the one hand, we ensure the objective function and the action set are similar to those of the optimal policy that minimizes the constant-weight summation; on the other hand, we ensure that the objective function has a simple structure and the action set is small, so that the variable-weight summation is also small. Simulation results demonstrate that our proposed policy can significantly reduce the decoding complexity compared with existing methods.
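The baseline GE decoding that the paper improves upon can be sketched as follows; the packet payloads are small integers standing in for payload bit-vectors, and the XOR-operation counter illustrates the "number of actions" cost the paper minimizes.

```python
# Sketch of Gaussian-elimination decoding for binary (GF(2)) linear network
# codes: each received packet carries a coefficient vector over GF(2), and
# row-reducing [coefficients | payloads] with XOR row operations recovers
# the source packets. Integers stand in for payload bit-vectors.

def ge_decode(coeffs, payloads):
    """Row-reduce [coeffs | payloads] over GF(2); return decoded payloads."""
    n = len(coeffs[0])
    rows = [(row[:], p) for row, p in zip(coeffs, payloads)]
    actions = 0                       # count XOR row operations (the "cost")
    for col in range(n):
        # Find a pivot row with a 1 in this column and swap it into place.
        pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):    # eliminate this column everywhere else
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           rows[i][1] ^ rows[col][1])
                actions += 1
    return [p for _, p in rows[:n]], actions

# Three coded packets: p1^p2, p2, p1^p2^p3, with payloads XORed accordingly.
p1, p2, p3 = 0b1010, 0b0110, 0b1111
coeffs = [[1, 1, 0], [0, 1, 0], [1, 1, 1]]
payloads = [p1 ^ p2, p2, p1 ^ p2 ^ p3]
decoded, xor_ops = ge_decode(coeffs, payloads)
print(decoded, xor_ops)  # → [10, 6, 15] 2
```

The order in which pivots and eliminations are chosen changes how many XOR actions are spent, which is the degree of freedom the shortest-path formulation in the paper exploits.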
The increased demand for high-quality video evidently elevates the bandwidth requirements of the communication channels being used, which in turn demands more efficient video coding algorithms within the media distribution tool chain. As such, the High Efficiency Video Coding (HEVC) video coding standard is a potential solution that demonstrates a significant coding efficiency improvement over its predecessors. HEVC constitutes an assortment of novel coding tools and features that contribute towards its superior coding performance, yet at the same time demand more computational, processing and energy resources; a crucial bottleneck, especially in the case of resource-constrained Consumer Electronic (CE) devices. In this context, the first contribution in this thesis presents a novel content-adaptive Coding Unit (CU) size prediction algorithm for HEVC-based low-delay video encoding. In this case, two independent content-adaptive CU size selection models are introduced while adopting a moving window-based feature selection process to ensure that the framework remains robust and dynamically adapts to any varying video content. The experimental results demonstrate a consistent average encoding time reduction ranging from 55% – 58% and 57% – 61% with average Bjontegaard Delta Bit Rate (BDBR) increases of 1.93% – 2.26% and 2.14% – 2.33% compared to the HEVC 16.0 reference software for the low-delay P and low-delay B configurations, respectively, across a wide range of content types and bit rates. The video decoding complexity and the associated energy consumption are tightly coupled with the complexity of the codec as well as the content being decoded. Hence, video content adaptation is extensively considered as an application-layer solution to reduce the decoding complexity and thereby the associated energy consumption. In this context, the second contribution in this thesis introduces a decoding complexity-aware video encoding algorithm for HEVC using a novel d...
The energy consumption of consumer electronic (CE) devices during media playback is inexorably linked to the computational complexity of decoding compressed video. Reducing a CE device's energy consumption is therefore becoming ever more challenging with increasing video resolutions and the complexity of video coding algorithms. To this end, this paper proposes a framework that alters the video bit stream to reduce the decoding complexity while simultaneously limiting the impact on coding efficiency. In this context, this paper first performs an analysis to determine the tradeoff between decoding complexity, video quality, and bit rate with respect to a reference decoder implementation on a general-purpose processor architecture. Thereafter, a novel generic decoding complexity-aware video coding algorithm is proposed to generate decoding complexity-rate-distortion optimized High Efficiency Video Coding (HEVC) bit streams. The experimental results reveal that the bit streams generated by the proposed algorithm achieve 29.43% and 13.22% decoding complexity reductions for a similar video quality with minimal coding efficiency impact compared to the state-of-the-art approaches when applied to the HM16.0 and openHEVC decoder implementations, respectively. In addition, analysis of the energy consumption behavior for the same scenarios reveals up to 20% energy consumption reductions while achieving a similar video quality to that of HM16.0-encoded HEVC bit streams.
ISBN:
(Print) 9781424414369
Interactive navigation in image-based scenes requires random access to the compressed reference image data. When using state-of-the-art block-based hybrid video coding techniques, the degree of inter- and intra-block dependencies introduced during compression has an impact on the effort required to access reference image data and therefore limits the response time of interactive applications. In this work, a theoretical model for the decoding complexity of compressed image-based scene representations is presented and evaluated. Results show the validity of the model. Additionally, results for decoding-complexity-constrained rate-distortion optimization (RDC) using our model show the benefit of incorporating the computational power of client devices into the compression process.
In this paper, we propose a new class of nonbinary polar codes, where the symbol-level polarization is achieved by using a 2×2 q-ary matrix [1 0; β 1] as the kernel. Unlike the bit-level code construction, some partially-frozen symbols exist, where the frozen bits in these symbols can be used as active check bits to facilitate the decoding. The encoder/decoder of the proposed codes has a similar structure to that of the original binary polar codes, admitting an easily configurable and flexible implementation, which is an obvious advantage over the existing nonbinary polar codes based on Reed-Solomon (RS) codes. A low-complexity decoding method is also introduced, in which only the more competitive symbols are considered rather than all q symbols in the finite field. To support high spectral efficiency, we also present, in addition to the single-level coded modulation scheme with a field-matched modulation order, a mixed multilevel coded modulation scheme with arbitrary modulation order to trade off latency against performance. Simulation results show that our proposed nonbinary polar codes exhibit comparable performance with the RS4-based polar codes and outperform binary polar codes with low decoding latency, suggesting a potential application for future ultra-reliable and low-latency communications (URLLC).
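The symbol-level polar transform induced by the stated kernel can be sketched recursively; for simplicity the sketch below works over a prime field GF(q) (mod-q arithmetic), whereas the paper's construction uses general finite fields, and frozen-symbol selection, decoding, and modulation mapping are not modeled.

```python
# Sketch of the symbol-level polar transform with the 2x2 q-ary kernel
# G = [[1, 0], [beta, 1]], i.e. (u1, u2) -> (u1 + beta*u2, u2).
# Arithmetic is over a prime field GF(q) (mod q) for simplicity.

def polar_transform(u, beta, q):
    """Apply the n-fold Kronecker power of [[1,0],[beta,1]] to u (len 2^n)."""
    if len(u) == 1:
        return u[:]
    half = len(u) // 2
    a, b = u[:half], u[half:]
    # u * (G (x) G^(n-1)) = ((a + beta*b) * G^(n-1), b * G^(n-1))
    combined = [(x + beta * y) % q for x, y in zip(a, b)]
    return polar_transform(combined, beta, q) + polar_transform(b, beta, q)

# With beta = 1 and q = 2 this reduces to binary polar encoding (in one
# common convention, without the bit-reversal permutation).
print(polar_transform([1, 0, 1, 1], beta=1, q=2))  # → [1, 1, 0, 1]
print(polar_transform([1, 3], beta=2, q=5))        # → [2, 3]
```

Because each butterfly stage uses the same kernel shape as binary polar codes, the encoder/decoder structure carries over directly, which is the implementation advantage the abstract highlights.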