ISBN: (Print) 9781479934324
We propose a practical FEC code rate decision scheme based on a joint source-channel distortion model. Conventional FEC code rate decision schemes using analytical source-coding and channel-induced distortion models are usually complex, and typically require a model-parameter training process with potentially high computational complexity and implementation cost. Since the proposed joint model is expressed in a simple closed form and has a small number of scene-dependent model parameters, a video sender using the model can be implemented easily. Simulations show that the proposed method accurately estimates the optimal FEC code rate with low computational complexity.
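As a concrete sketch of how a closed-form rate decision can work, the snippet below picks the code rate k/n of an erasure-correcting block code so as to minimize a toy joint distortion: a source term that shrinks with the source rate plus a weighted residual-loss term. The binomial loss model, the parameters a and b, and the function names are illustrative assumptions, not the paper's actual model.

```python
import math

def residual_loss(p, n, k):
    """Probability an (n, k) erasure block is unrecoverable: more than
    n - k of the n packets are lost (i.i.d. loss model, an assumption)."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i)
               for i in range(n - k + 1, n + 1))

def select_fec_rate(total_rate, p, n=32, a=1.0, b=10.0):
    """Pick k (source packets per block of n) minimizing a toy
    closed-form joint distortion: source term a / (source rate)
    plus channel term b * residual loss."""
    best_k = min(range(1, n + 1),
                 key=lambda k: a / (total_rate * k / n)
                               + b * residual_loss(p, n, k))
    return best_k, best_k / n
```

Raising the loss probability pushes the minimizer toward a lower code rate (more parity), which is the qualitative behavior a rate-decision scheme must reproduce.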
ISBN: (Print) 9781604234916
We consider a communication system in which the outputs of a Markov source are encoded and decoded in real time by a finite-memory receiver, and the distortion measure does not tolerate delays. The objective is to choose designs, i.e., real-time encoding, decoding, and memory update strategies, that minimize a total expected distortion measure. This is a dynamic team problem with non-classical information structure [7]. We use the structural results of [4] to develop a sequential decomposition for the finite and infinite horizon problems. Thus, we obtain a systematic methodology for determining jointly optimal encoding, decoding, and memory update strategies for real-time point-to-point communication.
In this paper, we present a novel algorithm for dynamic quantization in distributed Wyner-Ziv video coding. In contrast with previous work where the quantization parameter is fixed and a feedback channel is used, our proposed technique relies on theoretical calculations to jointly determine the number of quantization levels along with a suitable compression rate for each video frame. It employs a cross-layer approach that dynamically allocates unequal transmission rates for different users by taking into account the amount of motion in the captured video scenes on one hand and the transmission conditions for each sensor on the other. The application of this algorithm in a wireless video sensor network shows a significant improvement in the system performance when compared to a traditional system that allocates equal channel resources with a fixed quantization parameter.
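As a toy illustration of unequal rate allocation across users, the classic greedy bit-allocation rule below gives more quantization bits to sources with higher activity (modeled as variance). The variance-based model and the function name are assumptions for illustration, not the paper's cross-layer algorithm, which also accounts for per-sensor channel conditions.

```python
def allocate_bits(variances, total_bits):
    """Greedy bit allocation: repeatedly give one more bit to the user
    whose distortion term sigma^2 * 2^(-2b) drops the most, so
    high-motion (high-variance) scenes end up with more rate."""
    bits = [0] * len(variances)
    for _ in range(total_bits):
        gains = [v * (2.0 ** (-2 * b) - 2.0 ** (-2 * (b + 1)))
                 for v, b in zip(variances, bits)]
        bits[gains.index(max(gains))] += 1
    return bits
```

With variances [4.0, 1.0] and a budget of 3 bits, the first user (four times the activity) receives two of the three bits.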
In this thesis, different aspects of lattice-based precoding and decoding for the transmission of digital and analog data over MIMO fading channels are investigated:

1) Lattice-based precoding in MIMO broadcast systems: A new viewpoint for adopting lattice reduction in communication over MIMO broadcast channels is introduced. Lattice basis reduction helps us to reduce the average transmitted energy by modifying the region which includes the constellation points. The new viewpoint helps us to generalize the idea of lattice-reduction-aided precoding to the case of unequal-rate transmission, and to obtain analytic results for the asymptotic behavior of the symbol error rate for lattice-reduction-aided precoding and the perturbation technique. Also, the outage probability for both cases of fixed-rate users and fixed sum rate is analyzed. It is shown that the lattice-reduction-aided method, using the LLL algorithm, achieves the optimum asymptotic slope of the symbol error rate (called the precoding diversity).

2) Lattice-based decoding in MIMO multiaccess systems and MIMO point-to-point systems: Diversity order and the diversity-multiplexing tradeoff are two important measures of the performance of communication systems over MIMO fading channels. For MIMO multiaccess systems (with single-antenna transmitters) or MIMO point-to-point systems with the V-BLAST transmission scheme, it is proved that lattice-reduction-aided decoding achieves the maximum receive diversity (which is equal to the number of receive antennas). Also, it is proved that naive lattice decoding (which discards the out-of-region decoded points) achieves the maximum diversity in V-BLAST systems. On the other hand, the inherent drawbacks of naive lattice decoding for general MIMO fading systems are investigated. It is shown that using naive lattice decoding for MIMO systems has considerable deficiencies in terms of the diversity-multiplexing tradeoff. Unlike the case of maximum-likelihood decod...
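The LLL basis reduction at the heart of the lattice-reduction-aided schemes above can be sketched compactly. This is the textbook floating-point variant, with the Gram-Schmidt data recomputed after every update (inefficient but fine for small MIMO dimensions); it is an illustrative sketch, not the thesis's implementation.

```python
def lll_reduce(basis, delta=0.75):
    """Textbook LLL reduction of a list of basis row vectors:
    size-reduce each vector against earlier ones, then enforce the
    Lovasz condition, swapping and backtracking when it fails."""
    B = [[float(x) for x in b] for b in basis]
    n = len(B)

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def gso():
        # Gram-Schmidt orthogonalization of the current basis.
        Bs, mu = [], [[0.0] * n for _ in range(n)]
        for i in range(n):
            v = B[i][:]
            for j in range(i):
                mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
                v = [vi - mu[i][j] * bj for vi, bj in zip(v, Bs[j])]
            Bs.append(v)
        return Bs, mu

    k = 1
    while k < n:
        _, mu = gso()
        for j in range(k - 1, -1, -1):       # size reduction
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
                _, mu = gso()
        Bs, mu = gso()
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                            # Lovasz condition holds
        else:
            B[k], B[k - 1] = B[k - 1], B[k]   # swap and backtrack
            k = max(k - 1, 1)
    return B
```

A reduction-aided receiver rounds the received point in the reduced (nearly orthogonal) basis instead of the original one, which is what yields the diversity results quoted above.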
This paper proposes a new wireless video communication scheme to achieve high-efficiency video transmission over noisy channels. It exploits the idea of model division multiple access (MDMA) and extracts common semant...
ISBN: (Print) 9781509027118
Channel-optimized Index Assignment (IA) of source codewords is a simple but effective approach to improving the error resilience of communication systems. Although IA is a type of joint source-channel coding (JSCC), it does not interfere with the source codec design. So, in addition to being usable when designing new systems, it can also be applied to existing systems. In the past, several methods and algorithms were developed for the IA problem. Among these, the Simulated Annealing (SA) algorithm is the most effective and popular, with fast convergence, wide use in optimization problems in general and the IA problem in particular, and a history of continuous improvement. In this paper, we focus on the MSA algorithm, one of the latest improved versions of SA for the IA problem, and improve it by using the mechanism of the Tabu Search (TS) algorithm to avoid repeated searches. We also propose a modified Tabu-list structure for the new algorithm, in order to extend the search space and enhance performance. The effectiveness of the proposed method is confirmed through experiments.
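The plain SA baseline that such work refines can be sketched as follows: minimize the expected channel-induced distortion over index permutations by proposing random index swaps and accepting worse moves with Boltzmann probability. The binary symmetric channel model, the uniform source, and all parameter values are illustrative assumptions; the MSA and Tabu-list refinements of the paper are not shown.

```python
import math
import random

def bsc_prob(i, j, b, eps):
    """Probability a BSC with crossover eps turns b-bit index i into j."""
    d = bin(i ^ j).count("1")
    return eps ** d * (1 - eps) ** (b - d)

def distortion(sigma, code, eps):
    """Expected squared error when codevector k is sent on channel index
    sigma[k] (uniform source assumed)."""
    n = len(code)
    b = (n - 1).bit_length()
    return sum(bsc_prob(sigma[k], sigma[m], b, eps) * (code[k] - code[m]) ** 2
               for k in range(n) for m in range(n)) / n

def sa_index_assignment(code, eps=0.05, steps=2000, t0=1.0, alpha=0.995, seed=1):
    """Plain simulated annealing over index permutations: propose a swap,
    keep it if it lowers distortion or passes the Metropolis test."""
    rng = random.Random(seed)
    sigma = list(range(len(code)))
    d = distortion(sigma, code, eps)
    best, best_d, t = sigma[:], d, t0
    for _ in range(steps):
        i, j = rng.sample(range(len(code)), 2)
        sigma[i], sigma[j] = sigma[j], sigma[i]
        nd = distortion(sigma, code, eps)
        if nd < d or rng.random() < math.exp((d - nd) / t):
            d = nd
            if d < best_d:
                best, best_d = sigma[:], d
        else:
            sigma[i], sigma[j] = sigma[j], sigma[i]  # undo the swap
        t *= alpha
    return best, best_d
```

A Tabu list, as in the paper's improvement, would additionally forbid recently tried swaps so the search does not revisit the same permutations.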
In this thesis, bit-by-bit power allocation to minimize the mean-squared error (MSE) distortion of a basic communication system is studied. This communication system consists of a quantizer, possibly a channel encoder, and a Binary Phase Shift Keying (BPSK) modulator. In the quantizer, natural binary mapping is used. First, the case with no channel coding is considered. In the uncoded case, hard-decision decoding is done at the receiver. Errors that occur in the more significant information bits contribute more to the distortion than those in less significant bits. For the uncoded case, the optimum power profile for each bit is determined analytically and through computer-based optimization methods such as differential evolution. For low signal-to-noise ratio (SNR), the less significant bits are allocated negligible power compared to the more significant bits. For high SNRs, the optimum bit-by-bit power allocation gives a constant MSE gain in dB over uniform power allocation. Second, the coded case is considered. Linear block codes such as the (3,2), (4,3), and (5,4) single parity check codes and the (7,4) Hamming code are used, and soft-decision decoding is done at the receiver. Approximate expressions for the MSE are used to find a near-optimum power profile for the coded case; the optimization is again done with differential evolution. For a simple code like the (7,4) Hamming code, simulations show that up to 3 dB of MSE gain can be obtained by changing the power allocation between the information and parity bits. A systematic method to find the power profile for linear block codes is also introduced, given knowledge of the input-output weight enumerating function of the code. The information bits share one power level, the parity bits share another, and the two levels can differ.
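The uncoded setup above lends itself to a short numerical sketch: approximate the channel-induced MSE by treating bit flips as independent, each weighted by the squared step size its position controls, and brute-force the power split under an average-power constraint. The independence approximation, the 3-bit grid search (standing in for differential evolution), and all names are assumptions for illustration.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def channel_mse(powers, snr):
    """Approximate channel-induced MSE for an uncoded natural-binary
    quantizer over BPSK/AWGN: a flip of bit i (MSB first) moves the
    reconstruction by 2^(b-1-i) quantizer steps, flips treated as
    independent (a first-order approximation)."""
    b = len(powers)
    step = 1.0 / 2 ** b
    return sum(q_func(math.sqrt(2.0 * p * snr)) * (2 ** (b - 1 - i) * step) ** 2
               for i, p in enumerate(powers))

def best_power_split(snr, steps=60):
    """Brute-force the 3-bit power split on a simplex grid with total
    power 3 (so uniform allocation is [1, 1, 1])."""
    best, best_mse = None, float("inf")
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            powers = [3.0 * i / steps, 3.0 * j / steps,
                      3.0 * (steps - i - j) / steps]
            mse = channel_mse(powers, snr)
            if mse < best_mse:
                best, best_mse = powers, mse
    return best, best_mse
```

At low SNR the search indeed starves the least significant bit and beats the uniform profile, matching the thesis's qualitative finding.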
We propose a forward error correction scheme for asynchronous sampling via level crossing (LC) sampling and time encoding, where the dominant errors consist of pulse deletions and insertions, and where encoding is required to take place in an instantaneous fashion. For LC sampling the presented scheme consists of a combination of an outer systematic convolutional code, an embedded inner marker code, and power-efficient frequency-shift keying modulation at the sensor node. Decoding is first obtained via a maximum a-posteriori (MAP) decoder for the inner marker code, which achieves synchronization for the insertion and deletion channel, followed by MAP decoding for the outer convolutional code. Besides investigating the rate trade-off between marker and convolutional codes, we also show that residual redundancy in the asynchronously sampled source signal can be successfully exploited in combination with redundancy only from a marker code. This provides a low complexity alternative for deletion and insertion error correction compared to using explicit redundancy. For time encoding, only the pulse timing is of relevance at the receiver, and the outer channel code is replaced by a quantizer to represent the relative position of the pulse timing. Numerical simulations show that LC sampling outperforms time encoding in the low to moderate signal-to-noise ratio regime by a large margin. (c) 2020 Elsevier B.V. All rights reserved.
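The level-crossing front end that drives this scheme is easy to picture in code: an event is emitted whenever the signal crosses one of a fixed set of levels between consecutive samples. Only the sampler is sketched here, under assumed names; the marker and convolutional coding stages of the paper are not modeled.

```python
def lc_sample(signal, levels):
    """Level-crossing (LC) sampler: return (sample index, level) events,
    one for each level the signal strictly crosses between consecutive
    samples."""
    events = []
    for n in range(1, len(signal)):
        a, b = signal[n - 1], signal[n]
        for lv in levels:
            if (a - lv) * (b - lv) < 0:  # opposite sides of the level
                events.append((n, lv))
    return events
```

The asynchronous, data-dependent event stream this produces is exactly why insertions and deletions, rather than substitutions, dominate the channel errors the marker code must handle.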
In this paper, we describe a method for fast determination of distortion-based optimal unequal error protection (UEP) of bitstreams generated by embedded image coders and transmitted over memoryless noisy channels. The UEP problem is reduced to the more general problem of finding a path in a graph, where each path represents a possible protection policy and the objective is to select the path inducing minimal distortion. The problem is combinatorially complex and excludes a brute-force approach. The solution is provided by applying heuristic information from the problem domain to reduce search complexity. In particular, we use the A* graph search procedure suggested by Hart et al., well known in the field of artificial intelligence, to avoid exhaustive search. Numerical results show that this technique outperforms the method presented by Hamzaoui et al. in terms of mean-square error (MSE) distortion and computational complexity. After testing our solution using analytical models of the operational distortion curves proposed by Charfi et al., we implement a transmission architecture that, using the actual distortion values generated by a real embedded coder, computes the optimal protection policy for the considered image, protects the packets, and transmits them over a channel.
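The search procedure of Hart et al. can be stated generically: always expand the open node with the smallest f = g + h, where g is the cost so far and h is a heuristic estimate of the cost to the goal. With an admissible heuristic it returns a minimum-cost path without exhaustive enumeration, which is the property the UEP method exploits. This is a textbook sketch, not the paper's implementation; graph, heuristic, and names below are illustrative.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: neighbors(node) yields (next_node, edge_cost) pairs,
    h(node) is a heuristic lower bound on the remaining cost.  Returns
    (total cost, path) for a cheapest path, or None if unreachable."""
    heap = [(h(start), 0.0, start, [start])]  # (f, g, node, path)
    closed = set()
    while heap:
        _, g, node, path = heapq.heappop(heap)
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for nxt, cost in neighbors(node):
            if nxt not in closed:
                heapq.heappush(
                    heap, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None
```

In the UEP setting, nodes would correspond to partial protection policies, edge costs to incremental distortion, and the heuristic to a lower bound on the distortion of completing the policy.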
In order to prove a key result for their development (Lemma 2), Taubman and Thie need the assumption that the upper boundary of the convex hull of the channel-coding probability-redundancy characteristic is sufficiently dense. Since a floor value for the density level at which the claim holds is not specified, it is not clear whether their lemma applies to practical situations. In this correspondence, we show that the constraint of sufficient density can be removed, and thus we validate the conclusion of the lemma for any scenario encountered in practice.