Lossless compression of electroencephalograph (EEG) data is of great interest to the biomedical research community. Lossless compression through a neural network is achieved by using the net as a predictor and coding the prediction error in a lossless manner. The predictive neural network uses a certain number of past samples to predict the present one, and in most cases the differences between the actual and predicted values are zero or close to zero. Entropy coding techniques such as Huffman and arithmetic coding are used in the second stage to achieve a high degree of compression. Predictive coding schemes based on single-layer and multi-layer perceptron networks and recurrent network models are investigated in this paper. Compression results are reported for EEGs recorded under various clinical conditions. These results are compared with those obtained by using linear predictors such as FIR and lattice filters.
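The two-stage pipeline this abstract describes (predict each sample from its past, then entropy-code the integer residuals) can be illustrated with a minimal sketch. A least-squares linear predictor stands in here for the paper's neural predictors, and first-order entropy approximates what the Huffman/arithmetic stage could achieve; function names and parameters are illustrative, not the authors'.

```python
import numpy as np

def prediction_residuals(signal, order=4):
    """Predict each sample from `order` past samples and return the
    integer residuals that the second-stage entropy coder would
    compress.  A least-squares linear predictor stands in for the
    perceptron/recurrent predictors studied in the paper."""
    signal = np.asarray(signal)
    X = np.array([signal[i - order:i] for i in range(order, len(signal))])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = np.rint(X @ coeffs)          # round so the scheme stays lossless
    return (y - pred).astype(int)

def empirical_entropy(values):
    """First-order entropy in bits/sample: a lower bound on the rate
    an ideal entropy coder would achieve on these symbols."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

For a strongly correlated signal the residual alphabet concentrates near zero, so its entropy falls well below that of the raw samples.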
Security and QoS are two main issues for a successful wide deployment of multicast services. For instance, in a multicast streaming application, a receiver would require a data origin authentication service as well as a quality adaptation technique for the received stream. Signature propagation and layered multicast are efficient solutions satisfying these two requirements. In this paper we investigate the use of signature propagation to ensure the data origin authentication service. We then propose a set of novel data origin authentication techniques for layered streaming video. In addition to data origin authentication, the proposed techniques offer continuous non-repudiation of the origin and data integrity. These techniques take advantage of the pre-established layered structure of the encoded video data to reduce the overhead and improve overall verification in lossy network environments. We evaluate the performance of the proposed techniques through extensive simulations using the NS2 simulator.
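The core idea behind signature propagation is to amortize one digital signature over many packets: each packet carries the hash of its successor, so signing only the head digest authenticates the whole stream. The sketch below shows that hash-chain construction (one simple form of signature amortization; the paper's layered schemes are more elaborate, and the actual signing operation, e.g. RSA, is abstracted away).

```python
import hashlib

def build_hash_chain(packets):
    """Append each packet's successor digest so that one signature on
    the head digest authenticates the entire stream.  Returns the
    augmented packets and the head digest to be signed (signing with a
    real scheme such as RSA is omitted in this sketch)."""
    digest = b""
    augmented = []
    for payload in reversed(packets):
        blob = payload + digest            # payload || H(next blob)
        digest = hashlib.sha256(blob).digest()
        augmented.append(blob)
    augmented.reverse()
    return augmented, digest

def verify_chain(augmented, signed_digest):
    """Receiver side: check each blob against the digest embedded in
    its predecessor (the last 32 bytes of the previous blob)."""
    expected = signed_digest
    for blob in augmented:
        if hashlib.sha256(blob).digest() != expected:
            return False
        expected = blob[-32:] if len(blob) >= 32 else b""
    return True
```

Any tampered packet breaks the chain from that point on, which is why the paper's techniques add redundancy to tolerate packet loss in lossy networks.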
A computationally efficient block matching algorithm is presented to perform motion estimation of image sequences. The algorithm evaluates an objective function for all neighbouring blocks and stops when no further improvement can be achieved. The complexity of the algorithm is reduced significantly, as the objective function is calculated from the projections of the blocks along the horizontal and the vertical axis. Furthermore, the relationship between projections of the neighbouring blocks is utilized, so as to alleviate the need for fully calculating the projection vectors for each candidate block. The proposed algorithm is compared against the full search (FS), two-dimensional logarithmic search (2D LS), and block-based gradient descent search (BBGDS), in terms of complexity and compression performance. Experimental results show that the proposed algorithm exhibits good performance at a significantly reduced computational complexity.
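The key trick is replacing the full 2-D block difference with differences of 1-D row and column projections, which cuts the per-candidate cost from O(N²) to O(N) comparisons. A minimal sketch, assuming a BBGDS-style neighbour descent as the stopping rule (the paper's incremental projection update between neighbouring candidates is omitted):

```python
import numpy as np

def projection_cost(block_a, block_b):
    """Match cost from horizontal and vertical projections (column and
    row sums) instead of the full 2-D sum of absolute differences."""
    ha, va = block_a.sum(axis=0), block_a.sum(axis=1)
    hb, vb = block_b.sum(axis=0), block_b.sum(axis=1)
    return np.abs(ha - hb).sum() + np.abs(va - vb).sum()

def gradient_descent_search(ref, cur, top, left, size=8, radius=7):
    """Descend over the four neighbouring candidates, stopping when no
    neighbour improves the projection cost (BBGDS-style stopping)."""
    block = cur[top:top+size, left:left+size]
    dy = dx = 0
    best = projection_cost(block, ref[top:top+size, left:left+size])
    improved = True
    while improved:
        improved = False
        for ny, nx in ((dy-1, dx), (dy+1, dx), (dy, dx-1), (dy, dx+1)):
            if abs(ny) > radius or abs(nx) > radius:
                continue
            y, x = top + ny, left + nx
            if y < 0 or x < 0 or y+size > ref.shape[0] or x+size > ref.shape[1]:
                continue
            c = projection_cost(block, ref[y:y+size, x:x+size])
            if c < best:
                best, dy, dx, improved = c, ny, nx, True
    return (dy, dx), best
```

Because projections collapse each axis before the absolute value is taken, the projection cost lower-bounds a multiple of the true SAD; it is cheaper but can occasionally rank candidates differently.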
This paper addresses the question of how to extract the nonlinearities in speech, with the prime purpose of facilitating coding of the residual signal in residual-excited coders. The short-term prediction of speech in speech coders is extensively based on linear models, e.g. the linear predictive coding (LPC) technique, which is one of the most basic elements in modern speech coders. This technique does not allow extraction of nonlinear dependencies. If nonlinearities are absent from speech the technique is sufficient, but if the speech contains nonlinearities the technique is inadequate. The authors give evidence for nonlinearities in speech and propose nonlinear short-term predictors that can substitute for the LPC technique. The technique, called nonlinear predictive coding, is shown to be superior to the LPC technique. Two different nonlinear predictors are presented. The first is based on a second-order Volterra filter, and the second is based on a time-delay neural network. The latter is shown to be the more suitable for speech coding applications.
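A second-order Volterra predictor extends the linear taps of LPC with all pairwise products of the past samples, so it can capture quadratic dependencies a linear predictor misses. A minimal sketch of that expansion with a least-squares fit (illustrative of the Volterra option mentioned above, not the authors' exact filter or fitting procedure):

```python
import numpy as np

def volterra_features(x, order=2):
    """Feature matrix for a 2nd-order Volterra predictor: the `order`
    past samples (the linear/LPC part) plus all their pairwise
    products (the quadratic Volterra kernel)."""
    rows, targets = [], []
    for i in range(order, len(x)):
        past = x[i-order:i]
        quad = [past[j] * past[k] for j in range(order) for k in range(j, order)]
        rows.append(np.concatenate([past, quad]))
        targets.append(x[i])
    return np.array(rows), np.array(targets)

def fit_predictor(X, y):
    """Least-squares fit of the predictor taps."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w
```

On a signal with a genuine quadratic term, the Volterra fit leaves a smaller residual than the purely linear taps, which is the effect the paper exploits to shrink the residual handed to the excitation coder.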
ISBN:
(Print) 9781424412730; 1424412730; 1424412749
Linear predictors for lossless data compression should ideally minimize the entropy of prediction errors. In current practice, however, predictors of least-squares type are used instead. In this paper, we formulate and solve the linear minimum-entropy predictor design problem as one of convex or quasiconvex programming. The proposed minimum-entropy design algorithms are derived from the well-known fact that the prediction errors of most signals obey a generalized Gaussian distribution. Empirical results and analysis are presented to demonstrate the superior performance of the linear minimum-entropy predictor over the traditional least-squares counterpart for lossless coding.
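For a Laplacian residual model (a generalized Gaussian with shape parameter 1), minimizing the sum of absolute errors also minimizes the residual entropy, so the L1 design is one concrete special case of this convex formulation. A sketch using iteratively reweighted least squares for the L1 objective (illustrative; the paper's own algorithms and the general shape-parameter case are not reproduced here):

```python
import numpy as np

def min_entropy_predictor(X, y, iters=50, eps=1e-8):
    """IRLS for the L1 objective: under a Laplacian residual model,
    minimizing sum|e| also minimizes residual entropy.  `X` holds past
    samples per row, `y` the samples to predict."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)      # least-squares start
    for _ in range(iters):
        weights = 1.0 / (np.abs(y - X @ w) + eps)  # reweight toward L1
        A = X.T @ (weights[:, None] * X)
        w = np.linalg.solve(A, X.T @ (weights * y))
    return w
```

With heavy-tailed residuals the L1 design yields a noticeably smaller absolute-error sum than the least-squares taps, mirroring the entropy advantage the paper reports.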
ISBN:
(Print) 9781424413973
We revisit the classic problem of developing a spatial correlation model for natural images and videos by proposing a conditional correlation model for relatively nearby pixels that is dependent upon five parameters. The conditioning is on local texture and the optimal parameters can be calculated for a specific image or video with a mean absolute error (MAE) usually smaller than 5%. We use this conditional correlation model to calculate the conditional rate distortion function when universal side information is available at both the encoder and the decoder. We demonstrate that this side information, when available, can save as much as 1 bit per pixel for selected videos at low distortions. We further study the scenario when the video frame is processed in macroblocks (MBs) or smaller blocks and calculate the rate distortion bound when the texture information is coded losslessly and optimal predictive coding is utilized to partially incorporate the correlation between the neighboring MBs or blocks.
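The value of side information here can be made concrete with the Gaussian rate-distortion function: conditioning on a neighbour with correlation rho shrinks the effective source variance by a factor of (1 - rho²), and the rate saving follows directly. A small numeric sketch under a Gaussian-source assumption (the paper's conditional model and bounds are more refined than this):

```python
import numpy as np

def gaussian_rate(var, dist):
    """Rate-distortion function of a Gaussian source, in bits/sample:
    R(D) = 0.5 * log2(var / D) for D < var, else 0."""
    return 0.5 * np.log2(var / dist) if dist < var else 0.0

def side_info_saving(rho):
    """Bits/sample saved when a neighbour with correlation `rho` is
    available at both encoder and decoder: conditioning reduces the
    source variance to var * (1 - rho**2)."""
    return -0.5 * np.log2(1.0 - rho**2)
```

For highly correlated nearby pixels (rho near 0.97) the saving approaches 2 bits/pixel, consistent in order of magnitude with the roughly 1 bit/pixel gain reported above for selected videos.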
ISBN:
(Print) 0780362977
We propose a new approach to context-based predictive coding of video, where the interframe or intraframe coding mode is adaptively selected on a pixel basis. We perform the coding mode selection using only the previously reconstructed samples, which are also available at the decoder, so that no overhead information on the coding mode selection needs to be transmitted to the decoder. The proposed coder also provides a lossless concatenated-coding property when applied to multiple generations of video coding, since the same coding-mode information is available when the sequence is encoded a second time. The proposed coding mode selection enables the coder to easily incorporate error modeling and context modeling by performing the intraframe coding with one of the existing image coders such as the JPEG-LS standard. Experiments show that the proposed approach in conjunction with the JPEG-LS standard provides significant improvements in compression efficiency.
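The reason no mode bits are needed is that the decision uses only samples both sides have already reconstructed: the decoder reruns the identical rule and gets the identical answer. A toy illustration of such a decision rule, comparing how well inter (previous frame) and intra (left neighbour) prediction did on the causal neighbours (the rule itself is a hypothetical stand-in, not the paper's context model):

```python
import numpy as np

def select_mode(prev_frame, recon, y, x):
    """Choose inter vs intra for pixel (y, x) from already-reconstructed
    causal neighbours only, so the decoder can repeat the identical
    decision without any transmitted mode bits."""
    inter_err = intra_err = 0
    for cy, cx in ((y, x-1), (y-1, x), (y-1, x-1)):   # causal neighbours
        if cy < 0 or cx < 0:
            continue
        # How well did temporal (inter) prediction do on this neighbour?
        inter_err += abs(int(recon[cy, cx]) - int(prev_frame[cy, cx]))
        # How well did spatial (intra, left-neighbour) prediction do?
        left = int(recon[cy, cx-1]) if cx > 0 else 128
        intra_err += abs(int(recon[cy, cx]) - left)
    return "inter" if inter_err <= intra_err else "intra"
```

In a static region temporal prediction wins; where the content has changed but is spatially smooth, the spatial predictor takes over, all without side information.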
ISBN:
(Print) 9781467302180
An image compression algorithm suitable for focal plane integration and its hardware implementation are presented. In this approach an image is progressively decomposed into images of lower resolution. The low resolution images are then used as the predictors of the higher resolution images. The prediction residuals are entropy encoded and compressed. This compression approach can provide lossless or lossy compression and the resulting bitstream is a fully embedded code. A switched-capacitor circuit is proposed to implement the required operations. A prototype has been implemented in a 0.5 μm CMOS process. Simulation and measurement results validating the proposed approach are reported.
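The progressive decomposition can be sketched as an averaging pyramid: each coarser level predicts the finer one via pixel replication, and only the integer residuals are entropy coded. Because the residuals store the exact difference, the scheme is lossless by construction (and truncating residual levels gives the lossy/embedded variants); block size and averaging are assumptions of this sketch, not the chip's exact circuit behaviour.

```python
import numpy as np

def pyramid_residuals(img, levels=3):
    """Average 2x2 blocks to form each coarser level; the coarse image,
    upsampled by pixel replication, predicts the finer one, and only
    the integer residuals need entropy coding."""
    residuals, cur = [], np.asarray(img, dtype=int)
    for _ in range(levels):
        h, w = cur.shape
        low = cur.reshape(h//2, 2, w//2, 2).mean(axis=(1, 3)).round().astype(int)
        pred = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
        residuals.append(cur - pred)    # exact residual -> lossless
        cur = low
    return cur, residuals               # coarse base + residual levels

def reconstruct(base, residuals):
    """Invert the decomposition: upsample, add residual, repeat."""
    cur = base
    for res in reversed(residuals):
        cur = np.repeat(np.repeat(cur, 2, axis=0), 2, axis=1) + res
    return cur
```

Decoding fewer residual levels yields a lower-resolution preview, which is what makes the bitstream embedded.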
Methods for object-based compression and composition of natural and synthetic video content are currently emerging in standards such as MPEG-4 and VRML. This paper describes novel techniques for compression of 2-D triangular mesh geometry and motion, enabling efficient representation and manipulation of video content. Specifically, mesh geometry is compressed by predictive coding of mesh node locations. Mesh node motion vectors are compressed by predictive techniques as well. Preliminary results show that the mesh data can be coded at a fraction of the bits used to code a typical video object.
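Predictive coding of mesh node locations can be illustrated in its simplest form: code each node as an offset from the previously coded node, so that slowly varying geometry produces small, cheaply codable deltas. This is a stand-in for the paper's geometry predictor, with a fixed traversal order assumed known to the decoder.

```python
def delta_encode(nodes):
    """Code each mesh node (x, y) as an offset from the previously
    coded node; small offsets compress well under entropy coding."""
    deltas, px, py = [], 0, 0
    for x, y in nodes:
        deltas.append((x - px, y - py))
        px, py = x, y
    return deltas

def delta_decode(deltas):
    """Recover node locations by accumulating the offsets."""
    nodes, x, y = [], 0, 0
    for dx, dy in deltas:
        x, y = x + dx, y + dy
        nodes.append((x, y))
    return nodes
```

Motion vectors per node can be delta-coded the same way from frame to frame, which is the second prediction the abstract mentions.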
ISBN:
(Digital) 9781728108582
ISBN:
(Print) 9781728108599
Predictive coding, also called text categorization, has been widely used in the legal industry. By leveraging machine learning models such as logistic regression and SVM, the review of documents can be prioritized based on their probability of relevance to the legal case, thus improving review efficiency and cutting costs. In recent years, deep learning models, combined with word embeddings, have shown better performance in predictive coding. However, deep learning models involve many parameters, and it is challenging and time-consuming for legal practitioners to select appropriate settings. Based on experiments on several public legal text datasets, this paper presents preliminary results on how various key parameter settings impact the performance of convolutional neural networks (CNNs).
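The CNN architecture being tuned is the standard TextCNN recipe: convolve filters of a given width over the word-embedding matrix, apply ReLU and max-over-time pooling, and map the pooled features to a relevance probability. Filter count and width are exactly the kind of parameters the experiments vary. A minimal numpy forward pass under those assumptions (a sketch of the architecture, not the authors' implementation):

```python
import numpy as np

def text_cnn_score(embeds, filters, w_out, b_out):
    """Minimal TextCNN forward pass.  Shapes: embeds (seq_len, dim),
    filters (n_filters, width, dim).  Returns P(document relevant)."""
    n_f, width, _ = filters.shape
    seq = embeds.shape[0]
    feats = np.empty(n_f)
    for f in range(n_f):
        # Slide the filter over the token sequence (1-D convolution).
        acts = [np.sum(embeds[t:t+width] * filters[f])
                for t in range(seq - width + 1)]
        feats[f] = max(0.0, max(acts))   # ReLU + max-over-time pooling
    z = feats @ w_out + b_out
    return 1.0 / (1.0 + np.exp(-z))      # logistic output
```

Filter width controls the n-gram span each feature sees, and filter count the capacity of the pooled representation, which is why these settings dominate tuning effort.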