To solve some problems of data storage and transmission related to systems for automatic interpretation of the electrocardiogram (ECG), various classes of reversible and irreversible compression algorithms have been tested on the database produced by CSE. Errors in the reconstructed signal have been evaluated by means of PRD, RMS, and other synthetic parameters, together with visual analysis by a cardiologist. Tracings rebuilt from the compressed signal have been analyzed using the MEANS program for automatic interpretation, and the diagnostic answers, with the related measurements, have been compared with the results obtained on the original signal. Inconsistent results have been reexamined by a cardiologist to determine the artifacts and the type and amount of information lost because of compression. The results define the limits of the different classes of compression methods in relation to the accuracy expected in the diagnosis.
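The abstract does not spell out the error measures it names; below is a minimal sketch of PRD and RMS error, assuming their usual definitions (the exact normalization used by the authors, e.g. with or without mean removal, may differ):

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage root-mean-square difference between original and reconstructed ECG
    (common definition without mean removal; the paper's variant may differ)."""
    err = original - reconstructed
    return 100.0 * np.sqrt(np.sum(err ** 2) / np.sum(original ** 2))

def rms_error(original, reconstructed):
    """Root-mean-square reconstruction error, in the units of the signal (e.g. microvolts)."""
    err = original - reconstructed
    return np.sqrt(np.mean(err ** 2))
```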
Recently, there has been interest in increasing the capacity of storage systems through the use of information-lossless data compression. Several algorithms are being investigated and implemented. One such algorithm is the Lempel-Ziv data compression algorithm. A new data compression algorithm is presented based on the original Lempel-Ziv algorithm, and bounds are derived comparing the improved algorithm's performance with that of the original. In addition, both algorithms are implemented in software and compared with each other as well as with the Lempel-Ziv-Welch data compression algorithm.
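The improved variant described in the abstract is not reproduced here; as a point of reference, a minimal sketch of the classic LZ78 parse that both the original and the improved algorithm build on (the function name and byte-oriented interface are illustrative):

```python
def lz78_encode(data: bytes):
    """Classic LZ78 parse: emit (dictionary index, next symbol) pairs."""
    dictionary = {b"": 0}          # phrase -> index; index 0 is the empty phrase
    output = []
    phrase = b""
    for value in data:
        symbol = bytes([value])
        candidate = phrase + symbol
        if candidate in dictionary:
            phrase = candidate                     # keep extending the current match
        else:
            output.append((dictionary[phrase], symbol))
            dictionary[candidate] = len(dictionary)
            phrase = b""
    if phrase:
        output.append((dictionary[phrase], b""))   # flush a trailing match
    return output
```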
We compare ECG data compression algorithms based on signal entropy for a given mean-square-error (MSE) compression distortion. By defining the distortion in terms of the MSE and assuming the ECG signal to be a Gaussian process we are able to estimate theoretical rate distortion bounds from average ECG power spectra. These rate distortion bounds give estimates of the minimum bits per second (bps) required for storage of ECG data with a given MSE regardless of compression method. From average power spectra of the MIT/BIH arrhythmia database we have estimated rate distortion bounds for ambulatory ECG data, both before and after average beat subtraction. These rate distortion estimates indicate that, regardless of distortion, average beat subtraction reduces the theoretical minimum data rate required for ECG storage by approximately 100 bps. Our estimates also indicate that practical ambulatory recording requires a compression distortion on the order of 11 μV rms. We have compared the performance of common ECG compression algorithms on data from the MIT/BIH database. We sampled and quantized the data to give distortion levels of 2, 5, 8, 11, and 14 μV rms. These results indicate that, when sample rates and quantization levels are chosen for optimal rate distortion performance, minimum data rates can be achieved by average beat subtraction followed by first differencing of the residual signal. Achievable data rates approximate our theoretical estimates at low distortion levels and are within 60 bps at higher distortion levels.
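A hedged sketch of how such a bound can be estimated from an average power spectrum: for a stationary Gaussian source under MSE distortion, the rate-distortion function follows from reverse water-filling over the spectrum. The function and argument names are illustrative, and the spectrum is assumed to be a one-sided PSD sampled on the frequency grid `freqs` (in Hz), so the returned rate is in bits per second:

```python
import numpy as np

def _integrate(values, freqs):
    """Trapezoidal integration over the frequency grid."""
    return float(np.sum(0.5 * (values[:-1] + values[1:]) * np.diff(freqs)))

def gaussian_rate_bound(spectrum, freqs, target_mse, iters=100):
    """Reverse water-filling estimate of the minimum rate needed to code a
    stationary Gaussian source with the given power spectrum at MSE target_mse."""
    lo, hi = 0.0, float(spectrum.max())
    for _ in range(iters):                               # bisect on the water level theta
        theta = 0.5 * (lo + hi)
        distortion = _integrate(np.minimum(theta, spectrum), freqs)
        if distortion < target_mse:
            lo = theta                                   # too little distortion: raise the level
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    # Bands with S(f) <= theta contribute no rate; others contribute (1/2) log2(S(f)/theta).
    return _integrate(0.5 * np.log2(np.maximum(spectrum, theta) / theta), freqs)
```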
We implemented a method for compression of the ambulatory ECG that includes average beat subtraction and first differencing of residual data. Our previous investigations indicated that this method is superior to other compression methods with respect to data rate as a function of mean-squared-error distortion. Based on previous results we selected a sample rate of 100 samples per second and a quantization step size of 35 μV. These selections allow storage of 24 h of two-channel ECG data in 4 Mbytes of memory with a minimum rms distortion. For this sample rate and quantization level, we show that estimation of beat location and quantizer location can significantly affect compression performance. Improved compression resulted when beats were located with a temporal resolution of 5 ms and coarse quantization was performed in the compression loop. For the 24-h MIT/BIH arrhythmia database our compression algorithm coded a single channel of ECG data with an average data rate of 174 bits per second.
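A minimal sketch of the residual coding step described above: the average beat is subtracted from each detected beat, and the residual is first-differenced with the coarse quantizer placed inside the loop so that quantization error does not accumulate. Beat detection, alignment, and entropy coding of the resulting codes are outside this sketch, and the names and array layout are assumptions:

```python
import numpy as np

def encode_beats(ecg, beat_starts, beat_len, step_uv=35.0):
    """Average-beat subtraction followed by first-difference DPCM of the residual,
    with the coarse quantizer (35 uV steps) inside the prediction loop."""
    beats = np.stack([ecg[s:s + beat_len] for s in beat_starts]).astype(float)
    average_beat = beats.mean(axis=0)
    codes = np.empty(beats.shape, dtype=np.int32)
    for b, beat in enumerate(beats):
        residual = beat - average_beat
        recon_prev = 0.0
        for n, x in enumerate(residual):
            q = int(round((x - recon_prev) / step_uv))   # quantized first difference
            codes[b, n] = q
            recon_prev += q * step_uv                    # decoder-matched reconstruction
    return average_beat, codes
```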
It is demonstrated that a variant of the Lempel-Ziv data compression algorithm where the data base is held fixed and is reused to encode successive strings of incoming input symbols is optimal, provided that the source is stationary and satisfies certain conditions (e.g., a finite-order Markov source).
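A rough sketch of the flavor of such a variant, assuming an LZ77-style representation in which each successive input string is coded as the longest match found in the fixed, reused database plus one literal symbol; the triple format and names are illustrative and not the paper's construction:

```python
def encode_with_fixed_database(source: bytes, database: bytes):
    """Code successive strings of the source as (offset, length, next byte) triples
    that point into a fixed database which is never updated during encoding."""
    out, i = [], 0
    while i < len(source):
        best_off, best_len = 0, 0
        for off in range(len(database)):                 # naive longest-match search
            length = 0
            while (i + length < len(source) and off + length < len(database)
                   and database[off + length] == source[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = off, length
        nxt = source[i + best_len] if i + best_len < len(source) else None
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out
```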
A compression technique which preserves edges in compressed pictures is developed. It is desirable to build edge-preservation characteristics into compression methods, since many applications in engineering and vision depend on edge information. In this paper we present a compression algorithm which adapts itself to the local nature of the image. Smooth regions are represented by their averages, and edges are preserved using quad trees. Textured regions are encoded using BTC (block truncation coding) and a modification of BTC using look-up tables. We developed the latter (BTC with look-up tables) so as 1) to improve on the compression ratio of BTC and 2) to leave the visual quality of compression exactly the same as that of BTC. A threshold is applied to the range, i.e., the difference between the maximum and the minimum grey levels in a 4 x 4 pixel quadrant. At the recommended value of the threshold (equal to 18), the quality of the compressed textured regions is very high, the same as that of AMBTC (absolute moment block truncation coding), but the edge-preservation quality is far superior to that of AMBTC. This significant improvement is achieved at compression levels (1.1-1.2 b/pixel) which are better than that of AMBTC (1.63 b/pixel). Compression levels below 0.5-0.8 b/pixel may be achieved. The high quality of edge preservation of this method does not change at these low compression levels or as the threshold value changes. However, a postfilter is needed to improve the blocky appearance in the smooth and textured regions.
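A minimal sketch of the block classification and the AMBTC step it builds on, using the recommended range threshold of 18; the quad-tree handling of edge blocks and the look-up-table refinement are omitted, and the function name and return format are illustrative:

```python
import numpy as np

RANGE_THRESHOLD = 18   # recommended threshold on (max - min) grey level in a 4x4 quadrant

def encode_block(block):
    """Smooth blocks (small grey-level range) keep only their average; textured blocks
    are coded with AMBTC: a bitmap plus two absolute-moment-preserving levels."""
    rng = int(block.max()) - int(block.min())
    if rng < RANGE_THRESHOLD:
        return ("smooth", float(block.mean()))
    mean = block.mean()
    bitmap = block >= mean
    high = float(block[bitmap].mean())               # pixels at or above the block mean
    low = float(block[~bitmap].mean())               # pixels below the block mean
    return ("textured", bitmap, low, high)
```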
A vector quantization compression system is presented which is suitable for use in commercial applications, i.e., efficient enough to encode a wide variety of images and simple enough to decode the images in real time using software (for machine compatibility). A fixed-rate code with unbalanced tree structure is used, and a method of unbalanced tree growing is extended. Simple prediction techniques are applied to improve coded image quality.
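A hedged sketch of fixed-rate encoding with a tree-structured codebook of the kind described: the search descends a (possibly unbalanced) binary tree toward the child whose test codeword is closer to the input vector and emits the leaf's fixed-length index. The class and function names are assumptions, and the tree-growing procedure itself is not shown:

```python
import numpy as np

class VQNode:
    """Node of a (possibly unbalanced) binary VQ tree: internal nodes hold two
    children whose codewords steer the search, leaves hold a fixed-length index."""
    def __init__(self, codeword, index=None, left=None, right=None):
        self.codeword = np.asarray(codeword, dtype=float)
        self.index, self.left, self.right = index, left, right

def encode_vector(root, x):
    """Descend the tree toward the nearer child codeword and return the leaf index."""
    node = root
    x = np.asarray(x, dtype=float)
    while node.index is None:
        d_left = np.sum((x - node.left.codeword) ** 2)
        d_right = np.sum((x - node.right.codeword) ** 2)
        node = node.left if d_left <= d_right else node.right
    return node.index
```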
A new compression scheme is described that aims at improving the fidelity of reconstructed images through the parallel application of both lossless and lossy compression techniques. The purpose of the scheme is to obtain compression ratios higher than those obtained by lossless compression schemes and at the same time produce reconstructed images with better fidelity than those normally obtained with lossy techniques. Tests so far have shown that the integrated lossless/lossy (IL/L) compression scheme consistently improves the fidelity of reconstructed images compared to well-known image compression algorithms. Specifically, the new scheme gives better fidelity (20% to 55% reduction in mean square error) than the DCT algorithm under equal compression ratios.
The problem of efficient image compression through neural networks (NNs) is addressed. Some theoretical results on the application of 2-layer linear NNs to this problem are given. Two more elaborate structures, based on a set of NNs, are further presented; they are shown to be very efficient while remaining computationally rather simple.
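A sketch consistent with the known theory of 2-layer linear networks: trained to reproduce image blocks through a narrow hidden layer, such a network spans the same subspace as the leading principal components, so the sketch below uses an SVD instead of gradient training. The names and the block-based interface are assumptions, not the paper's structures:

```python
import numpy as np

def fit_linear_coder(blocks, hidden_units):
    """Fit a 2-layer linear coder on flattened image blocks (rows of `blocks`);
    equivalent to projecting onto the leading principal components."""
    mean = blocks.mean(axis=0)
    _, _, vt = np.linalg.svd(blocks - mean, full_matrices=False)
    encoder = vt[:hidden_units]          # hidden_units x block_dim
    decoder = encoder.T                  # block_dim x hidden_units
    return mean, encoder, decoder

def compress(block, mean, encoder):
    return encoder @ (block - mean)      # hidden-layer activations are the code

def reconstruct(code, mean, decoder):
    return decoder @ code + mean
```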