A new algorithm is proposed to compress animated 3D mesh models. First, an input mesh model is partitioned into segments, and each segment is motion compensated from its counterpart at the previous time instance. Then, the motion residuals are effectively compressed using a transform coding method. It is shown that the proposed algorithm yields a much higher compression ratio than the MPEG-4 codec.
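The motion-compensation step described in this abstract can be sketched as follows; the translation-only motion model and all names are illustrative assumptions, not the paper's actual method:

```python
def segment_motion_residuals(prev_frame, cur_frame, segments):
    """Sketch of per-segment motion compensation: estimate each segment's
    motion from the previous frame (here a simple mean translation, an
    assumed simplification) and return the per-vertex residuals that the
    transform coder would then compress. Frames are lists of (x, y, z)
    tuples; segments are lists of vertex indices."""
    residuals = []
    for seg in segments:
        n = len(seg)
        # Translation-only motion estimate: mean displacement over the segment.
        d = [sum(cur_frame[i][k] - prev_frame[i][k] for i in seg) / n
             for k in range(3)]
        # Residual = actual displacement minus the segment's estimated motion.
        for i in seg:
            residuals.append(tuple(cur_frame[i][k] - prev_frame[i][k] - d[k]
                                   for k in range(3)))
    return residuals
```

When a segment moves rigidly, the residuals vanish and only the motion parameters need to be coded, which is where the compression gain comes from.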
A number of High Dynamic Range (HDR) video compression algorithms proposed to date have either been developed in isolation or only partially compared with each other. Previous evaluations were conducted using quality assessment error metrics, which for the most part were developed for qualitative assessment of Low Dynamic Range (LDR) videos. This paper presents a comprehensive objective and subjective evaluation conducted with six published HDR video compression algorithms. The objective evaluation was undertaken on a large set of 39 HDR video sequences using seven numerical error metrics, namely: PSNR, logPSNR, puPSNR, puSSIM, Weber MSE, HDR-VDP and HDR-VQM. The subjective evaluation involved six short-listed sequences and two ranking-based subjective experiments with hidden reference at two different output bit rates, with 32 participants each, who were tasked with ranking distorted HDR video footage against an uncompressed version of the same footage. Results suggest a strong correlation between the objective and subjective evaluations. Also, non-backward-compatible compression algorithms appear to perform better at lower output bit rates than backward-compatible algorithms across the settings used in this evaluation.
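One of the metrics listed above, logPSNR, is commonly computed as PSNR over log-luminance; the sketch below shows that reading. The exact definition used in the evaluation may differ, and `peak` (the display peak luminance) is an assumed parameter:

```python
import math

def log_psnr(ref, test, peak):
    """One plausible reading of logPSNR: PSNR computed on log10 luminance,
    with the peak level also taken in the log domain. `ref` and `test` are
    flat lists of positive luminance values."""
    lr = [math.log10(v) for v in ref]
    lt = [math.log10(v) for v in test]
    mse = sum((a - b) ** 2 for a, b in zip(lr, lt)) / len(lr)
    if mse == 0.0:
        return float('inf')  # identical signals: infinite PSNR
    return 10.0 * math.log10(math.log10(peak) ** 2 / mse)
```

Working in the log domain weights errors relatively rather than absolutely, which better matches perception over the very wide luminance ranges of HDR footage than plain PSNR.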
Simple object- or pixel-based facies models use facies proportions as the constraining input parameter to be honored in the output model. The resultant interconnectivity of the facies bodies is an unconstrained output property of the modelling, and if the objects being modelled are geometrically representative in three dimensions, commonly available methods will produce well-connected facies when the model net:gross ratio exceeds about 30%. Geological processes have more degrees of freedom, and facies in high net:gross natural systems often have much lower connectivity than can be achieved by object-based or common implementations of pixel-based forward modelling. The compression method decouples facies proportion from facies connectivity in the modelling process and allows systems to be generated in which both are defined independently at input. The two-step method first generates a model with the correct connectivity but incorrect facies proportions using a conventional method, and then applies a geometrical transform to scale the model to the correct facies proportions while retaining the connectivity of the original model. The method, and its underlying parameters, are described and illustrated using examples representative of low- and high-connectivity geological systems.
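The decoupling idea can be illustrated in one dimension: a transform that changes facies proportion without altering the stacking order of the facies bodies. This is a crude analogue of the paper's geometrical transform; the function name and the replication rule are illustrative only:

```python
def rescale_proportion(column, k):
    """1-D analogue of the two-step compression method: start from a column
    whose facies pattern (hence connectivity) is already as desired, then
    replicate each background cell (0) k times so the net:gross ratio drops
    while the stacking order of facies cells (1) is untouched."""
    out = []
    for c in column:
        out.extend([c] * (k if c == 0 else 1))
    return out
```

For example, `[1, 0, 1]` with `k=2` becomes `[1, 0, 0, 1]`: the facies proportion falls from 2/3 to 1/2, yet the two facies cells remain separated by exactly one background interval, so their (dis)connectivity pattern is preserved.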
The nondestructive characteristics of γ-photon imaging technology give it attractive potential in industry. However, in industrial detection with a large detection range and high resolution, the iterative method, the most widely used image reconstruction algorithm, faces the challenge of an overly large system matrix, and current compression algorithms that use the geometric symmetry of the positron emission tomography (PET) system suffer from complex pixel division and recovery modes. Therefore, this study proposes a lossless compression and linear recovery algorithm for the system matrix based on polar adaptive pixels (LCLR-PAP). Based on the structure of the detection ring and the rotation of the circle, the detection field of view (FOV) is designed as a cylinder, and each circular slice is divided into several sectors. Pixels are adaptively divided within each sector to realize lossless compression of the system matrix by construction, and on this basis an angular change of pixels can be converted into a matrix transformation to achieve linear recovery. A partial pixel partition is optimized to compensate for the unevenness of pixel size at the center of the adaptive image. Experiments show that the LCLR-PAP algorithm provides an efficient solution to the large-scale system-matrix compression and recovery problem: through a simple adaptive pixel division that exploits matrix sparsity and axial symmetry, the system matrix can be compressed to less than 1/100,000 of its original size while realizing lossless compression and fast linear recovery.
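The sector-based polar pixelation underlying this scheme can be sketched minimally; the paper's adaptive division within each sector is more elaborate, and all names below are illustrative:

```python
import math

def polar_pixel_index(x, y, n_sectors, n_rings, fov_radius):
    """Map a point in a circular FOV slice to a (sector, ring) polar pixel.
    Minimal sketch of sector-based polar pixelation; returns None outside
    the FOV."""
    r = math.hypot(x, y)
    if r >= fov_radius:
        return None
    # Sector index from the angle, ring index from the radius.
    sector = int((math.atan2(y, x) % (2 * math.pi)) / (2 * math.pi / n_sectors))
    ring = int(r / (fov_radius / n_rings))
    return sector, ring
```

Rotating a point by one sector width simply increments the sector index modulo `n_sectors`; it is this property that lets an angular change be expressed as a simple (permutation-like) matrix transformation on the system matrix, so only one sector's worth of matrix entries needs storing.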
Modern microprocessors have used microcode as a way to implement legacy (rarely used) instructions, add new ISA features, and enable patches to an existing design. As more features are added to processors (e.g., protection and virtualization), the area and power costs associated with the microcode memory have increased significantly. A recent Intel internal design targeted at low power and a small footprint estimated the cost of the microcode ROM to approach 20% of the total die area (and associated power consumption). Moreover, with the adoption of multicore architectures, the impact of microcode memory size on chip area has become relevant, forcing industry to revisit the microcode size problem. A solution to this problem is to store the microcode in compressed form and decompress it at runtime. This paper describes techniques for microcode compression that achieve significant area and power savings, and proposes a streamlined architecture that enables high throughput within the constraints of a high-performance CPU. The paper presents results for microcode compression on several commercial CPU designs that demonstrate compression ratios ranging from 50 to 62%. In addition, it proposes techniques that enable the reuse of (pre-validated) hardware building blocks, which can considerably reduce the cost and design time of the microcode decompression engine in real-world designs.
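A common baseline for this kind of ROM compression is a dictionary (two-level) scheme, sketched below. This is the generic technique, not necessarily the paper's exact method:

```python
import math

def compress_microcode(rom_words):
    """Dictionary-based microcode compression sketch: unique micro-words go
    into a small dictionary ROM and the main ROM stores only indices into it.
    Returns (dictionary, indices)."""
    dictionary, lookup, indices = [], {}, []
    for w in rom_words:
        if w not in lookup:
            lookup[w] = len(dictionary)
            dictionary.append(w)
        indices.append(lookup[w])
    return dictionary, indices

def compressed_bits(rom_words, word_bits):
    """Total storage after compression: dictionary ROM plus index ROM."""
    dictionary, indices = compress_microcode(rom_words)
    index_bits = max(1, math.ceil(math.log2(len(dictionary))))
    return len(dictionary) * word_bits + len(indices) * index_bits
```

With 64 words drawn from only 4 distinct 32-bit patterns, storage drops from 2048 to 256 bits; real microcode is far less repetitive, which is consistent with the 50-62% ratios the paper reports.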
Similarity of sequences is a key mathematical notion for classification and phylogenetic studies in biology. It is currently primarily handled using alignments. However, alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, the USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov complexity, and universality is its most striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness are tested on various data sets, yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM, and mostly at a qualitative level; no comparison among UCD, NCD and CD is available; and no comparison of USM with existing methods, alignment-based or not, seems to be available. Results: We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to molecular biology. This offers the first systematic and quantitative experimental assessment of this methodology, naturally complementing the many theoretical and preliminary experimental results available. Moreover, we compare the USM methodology with both alignment-based and alignment-free methods. Our experiments fall into two sets. The first, performed via ROC (Receiver Operating Characteristic) analysis,
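The NCD approximation mentioned above has a standard closed form, which can be computed with any real compressor standing in for Kolmogorov complexity (zlib here; the study tests 25):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Dissimilarity:
        NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    where C(s) is the compressed size of s. Similar strings share structure,
    so their concatenation compresses well and the score stays near 0;
    unrelated strings push it toward 1."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Because the score depends only on compressed sizes, it needs no alignment and applies to any byte sequence, which is exactly the scalability argument the abstract makes.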
ISBN (print): 9781479919819
In order to improve the compression ratio, most improved compression methods require more memory and CPU resources, so these improvements are not suitable for embedded systems with limited resources, especially for vehicle communication systems. To communicate in real time at lower cost, the large volumes of short data are compressed before transmission. This paper discusses several original lossless compression methods, including their principles, features, advantages and disadvantages. Experiments on the front vehicle system show that LZ77, PPM and BWT are the most suitable for automobile communication data, offering better compression ratios, simpler coding, and lower CPU and memory costs.
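LZ77, one of the methods the paper recommends, is attractive for embedded targets because its only state is a bounded search window. A toy version with (offset, length, next_char) triples, purely illustrative:

```python
def lz77_compress(data, window=255):
    """Toy LZ77: greedily find the longest match in a sliding window and
    emit (offset, length, next_char) triples."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            # Matches may run into the lookahead (classic LZ77 overlap).
            while i + l < len(data) - 1 and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_off, best_len = i - j, l
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(triples):
    """Rebuild the string by copying `length` chars from `offset` back."""
    out = []
    for off, length, ch in triples:
        for _ in range(length):
            out.append(out[-off])
        out.append(ch)
    return ''.join(out)
```

The `window` parameter directly bounds memory use, which is the knob that makes the method fit resource-limited vehicle controllers.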
Background: With the rapid emergence of RNA databases and newly identified non-coding RNAs, an efficient compression algorithm for RNA sequence and structural information is needed for the storage and analysis of such data. Although several algorithms for compressing DNA sequences have been proposed, none of them is suitable for compressing RNA sequences together with their secondary structures. This kind of compression not only facilitates the maintenance of RNA data, but also supplies a novel way to measure the informational complexity of RNA structural data, raising the possibility of studying the relationship between the functional activities of RNA structures and their complexities, as well as various structural properties of RNA, based on compression. Results: RNACompress employs an efficient grammar-based model to compress RNA sequences and their secondary structures. The main goals of this algorithm are twofold: (1) present a robust and effective way to compress RNA structural data; (2) design a suitable model to represent RNA secondary structure and to derive the informational complexity of the structural data from compression. Our extensive tests have shown that RNACompress achieves a universally better compression ratio than other sequence-specific or general text compression algorithms, such as GenCompress, WinRAR and gzip. Moreover, a test of the activities of distinct GTP-binding RNAs (aptamers) against their structural complexity shows that our defined informational complexity can describe how complexity varies with activity. These results lead to an objective means of comparing the functional properties of heteropolymers from the information perspective. Conclusion: A universal algorithm for the compression of RNA secondary structure, as well as the evaluation of its informational complexity, is discussed in this paper. We have developed RNACompress as a useful tool for academic users. Exten...
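The core idea of compressing sequence and secondary structure simultaneously can be sketched by interleaving base and dot-bracket symbols into one joint stream. RNACompress itself uses a grammar-based model; zlib below is only a stand-in to show the joint-stream idea, and all names are illustrative:

```python
import zlib

def pack_rna(seq, struct):
    """Jointly encode an RNA sequence and its dot-bracket secondary
    structure: pair each base with its structure symbol, then compress the
    combined stream so correlations between the two are exploited."""
    assert len(seq) == len(struct)
    combined = ''.join(b + s for b, s in zip(seq, struct)).encode()
    return zlib.compress(combined)

def unpack_rna(blob):
    """Recover (sequence, structure) from the compressed joint stream."""
    combined = zlib.decompress(blob).decode()
    return combined[0::2], combined[1::2]
```

The compressed size of such a joint stream is also the quantity one would use as a crude informational-complexity measure of the structure, in the spirit of the abstract's second goal.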
JPEG-like algorithms are proposed for compressing video information. These algorithms are based on the ART neural network, which realizes the vector quantization operation. Results of modelling the proposed algorithms in the Matlab environment testify to the possibility of using them for image compression. These algorithms are shown to perform well on images with recurrent segments.
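The vector quantization operation that the ART network realizes can be sketched independently of the network itself; learning the codebook (ART's role) is omitted and the names are illustrative:

```python
def vq_encode(blocks, codebook):
    """Vector quantization: replace each image block by the index of its
    nearest codebook vector under squared-error distance."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: d2(b, codebook[k]))
            for b in blocks]

def vq_decode(indices, codebook):
    """Reconstruction: substitute each index with its codebook vector."""
    return [codebook[k] for k in indices]
```

Compression comes from transmitting small indices instead of full blocks; images with recurrent segments compress especially well because a few codebook vectors cover most blocks, matching the abstract's observation.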
An efficient preprocessing technique of arranging an electroencephalogram (EEG) signal in matrix form is proposed for real-time lossless EEG compression. The compression algorithm consists of an integer lifting wavelet transform as the decorrelator with set partitioning in hierarchical trees as the source coder. Experimental results show that the preprocessed EEG signal gave improved rate-distortion performance, especially at low bit rates, and less encoding delay compared to the conventional one-dimensional compression scheme.
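The two ingredients named here, matrix arrangement and an integer lifting wavelet, can be sketched with the simplest member of that wavelet family (a Haar lifting step); the exact arrangement and filter in the paper may differ:

```python
def eeg_to_matrix(signal, width):
    """Preprocessing sketch: reshape a 1-D EEG stream into rows of `width`
    samples so a 2-D decorrelating transform can also exploit inter-row
    redundancy."""
    assert len(signal) % width == 0
    return [signal[i:i + width] for i in range(0, len(signal), width)]

def haar_lift(row):
    """One level of the integer-lifting Haar transform: integer-only and
    exactly invertible (row[i] = s - (d >> 1); row[i+1] = row[i] + d),
    as lossless compression requires."""
    s, d = [], []
    for i in range(0, len(row), 2):
        di = row[i + 1] - row[i]   # predict step: detail coefficient
        si = row[i] + (di >> 1)    # update step: integer approximation
        s.append(si)
        d.append(di)
    return s, d
```

The approximation coefficients `s` feed the next decomposition level, and the resulting coefficient hierarchy is what a set-partitioning coder such as SPIHT then encodes.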