Hundreds of thousands of echoes are collected in nuclear magnetic resonance (NMR) logging. To obtain formation information such as porosity, permeability, fluid type, fluid saturation, and pore size distribution, these NMR data must be inverted. The large volume of gathered echo data is generally compressed before inversion to reduce the computational cost. This paper puts forward a new NMR echo data compression method based on the principle of principal component analysis (PCA). To minimize information loss, the original echo data are compressed by retaining the components that contribute the most information about the formation characteristics and eliminating those that contribute little or are redundant. One-dimensional and two-dimensional NMR echo data were simulated and then compressed using the PCA method. The echo data before and after PCA compression were inverted, and the inversion results of the compressed and uncompressed data were compared. The results show that the PCA method can compress NMR echo data without losing much information, even at a high compression ratio.
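As a rough illustration of the general idea (a sketch under assumed echo spacings, T2 values, and noise levels, not the authors' exact pipeline), the following Python snippet compresses simulated echo trains by projecting them onto the leading principal components and reports the resulting compression ratio and reconstruction error:

import numpy as np

rng = np.random.default_rng(0)

# Simulate noisy multi-exponential echo trains (rows = measurements); the
# echo spacing, T2 range, and noise level below are illustrative assumptions.
n_echoes, n_trains = 500, 200
t = np.arange(1, n_echoes + 1) * 0.6e-3                  # assumed echo times (s)
T2 = rng.uniform(5e-3, 0.5, size=n_trains)               # assumed T2 relaxation times
echoes = np.exp(-t[None, :] / T2[:, None]) + 0.02 * rng.standard_normal((n_trains, n_echoes))

# PCA via SVD of the mean-centred data: keep the components carrying most variance.
mean = echoes.mean(axis=0)
U, s, Vt = np.linalg.svd(echoes - mean, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(var_ratio), 0.999)) + 1    # retain 99.9% of variance

# Compressed representation: k scores per train instead of n_echoes samples.
scores = (echoes - mean) @ Vt[:k].T
reconstructed = scores @ Vt[:k] + mean

print(f"compression ratio ~ {n_echoes / k:.1f}:1, "
      f"max reconstruction error {np.abs(reconstructed - echoes).max():.4f}")

The number of retained components acts as the compression-ratio knob: the larger the discarded tail of the singular-value spectrum, the higher the compression and the more information is lost.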
With the increase of radar signal bandwidth and the application of RF direct sampling technology, wideband radars face growing pressure in data transmission, storage, and processing. Based on the relationship between the linear frequency modulation (LFM) signal and the LFM-stepped signal, the high-resolution range profile (HRRP) synthesis method for LFM-stepped signals can be applied to LFM signals. Combining this HRRP synthesis method with compressed sensing theory, this study proposes a new compression method for wideband radar echoes of the LFM signal type based on random segment selection, and analyses the constraint conditions of the proposed data compression method. Finally, the validity and practicability of the proposed method are verified from two aspects: simulation experiments and actual data processing.
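The following sketch conveys the spirit of recovering a sparse range profile from randomly selected echo segments; the dechirped-signal model, segment sizes, and the orthogonal matching pursuit recovery below are illustrative assumptions rather than the paper's algorithm or its constraint analysis:

import numpy as np

rng = np.random.default_rng(1)

# A dechirped LFM echo from a few point scatterers is approximately a sum of
# sinusoids, i.e. sparse in the frequency (range-profile) domain.
N = 512
n = np.arange(N)
true_bins = [40, 41, 200, 350]                          # assumed scatterer positions
x = sum(np.exp(2j * np.pi * k * n / N) for k in true_bins)
x += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# "Random segment selection": keep a few randomly placed contiguous blocks.
seg_len, n_seg = 16, 8
starts = rng.choice(N - seg_len, size=n_seg, replace=False)
idx = np.unique(np.concatenate([np.arange(s, s + seg_len) for s in starts]))
y = x[idx]                                              # compressed measurements

# Partial inverse-DFT dictionary restricted to the kept samples.
A = np.exp(2j * np.pi * np.outer(idx, np.arange(N)) / N) / np.sqrt(N)

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse vector."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    out = np.zeros(A.shape[1], dtype=complex)
    out[support] = coef
    return out

profile = omp(A, y, k=6)
print("strongest recovered bins:", sorted(np.argsort(np.abs(profile))[-4:]))
print(f"kept {idx.size}/{N} samples ({idx.size / N:.0%})")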
Discrete-time stochastic processes generating elements of either a finite set (alphabet) or a real-line interval are considered. Problems of estimating limiting (or stationary) probabilities and densities are addressed, as well as classification and prediction problems. We show that universal coding (or data compression) methods can be used to solve these problems.
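One common way to turn a universal code into an estimator is to compare code lengths with and without a candidate continuation; the toy sketch below uses zlib as a stand-in universal compressor (an assumption for illustration only, with coarse byte-granular code lengths) to produce a crude conditional distribution over the next symbol:

import zlib

def code_length(data: bytes) -> int:
    """Length in bytes of a universal (Lempel-Ziv style) encoding of the data."""
    return len(zlib.compress(data, level=9))

def predict_next(history: bytes, alphabet: bytes) -> dict:
    """Weight each candidate symbol by 2^(-increase in code length), then normalise.
    This mirrors the idea that good compressors induce good conditional
    probability estimates; zlib here is only an illustrative proxy."""
    base = code_length(history)
    weights = {bytes([a]): 2.0 ** -(code_length(history + bytes([a])) - base)
               for a in alphabet}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

# Example: estimate the conditional distribution after a periodic binary history.
history = b"ab" * 200
print(predict_next(history, b"ab"))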
Array operations are useful in many scientific codes. In recent years, applications such as geological analysis and medical image processing have been built on array operations over three-dimensional ("3D") sparse arrays. Due to the huge computation time, it is necessary to compress 3D sparse arrays and to use parallel computing technologies to speed up sparse array operations. How to compress sparse arrays efficiently is therefore an important task for practical applications. Hence, in this paper, two strategies, inter-task and intra-task parallelization (abbreviated "ETP" and "RTP"), are presented for compressing 3D sparse arrays. Each strategy was designed and implemented on Intel Xeon and Intel Xeon Phi. From the experimental results, the ETP strategy achieves speedup ratios of 17.5 and 18.2 on the Intel Xeon E5-2670 v2 and the Intel Xeon Phi SE10X, respectively, while the RTP strategy achieves a speedup ratio of 4.5 on both platforms.
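A minimal sketch of the kind of slice-parallel compression such strategies target (an assumed coordinate-list format and a Python thread pool standing in for the paper's ETP/RTP implementations on Xeon and Xeon Phi):

import numpy as np
from concurrent.futures import ThreadPoolExecutor

def compress_slice(args):
    """Compress one k-slice of a 3D array into (i, j) indices plus values."""
    k, plane = args
    i, j = np.nonzero(plane)
    return k, np.column_stack([i, j]).astype(np.int32), plane[i, j]

def compress_3d_sparse(arr, workers=4):
    """Slice-parallel compression of a 3D sparse array into a per-slice format."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(compress_slice, enumerate(arr)))
    return {k: (ij, vals) for k, ij, vals in results}

# Example: a 128^3 array with roughly 1% nonzeros.
rng = np.random.default_rng(2)
dense = np.where(rng.random((128, 128, 128)) < 0.01,
                 rng.random((128, 128, 128)), 0.0)
compressed = compress_3d_sparse(dense)
nnz = sum(v.size for _, v in compressed.values())
print(f"nonzeros kept: {nnz}, density {nnz / dense.size:.2%}")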
A fan data compression method is presented that tripled laser printer speed for Bode and simulation plots with many points. This reduced printer delays without the expense of a faster laser printer, and it saved computer time as well. The authors describe their problem, solution, and conclusions. They give the fan algorithm and present its performance for several applications. They include a pseudocode implementation.
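A hedged re-implementation of the fan idea is sketched below: from the last kept point a fan of admissible slopes is narrowed as points stream in, and a point is emitted only when the fan collapses. The tolerance handling follows the classic formulation rather than the authors' published pseudocode, and assumes strictly increasing x-values.

import numpy as np

def fan_compress(xs, ys, eps):
    """Fan-style compression of a sampled curve: keep only the points needed so
    that straight segments between kept points stay within +/-eps of every
    dropped sample.  Assumes xs is strictly increasing."""
    kept = [0]
    anchor = 0
    lo, hi = float("-inf"), float("inf")       # current fan of admissible slopes
    for i in range(1, len(xs)):
        dx = xs[i] - xs[anchor]
        s_lo = (ys[i] - eps - ys[anchor]) / dx
        s_hi = (ys[i] + eps - ys[anchor]) / dx
        new_lo, new_hi = max(lo, s_lo), min(hi, s_hi)
        if new_lo > new_hi:                    # fan collapsed: keep previous point
            anchor = i - 1
            kept.append(anchor)
            dx = xs[i] - xs[anchor]
            lo = (ys[i] - eps - ys[anchor]) / dx
            hi = (ys[i] + eps - ys[anchor]) / dx
        else:
            lo, hi = new_lo, new_hi
    if kept[-1] != len(xs) - 1:
        kept.append(len(xs) - 1)
    return kept

# Example: a dense plot trace reduced to the few points the printer really needs.
xs = np.linspace(0, 10, 2000)
ys = np.sin(xs)
idx = fan_compress(xs, ys, eps=0.01)
print(f"kept {len(idx)} of {len(xs)} points")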
Array operations are useful in a large number of important scientific codes, such as molecular dynamics, finite-element methods, and climate modeling. Providing an efficient data distribution for irregular problems is a challenging task. Multi-dimensional (MD) sparse array operations can be used in atmosphere and ocean sciences, image processing, etc., and have been extensively investigated. In our previous work, a data distribution scheme, Encoding-Decoding (ED), was proposed for two-dimensional (2D) sparse arrays. In this paper, ED is first extended to MD sparse arrays. Then, the performance of ED is compared with that of Send Followed Compress (SFC) and Compress Followed Send (CFS). Both theoretical analysis and experimental tests were conducted, showing that ED is superior to SFC and CFS for all evaluated criteria.
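Interpreting only the names of the two baseline schemes (the ED scheme itself is not reproduced here), the sketch below contrasts the transfer volumes implied by Compress Followed Send and Send Followed Compress for a single 2D sparse block, with communication simulated by simple byte counts rather than real message passing:

import numpy as np
from scipy import sparse

# A 2D sparse block that one process owns and must distribute to another.
dense_block = sparse.random(2000, 2000, density=0.01, random_state=5).toarray()

# CFS (Compress Followed Send): convert to a compressed format locally,
# then transfer only the compressed arrays.
csr = sparse.csr_matrix(dense_block)
cfs_bytes = csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes

# SFC (Send Followed Compress): transfer the dense block as-is and let the
# receiver compress it after arrival.
sfc_bytes = dense_block.nbytes

print(f"CFS transfers {cfs_bytes / 1e6:.2f} MB, SFC transfers {sfc_bytes / 1e6:.2f} MB")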
Quantitative analysis of noisy electron spectrum images requires a robust estimation of the underlying background signal. We demonstrate how modern data compression methods can be used to obtain an analysis result less affected by statistical errors, or to speed up the background estimation. In particular, we demonstrate how a multilinear singular value decomposition (MLSVD) can be used to enhance elemental maps obtained from a complex sample measured with electron energy loss spectroscopy. Furthermore, the use of vertex component analysis (VCA) for a basis-vector-centered estimation of the background is demonstrated. The resulting benefits in terms of model accuracy and computational cost are studied.
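As a simplified illustration (a plain 2D truncated SVD on a synthetic spectrum image rather than a full multilinear SVD, with assumed background, edge, and noise models), the snippet below shows how low-rank compression suppresses statistical noise before any background fitting:

import numpy as np

rng = np.random.default_rng(3)

# Synthetic "spectrum image": ny*nx pixels, each with an n_e-channel spectrum
# made of a smooth power-law background plus a weak edge, with Poisson noise.
ny, nx, n_e = 32, 32, 256
energy = np.linspace(100.0, 900.0, n_e)
background = 5e8 * energy[None, :] ** -2.5
edge = 0.002 * (energy[None, :] > 530)                    # assumed weak ionisation edge
maps = rng.random((ny * nx, 1))                            # spatially varying weight
clean = background * (1 + maps) + edge * maps * background[:, 0:1]
noisy = rng.poisson(clean).astype(float)

# Low-rank compression: unfold to (pixels x channels) and keep a few components.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
rank = 3                                                   # assumed model order
denoised = (U[:, :rank] * s[:rank]) @ Vt[:rank]

err_noisy = np.linalg.norm(noisy - clean) / np.linalg.norm(clean)
err_denoised = np.linalg.norm(denoised - clean) / np.linalg.norm(clean)
print(f"relative error: raw {err_noisy:.3f} -> rank-{rank} {err_denoised:.3f}")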
The crucial residues of the hBaxBH3 peptide for interaction with hBcl-B, an anti-apoptotic protein, were identified using molecular docking studies on the polypeptides and temperature-specific molecular dynamics simulations performed for the protein-peptide complex under near-physiological conditions (pH 7.0, 1 atm pressure and 0.1 M NaCl). The data from both methods were examined with a 'strong residue contacts' filter strategy; the analyses of the former and latter methods identified 10 (Q52, K57, S60, L63, K64, R65, G67, D68, D71 & S72) and 3 (S60, E61 & K64) crucial residues of the hBaxBH3 peptide for interacting with the protein, respectively. We have herein demonstrated that BH3 chemical mimetics screened using the pharmacophoric residues of hBaxBH3 obtained from the 'peptidodynmimetic method' were superior in terms of ligand efficiencies, bioavailability and pharmacokinetic properties vis-a-vis small-molecule BH3 mimetics retrieved using the conventional 'peptidomimetic method'. The unique advantages of the 'peptidodynmimetic method' for identifying efficient BH3 mimetics that modulate the interfaces (composed of a large number of amino acids) between other anti-apoptotic proteins and BH3-only peptides are also discussed in detail.
In this paper, a multilead ECG data compression method is presented. First, a linear transform is applied to the standard ECG lead signals, which are highly correlated with each other; in this way a set of uncorrelated transform-domain signals is obtained. Then, the resulting transform-domain signals are compressed using various coding methods, including multirate signal processing and transform-domain coding techniques.
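A hedged sketch of the first stage on synthetic data: a Karhunen-Loeve (eigenvector) transform computed from the inter-lead covariance decorrelates the leads, after which the low-energy transform signals are dropped and the rest coarsely quantised as a stand-in for the coding stage. The synthetic lead model is an assumption standing in for real standard-lead recordings.

import numpy as np

rng = np.random.default_rng(4)

# Synthetic 8-lead record: every lead is a mixture of two underlying waveforms
# plus noise, so the leads are strongly correlated.
n_leads, n_samples = 8, 4096
t = np.arange(n_samples) / 360.0
sources = np.vstack([np.sin(2 * np.pi * 1.2 * t),
                     np.sign(np.sin(2 * np.pi * 1.2 * t + 0.3))])
mix = rng.standard_normal((n_leads, 2))
leads = mix @ sources + 0.01 * rng.standard_normal((n_leads, n_samples))

# Inter-lead decorrelating (KL) transform from the lead covariance matrix.
cov = np.cov(leads)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
klt = eigvecs[:, order].T                      # rows = principal directions
transform_domain = klt @ leads                 # nearly uncorrelated signals

# Crude second stage: drop the low-energy transform signals, quantise the rest.
keep = 2
coded = np.round(transform_domain[:keep] * 128) / 128
reconstructed = klt[:keep].T @ coded

err = np.linalg.norm(reconstructed - leads) / np.linalg.norm(leads)
print(f"kept {keep}/{n_leads} transform signals, relative error {err:.4f}")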
ISBN (digital): 9780784480823
ISBN (print): 9780784480823
Laser scanning has been widely used in as-built 3D modeling for construction and infrastructure management. While laser scanners can produce dense point clouds that capture cm-level features in minutes, efficient 3D point cloud processing, storage, and visualization remain challenging due to gigabytes of 3D imagery data. A practical solution is to compress the point cloud while keeping enough geometric information (e.g., edges, corners). Existing compression methods use uniform sampling, which reduces data density everywhere without considering that different parts of a scene may deserve different data densities depending on their geometric complexity. However, limited studies quantitatively assess the deviations between pre- and post-compression point clouds. This research examines a laser-scanning data compression method that automatically compresses a point cloud with subsampling rates that vary with the geometric complexity of each part of the data, subject to a user-specified compression ratio (the ratio of the removed point cloud to the original point cloud; the segment with the highest value is the most compressed). Compared with qualitative methods, this method quantifies the relationship between preserved geometric details, subsampling rates, and geometric complexities. First, the developed approach takes segmented point clouds as inputs (either manual or automatic segmentation results) that contain data segments of significantly different geometric complexities, depending on the shapes underlying the segments. Next, surface smoothness and curvature are calculated to quantify the geometric complexity of each point cloud segment. Then, for a given total compression ratio, the approach assigns sub-compression ratios to the segments according to their geometric complexities (e.g., complicated segments are assigned sub-compression ratios that keep more data, and vice versa). Finally, the approach evaluates the compression results by assessing deviations between the original and compressed point clouds.
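The sketch below illustrates the core idea of complexity-aware subsampling: a local-PCA "surface variation" score stands in for the smoothness/curvature measures, and kept-point budgets are allocated to segments in proportion to that score. The segment inputs, the complexity proxy, and the allocation rule are assumptions, not the paper's exact procedure.

import numpy as np

def segment_complexity(points, k=20):
    """Mean 'surface variation' (smallest local PCA eigenvalue ratio) of a segment."""
    rng = np.random.default_rng(0)
    sample = points[rng.choice(len(points), size=min(200, len(points)), replace=False)]
    scores = []
    for p in sample:
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        w = np.linalg.eigvalsh(np.cov(nbrs.T))
        scores.append(w[0] / max(w.sum(), 1e-12))   # ~0 on planes, larger on edges/corners
    return float(np.mean(scores))

def compress_segments(segments, total_keep_ratio=0.2):
    """Give geometrically complex segments a larger share of the kept points."""
    comp = np.array([segment_complexity(s) for s in segments])
    weights = comp / comp.sum()
    sizes = np.array([len(s) for s in segments])
    budget = int(total_keep_ratio * sizes.sum())
    keep = np.minimum(sizes, np.maximum(1, (weights * budget).astype(int)))
    rng = np.random.default_rng(1)
    return [s[rng.choice(len(s), size=m, replace=False)] for s, m in zip(segments, keep)]

# Example: a nearly flat plane segment versus a sharp ridge segment.
rng = np.random.default_rng(2)
plane = np.c_[rng.random((5000, 2)), 0.001 * rng.standard_normal(5000)]
x = rng.random(5000)
ridge = np.c_[x, rng.random(5000), np.abs(x - 0.5)]
kept = compress_segments([plane, ridge], total_keep_ratio=0.2)
print("points kept per segment:", [len(s) for s in kept])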