We implemented a method for compression of the ambulatory ECG that includes average beat subtraction and first differencing of residual data. Our previous investigations indicated that this method is superior to other compression methods with respect to data rate as a function of mean-squared-error distortion. Based on previous results we selected a sample rate of 100 samples per second and a quantization step size of 35 μV. These selections allow storage of 24 h of two-channel ECG data in 4 Mbytes of memory with minimum rms distortion. For this sample rate and quantization level, we show that estimation of beat location and quantizer location can significantly affect compression performance. Improved compression resulted when beats were located with a temporal resolution of 5 ms and coarse quantization was performed in the compression loop. For the 24-h MIT/BIH arrhythmia database, our compression algorithm coded a single channel of ECG data at an average data rate of 174 bits per second.
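As a rough illustration of the two decorrelation steps named above (average beat subtraction followed by first differencing of the residual), here is a minimal NumPy sketch; the beat detection and alignment, the coarse in-loop quantizer, and the entropy coder of the actual method are omitted, and all function names are our own:

```python
import numpy as np

def compress(beats):
    """Average beat subtraction + first differencing.
    `beats` is a (n_beats, samples_per_beat) array of aligned beats."""
    avg = beats.mean(axis=0)                         # average beat template
    residual = beats - avg                           # average beat subtraction
    diffed = np.diff(residual, axis=1, prepend=0.0)  # first differencing
    return avg, diffed

def decompress(avg, diffed):
    """Invert the chain: cumulative sum undoes the differencing,
    adding the template undoes the subtraction."""
    return np.cumsum(diffed, axis=1) + avg
```

Without a quantizer this chain is lossless, so the decompressor recovers the beats exactly; in the method above the residual is instead coarsely quantized inside the compression loop before entropy coding.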
Progressive encoding of a signal generally involves an estimation step, designed to reduce the entropy of the residual of an observation over the entropy of the observation itself. Oftentimes the conditional distributions of an observation, given already-encoded observations, are well fit within a class of symmetric and unimodal distributions (e.g., the two-sided geometric distributions in images of natural scenes, or symmetric Paretian distributions in models of financial data). It is common practice to choose an estimator that centers, or aligns, the modes of the conditional distributions, since it is common sense that this will minimize the entropy, and hence the coding cost of the residuals. But with the exception of a special case, there has been no rigorous proof. Here we prove that the entropy of an arbitrary mixture of symmetric and unimodal distributions is minimized by aligning the modes. The result generalizes to unimodal and rotation-invariant distributions in R^n. We illustrate the result through some experiments with natural images.
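The claim can be checked empirically in a few lines: quantize samples drawn from a mixture of two symmetric unimodal components (Laplacians here), with and without an estimator that aligns their modes, and compare the empirical entropies. This is only an illustrative sketch of the stated result, not the paper's proof or its natural-image experiments:

```python
import numpy as np

def entropy(samples, step=0.5):
    """Empirical entropy in bits of samples quantized with the given step."""
    bins = np.round(samples / step).astype(int)
    _, counts = np.unique(bins, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(1)
a = rng.laplace(loc=0.0, scale=1.0, size=200_000)  # component with mode at 0
b = rng.laplace(loc=3.0, scale=1.0, size=200_000)  # component with mode at 3

# An estimator that centers both modes at 0 vs. leaving them 3 apart.
aligned = np.concatenate([a, b - 3.0])
misaligned = np.concatenate([a, b])
```

With the modes aligned the mixture collapses onto a single Laplacian, and its quantized entropy comes out measurably lower than that of the misaligned mixture, as the theorem predicts.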
A data compression technique is developed for synthetic aperture radar (SAR) imagery. The technique is based on an SAR image model and is designed to preserve the local statistics in the image by an adaptive variable-rate modification of block truncation coding (BTC). A data rate of approximately 1.6 bits/pixel is achieved with the technique while maintaining image quality and preserving cultural (pointlike) targets. The algorithm requires no large data storage and is computationally simple.
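Classic block truncation coding, which the adaptive scheme above modifies, replaces each block by a one-bit mask plus two reconstruction levels chosen so that the block's sample mean and standard deviation are preserved. A minimal sketch of that base BTC step (not the adaptive variable-rate modification itself):

```python
import numpy as np

def btc_encode(block):
    """Encode one block: bit mask + two levels preserving mean and std."""
    m, s = block.mean(), block.std()
    mask = block > m
    q, n = mask.sum(), block.size
    if q in (0, n):                        # flat block: a single level suffices
        return mask, m, m
    lo = m - s * np.sqrt(q / (n - q))      # level for pixels <= mean
    hi = m + s * np.sqrt((n - q) / q)      # level for pixels > mean
    return mask, lo, hi

def btc_decode(mask, lo, hi):
    return np.where(mask, hi, lo)
```

By construction the decoded block has exactly the same mean and standard deviation as the original, which is the "preserve the local statistics" property the abstract refers to.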
Multi-spectral and hyperspectral image data payloads have large size and may be challenging to download from remote sensors. To alleviate this problem, such images can be effectively compressed using specially designed algorithms. The new CCSDS-123 standard has been developed to address onboard lossless coding of multi-spectral and hyperspectral images. The standard is based on the fast lossless algorithm, which is composed of a causal context-based prediction stage and an entropy-coding stage that utilizes Golomb power-of-two codes. Several parts of each of these two stages have adjustable parameters. CCSDS-123 provides satisfactory performance for a wide set of imagery acquired by various sensors, but end-users of a CCSDS-123 implementation may require assistance to select a suitable combination of parameters for a specific application scenario. To assist end-users, this paper investigates the performance of CCSDS-123 under different parameter combinations and addresses the selection of an adequate combination given a specific sensor. Experimental results suggest that prediction parameters have a greater impact on the compression performance than entropy-coding parameters. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
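The entropy-coding stage mentioned above uses Golomb power-of-two (Rice) codes, which encode a nonnegative integer as a unary-coded quotient followed by k binary remainder bits. A small sketch of that code family (the parameter-selection logic of CCSDS-123 itself is not shown):

```python
def rice_encode(value, k):
    """Golomb power-of-two (Rice) code of a nonnegative integer:
    unary quotient (value >> k), terminating '0', then k remainder bits."""
    q = value >> k
    code = '1' * q + '0'
    if k:
        code += format(value & ((1 << k) - 1), f'0{k}b')
    return code

def rice_decode(bits, k):
    q = bits.index('0')                    # length of the unary prefix
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

For example, `rice_encode(9, 2)` yields `'11001'`: quotient 2 coded as `110`, remainder 1 coded as `01`. The code length grows linearly in the quotient, which is why choosing k to match the residual magnitudes matters for compression performance.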
This paper introduces two novel algorithms for 2-bit adaptive delta modulation, namely 2-bit hybrid adaptive delta modulation and 2-bit optimal adaptive delta modulation. In 2-bit hybrid adaptive delta modulation, the adaptation is performed both at the frame level and the sample level, where the estimated variance is used to determine the initial quantization step size. In the latter algorithm, the estimated variance is used to scale a quantizer codebook optimally designed assuming a Laplacian distribution of the input signal. The algorithms are tested on speech signals and compared to constant factor delta modulation, continuously variable slope delta modulation, and instantaneously adaptive 2-bit delta modulation, showing that the proposed algorithms offer higher performance and a significantly wider dynamic range.
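A toy version of frame-level adaptation in 2-bit delta modulation conveys the idea: estimate the spread of each frame's first differences and use it to scale a fixed 4-level codebook. The codebook levels, spread estimator, and frame length below are illustrative choices of ours, not the paper's optimal Laplacian design:

```python
import numpy as np

LEVELS = np.array([-1.5, -0.5, 0.5, 1.5])  # illustrative 2-bit codebook

def adm2_encode(x, frame=64):
    """2-bit delta modulation; the step is rescaled once per frame by the
    RMS of that frame's first differences (frame-level adaptation)."""
    codes, steps, pred = [], [], 0.0
    for start in range(0, len(x), frame):
        seg = x[start:start + frame]
        step = max(np.sqrt(np.mean(np.diff(seg, prepend=pred) ** 2)), 1e-6)
        steps.append(step)
        for s in seg:
            idx = int(np.argmin(np.abs(s - pred - step * LEVELS)))
            codes.append(idx)                  # 2 bits per sample
            pred += step * LEVELS[idx]
    return codes, steps

def adm2_decode(codes, steps, frame=64):
    out, pred = [], 0.0
    for i, idx in enumerate(codes):
        pred += steps[i // frame] * LEVELS[idx]
        out.append(pred)
    return np.array(out)
```

In a real coder the per-frame step (or the variance it derives from) would itself be quantized and transmitted as side information; sample-level adaptation, as in the hybrid algorithm, would additionally adjust the step inside each frame.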
This paper presents a secure reconfigurable hierarchical hardware architecture at the pixel and region level for smart image sensors to accelerate machine vision applications. The design maintains hierarchical processing that begins at the pixel level. It aims to reduce the computational burden on the sequential processor and to increase the confidentiality of the sensor. We achieve this goal by preprocessing the data in parallel with event-based processing within the sensor and extracting local features, which are then forwarded to an encryption module. After that, an external processor can obtain the encrypted features to complete the vision application. This approach significantly accelerates the vision application by executing the low-level and mid-level image processing tasks in the sensor and simultaneously reducing the data volume at the sensor level. The secure hardware architecture enables the vision application to run in real time with reliability. This hierarchical processing breaks with traditional sequential image processing and introduces parallelism for machine vision applications. We evaluate the design on an FPGA and produce the GDSII file on an ASIC platform at 800 MHz. Simulation results show that the area overhead and power penalty for adding reconfiguration features stay in an acceptable range. In addition, by removing redundant information, 84.01% and 94.31% of dynamic power can be saved at the pixel level and region level, respectively.
Proponents of the predictive processing (PP) framework often claim that one of the framework's significant virtues is its unificatory power. What is supposedly unified are predictive processes in the mind, and these are explained in virtue of a common prediction error-minimisation (PEM) schema. In this paper, I argue against the claim that PP currently converges towards a unified explanation of cognitive processes. Although the notion of PEM systematically relates a set of posits such as 'efficiency' and 'hierarchical coding' into a unified conceptual schema, neither the framework's algorithmic specifications nor its hypotheses about their implementations in the brain are clearly unified. I propose a novel way to understand the fruitfulness of the research program in light of a set of research heuristics that are partly shared with those common to Bayesian reverse engineering. An interesting consequence of this proposal is that pluralism is at least as important as unification to promote the positive development of the predictive mind.
A generalised block-matching motion estimation with variable-size blocks is proposed. The location and size of each block are determined from a quad-tree spatial decomposition algorithm. Experimental results with head-and-shoulders test image sequences show that motion-compensated errors with this method have lower entropy than with conventional fixed- or variable-size block-matching techniques. With an H.261 codec, the proposed method also outperforms the conventional fixed and variable block-size motion estimators in coded-picture quality, both subjectively and in terms of PSNR.
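The idea of quad-tree-driven variable-size block matching can be sketched as follows: run full-search matching on a block and split it into four quadrants whenever the residual error stays above a threshold. The SAD criterion, threshold form, search radius, and minimum block size here are our own illustrative choices, not the paper's:

```python
import numpy as np

def best_match(cur, ref, y, x, size, radius=4):
    """Full-search block matching: motion vector minimizing the SAD."""
    block = cur[y:y + size, x:x + size]
    H, W = ref.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and yy + size <= H and 0 <= xx and xx + size <= W:
                sad = np.abs(block - ref[yy:yy + size, xx:xx + size]).sum()
                if sad < best_sad:
                    best_sad, best = sad, (dy, dx)
    return best, best_sad

def quadtree_me(cur, ref, y, x, size, thresh, min_size=4):
    """Split a block into four quadrants whenever its best per-pixel SAD
    exceeds `thresh`, down to `min_size` -- the quad-tree decomposition."""
    mv, sad = best_match(cur, ref, y, x, size)
    if sad <= thresh * size * size or size <= min_size:
        return [(y, x, size, mv)]
    h = size // 2
    out = []
    for oy in (0, h):
        for ox in (0, h):
            out += quadtree_me(cur, ref, y + oy, x + ox, h, thresh, min_size)
    return out
```

Smooth regions are covered by a few large blocks, while detailed or fast-moving regions are refined into smaller ones, which is what lowers the entropy of the motion-compensated residual.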
Two components for image compression which provide almost all the computational power required by today's efficient codecs for full-motion image compression are presented. The first component computes the 8×8 discrete cosine transform and the zigzag reordering of the coefficients for a pixel rate up to 27 MHz. The second component computes full-search motion estimation for a pixel rate up to 18 MHz. System implementation for image compression is discussed.
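For reference, the two operations the first component performs, an 8×8 DCT followed by zigzag scanning of the coefficients, can be written compactly in software; this is only a functional sketch of what the hardware computes at pixel rate:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of an n×n block via the DCT matrix C."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2 / n)
    C[0] /= np.sqrt(2)          # DC row normalization
    return C @ block @ C.T

def zigzag_order(n=8):
    """Block indices in zigzag scan order: walk the anti-diagonals,
    alternating direction so low frequencies come first."""
    return sorted(((y, x) for y in range(n) for x in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))
```

A constant block transforms to a single DC coefficient, and the zigzag order front-loads the low-frequency coefficients so that run-length coding of the trailing zeros is effective.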
In the last decade, the source and the functional meaning of motor variability have attracted considerable attention in behavioral and brain sciences. This construct classically combined different levels of description, variable internal robustness or coherence, and multifaceted operational meanings. We provide here a comprehensive review of the literature with the primary aim of building a precise lexicon that goes beyond the generic and monolithic use of motor variability. In the pars destruens of the work, we model three domains of motor variability related to peculiar computational elements that influence fluctuations in motor outputs. Each domain is in turn characterized by multiple sub-domains. We begin with the domains of noise and differentiation. However, the main contribution of our model concerns the domain of adaptability, which refers to variation within the same exact motor representation. In particular, we use the terms learning and (social) fitting to specify the portions of motor variability that depend on our propensity to learn and on our largely constitutive propensity to be influenced by external factors. A particular focus is on motor variability in the context of the sub-domain named co-adaptability. Further groundbreaking challenges arise in the modeling of motor variability. Therefore, in a separate pars construens, we attempt to characterize these challenges, addressing both theoretical and experimental aspects as well as potential clinical implications for neurorehabilitation. All in all, our work suggests that motor variability is neither simply detrimental nor beneficial, and that studying its fluctuations can provide meaningful insights for future research.