ISBN (print): 9781467366748
This paper concerns the optimization of EEG signal parameters for epileptic seizure detection. In a previous study, a macroscopic model was used to model various waveforms of the EEG signal and to optimize its parameters by means of a genetic algorithm (GA). The GA-based method for EEG parameter estimation relies on an optimization procedure whose aim is to minimize an objective function. The minimized error function compares the desired waveform (the real EEG signal) with the waveform produced by the model in both the time and frequency domains. In the present study, we propose a time-scale representation for the objective function as an alternative to the time- and frequency-based objective function used in the earlier study. The proposed objective function takes into account the non-stationary nature of the EEG signal. The performance of the proposed wavelet-based objective function is compared to that of the spectral objective function.
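A minimal sketch of such a time-scale objective function, assuming the PyWavelets package and a hypothetical damped-oscillation stand-in for the macroscopic EEG model (not the model from the paper); a GA library would then minimise this function over the model parameters:

import numpy as np
import pywt

def simulate_eeg(params, fs=256.0, duration=2.0):
    # Hypothetical stand-in for the macroscopic EEG model: a damped oscillation
    # parameterised by amplitude, frequency (Hz) and decay rate.
    amp, freq, decay = params
    t = np.arange(0, duration, 1.0 / fs)
    return amp * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

def wavelet_objective(params, real_eeg, wavelet="db4", level=5):
    # Time-scale error: squared differences between the DWT coefficients of the
    # recorded EEG segment and the model output, summed over all subbands.
    model_eeg = simulate_eeg(params)
    err = 0.0
    for c_real, c_model in zip(pywt.wavedec(real_eeg, wavelet, level=level),
                               pywt.wavedec(model_eeg, wavelet, level=level)):
        err += np.sum((c_real - c_model) ** 2)
    return err

# Single evaluation; a GA would search the parameter space for the minimum.
real = simulate_eeg((1.0, 10.0, 0.5)) + 0.1 * np.random.randn(512)
print(wavelet_objective((0.8, 9.0, 0.4), real))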
The wavelet transform is a principal tool for modern image processing applications. A Double Density Dual-Tree Discrete Wavelet Transform is used and investigated for image denoising. Test images are considered for the analysis, and the performance is compared with the discrete wavelet transform (DWT) and the Double Density DWT. Peak Signal-to-Noise Ratio (PSNR) and Root Mean Square Error (RMSE) values are calculated for the denoised images under all three wavelet techniques, and the performance is evaluated. The proposed technique gives better performance than the other two wavelet techniques.
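The double-density dual-tree transform is not available in common Python packages, so the sketch below uses an ordinary 2-D DWT with soft thresholding as a stand-in denoiser and shows how the RMSE and PSNR figures of merit would be computed:

import numpy as np
import pywt

def dwt_denoise(img, wavelet="db2", level=2, thr=20.0):
    # Stand-in denoiser: ordinary 2-D DWT with soft thresholding of the
    # detail subbands (not the double-density dual-tree transform).
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

def rmse(ref, test):
    return np.sqrt(np.mean((ref.astype(float) - test.astype(float)) ** 2))

def psnr(ref, test, peak=255.0):
    return 20 * np.log10(peak / rmse(ref, test))

clean = np.tile(np.linspace(0, 255, 128), (128, 1))        # toy ramp image
noisy = clean + 15 * np.random.randn(*clean.shape)
den = dwt_denoise(noisy)[:128, :128]
print("RMSE:", rmse(clean, den), "PSNR:", psnr(clean, den))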
ISBN (print): 9781479948734
In this paper, a new dictionary learning algorithm is proposed. Similar to many dictionary learning algorithms, the proposed algorithm alternates between two stages. First, a sparse coding stage uses the current dictionary to obtain the sparse representation coefficients; herein, the orthogonal matching pursuit (OMP) algorithm is used for sparse coding. Second, a dictionary update stage employs the calculated coefficients to update the dictionary and is based on an iterative least squares method. The autocorrelation and the cross-correlation between the sparse coding coefficients and the training data are estimated recursively by applying a forgetting factor. A variable step size, which depends on the forgetting factor and the autocorrelation function, is derived. The simulation results indicate that dictionaries designed by the proposed method achieve improved representation SNR compared to those designed with existing state-of-the-art algorithms, with faster convergence. Preliminary results for single-image super-resolution are promising.
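A compact sketch of this alternating structure under stated assumptions (the paper's exact recursions and variable step size are not reproduced): OMP via scikit-learn for the sparse coding stage, and a dictionary update computed from correlation matrices accumulated recursively with a forgetting factor:

import numpy as np
from sklearn.linear_model import orthogonal_mp

def train_dictionary(Y, n_atoms=64, sparsity=5, n_iter=20, lam=0.99, seed=0):
    # Y: (n_features, n_samples) training data, one signal per column.
    rng = np.random.default_rng(seed)
    n_features = Y.shape[0]
    D = rng.standard_normal((n_features, n_atoms))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    A = np.eye(n_atoms) * 1e-3                     # coefficient autocorrelation
    B = np.zeros((n_features, n_atoms))            # data/coefficient cross-correlation
    for _ in range(n_iter):
        # Sparse coding stage: OMP with the current dictionary.
        W = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)
        # Dictionary update stage: recursive accumulation with forgetting
        # factor lam, then a least-squares solve D = B A^{-1}.
        A = lam * A + W @ W.T
        B = lam * B + Y @ W.T
        D = B @ np.linalg.inv(A)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D

The representation ability of the learned dictionary can then be measured as the SNR of Y against D @ W on held-out data.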
ISBN (print): 9781467393393
Skin color segmentation is important for several image processing and computer vision applications. However, the accuracy of a color-based skin detection method is affected by the presence of skin-like colors in the background regions. Probabilistic approaches are therefore more suitable for skin detection than hard decision-based approaches. A Skin Probability Map (SPM) of an image provides the probability of each pixel belonging to a skin region. It is observed that the accuracy of an SPM-based skin detection method also depends on the color space chosen for the SPM. In this paper, a novel Weighted Skin Probability Map (WSPM) is proposed for skin color segmentation. The WSPM is represented as a weighted sum of the SPMs obtained from different color spaces. Experimental results based on standard databases show that replacing single color-space-based SPMs with the proposed WSPM can reduce the overall detection errors significantly.
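A sketch of the weighted combination, assuming OpenCV for the color-space conversions; the lookup tables stand in for histogram-trained P(skin | color) models and the weights are placeholders, since the paper's trained SPMs and weight selection are not reproduced here:

import numpy as np
import cv2

def spm_from_lut(img_u8, lut):
    # lut: 32x32x32 table of P(skin | quantised colour) for one colour space.
    q = (img_u8 // 8).astype(int)                  # quantise 256 -> 32 bins/channel
    return lut[q[..., 0], q[..., 1], q[..., 2]]

def weighted_spm(img_bgr, luts, weights):
    # Weighted sum of the per-colour-space skin probability maps.
    spaces = {
        "bgr": img_bgr,
        "ycrcb": cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb),
        "hsv": cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV),
    }
    wspm = np.zeros(img_bgr.shape[:2])
    for name, w in weights.items():
        wspm += w * spm_from_lut(spaces[name], luts[name])
    return wspm

# Toy example: random LUTs stand in for histogram-trained skin models.
rng = np.random.default_rng(0)
luts = {k: rng.random((32, 32, 32)) for k in ("bgr", "ycrcb", "hsv")}
weights = {"bgr": 0.2, "ycrcb": 0.5, "hsv": 0.3}
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
mask = weighted_spm(img, luts, weights) > 0.5      # threshold to a skin mask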
ISBN (digital): 9781728159652
ISBN (print): 9781728159669
Volumetric video (VV) pipelines have reached a high level of maturity, creating interest in using such content in interactive visualisation scenarios. VV allows real-world content to be captured and represented as 3D models, which can be viewed from any chosen viewpoint and direction. Thus, VV is ideal for use in augmented reality (AR) or virtual reality (VR) applications. Both textured polygonal meshes and point clouds are popular methods to represent VV. Even though the signal and image processing community slightly favours the point cloud due to its simpler data structure and faster acquisition, textured polygonal meshes might have other benefits such as better visual quality and easier integration with computer graphics pipelines. To better understand the difference between them, in this study we compare these two representation formats in a VV compression scenario utilising state-of-the-art compression techniques. For this purpose, we build a database and collect user opinion scores for subjective quality assessment of the compressed VV. The results show that meshes provide the best quality at high bitrates, while point clouds perform better in low-bitrate cases. The created VV quality database will be made available online to support further scientific studies on VV quality assessment.
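For subjective scores collected in such a study, the usual processing step is a mean opinion score (MOS) with a 95% confidence interval per compressed stimulus; a small sketch of this standard practice (not the paper's exact protocol or scale):

import numpy as np
from scipy import stats

def mos_with_ci(scores, confidence=0.95):
    # scores: opinion scores (e.g. 1-5) from all subjects for one processed VV sequence.
    scores = np.asarray(scores, dtype=float)
    n = scores.size
    mos = scores.mean()
    half = stats.t.ppf(0.5 + confidence / 2, n - 1) * scores.std(ddof=1) / np.sqrt(n)
    return mos, (mos - half, mos + half)

print(mos_with_ci([4, 5, 3, 4, 4, 5, 3, 4]))       # MOS and its confidence interval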
The article discusses the application of wavelet analysis to time-frequency time-delay estimation. The proposed algorithm is a wavelet-transform-based cross-correlation time delay estimation that applies the discrete-time wavelet transform to filter the input signal prior to computation of the cross-correlation function. The distinguishing feature of the algorithm is that it uses a variation of the continuous wavelet transform to process the discrete signals, instead of the dyadic wavelet transform that is normally applied in this case. Another feature is that the convolution theorem is used to compute the coefficients of the wavelet transform. This makes it possible to omit redundant discrete Fourier transforms and significantly reduce the computational complexity. The principal applicability of the proposed method is shown in the course of computational experiments with artificial and real-world signals. The method demonstrated the expected selectivity for signals localized in different frequency bands. The application of the method to the practical case of pipeline leak detection was also successful. However, the study concluded that this method provides no specific advantages in comparison with the conventional one. In the future, alternative applications in biological signal processing will be considered.
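A sketch of the general idea under assumptions (the article's specific non-dyadic CWT variant is not reproduced): band-filter both channels with a single-scale complex Morlet CWT from PyWavelets, then estimate the delay from the peak of an FFT-based cross-correlation, which is where the convolution theorem enters:

import numpy as np
import pywt

def wavelet_band(x, scale, wavelet="cmor1.5-1.0", fs=1.0):
    # A single-scale complex Morlet CWT acts as a band-pass filter around the
    # scale's centre frequency; the coefficients are complex-valued.
    coef, _ = pywt.cwt(x, [scale], wavelet, sampling_period=1.0 / fs)
    return coef[0]

def estimate_delay(x, y, scale, fs=1.0):
    # FFT-based cross-correlation of the band-limited signals (convolution
    # theorem); the magnitude peak gives the delay of x relative to y.
    a, b = wavelet_band(x, scale, fs=fs), wavelet_band(y, scale, fs=fs)
    n = len(a) + len(b) - 1
    xc = np.fft.ifft(np.fft.fft(a, n) * np.conj(np.fft.fft(b, n)))
    lags = np.arange(n)
    lags[lags >= len(a)] -= n                      # map indices to signed lags
    return lags[np.argmax(np.abs(xc))] / fs

fs, delay_samples = 1000.0, 25
s = np.random.randn(2000)                          # broadband test signal
x = s + 0.2 * np.random.randn(s.size)
y = np.roll(s, -delay_samples) + 0.2 * np.random.randn(s.size)  # y leads x
print(estimate_delay(x, y, scale=16, fs=fs))       # expected close to 0.025 s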
ISBN (print): 9781467324199
Biomedical waveforms, such as the electrocardiogram (ECG), carry a lot of important clinical information and are usually recorded over long periods of time in telemedicine applications. Due to the huge amount of data, compressing the ECG is vital. This paper evaluates the compression performance and characteristics of zerotree coding compression schemes for ECG applications. Two methods, namely the Embedded Zerotree Wavelet (EZW) and the Set Partitioning In Hierarchical Trees (SPIHT), are proposed. The EZW is one of the first algorithms to show the full power of wavelet-based image compression. The SPIHT algorithm is a highly refined version of the EZW algorithm. EZW and SPIHT have achieved notable success in still image coding. We modified these algorithms to apply them to the compression of ECG data. Both methodologies were evaluated using the percent root-mean-square difference (PRD) and the compression ratio (CR). Theoretical results are contrasted with a simulation study using actual ECG signals from the MIT-BIH arrhythmia database. The simulation results show that both methods achieve a very significant improvement in compression ratio and error measurement for ECG, compared with some other compression methods.
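The two evaluation measures are straightforward to compute; a small sketch using one common definition of PRD (the zerotree coders themselves are not reproduced, and the example figures are illustrative only):

import numpy as np

def prd(original, reconstructed):
    # Percent root-mean-square difference between original and reconstructed ECG.
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def compression_ratio(original_bits, compressed_bits):
    # CR = size of the raw record divided by the size of the coded bitstream.
    return original_bits / compressed_bits

# Example: an 11-bit, 3600-sample ECG segment coded into 4950 bits.
print(compression_ratio(11 * 3600, 4950), "to 1")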
ISBN (print): 9781467324199
This work proposes a fault detection procedure for the Self-Excited Induction Generator (SEIG) in standalone power generation applications. The proposed fault detection methodology is based on digital signal processing techniques applied to the available SEIG electrical signals. In order to obtain enough data to perform the analysis, the dynamic model of the SEIG was simulated on the Matlab/Simulink platform. The simulated models describe the machine behavior under healthy and several faulty conditions. The main characteristics of the signals were obtained by means of the Continuous Wavelet Transform and the Global Wavelet Spectrum applied to the stator voltages of the generator. A particular pattern for each operating condition was observed and quantified. Consequently, the fault detection technique is characterized by a low computational cost for practical implementation.
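A sketch of the two transforms on a toy stator voltage, assuming PyWavelets (the paper's healthy/faulty signatures and SEIG model are not reproduced); the global wavelet spectrum is taken here as the time average of the squared CWT magnitude at each scale:

import numpy as np
import pywt

def global_wavelet_spectrum(v, fs, scales=None, wavelet="cmor1.5-1.0"):
    # CWT of one stator-voltage signal, then time-averaging of the squared
    # magnitude over each scale to obtain the global wavelet spectrum.
    if scales is None:
        scales = np.arange(2, 128)
    coef, freqs = pywt.cwt(v, scales, wavelet, sampling_period=1.0 / fs)
    gws = np.mean(np.abs(coef) ** 2, axis=1)
    return freqs, gws

# Toy stator voltage: fundamental plus a small higher-frequency component.
fs = 5000.0
t = np.arange(0, 1, 1 / fs)
v = np.sin(2 * np.pi * 60 * t) + 0.05 * np.sin(2 * np.pi * 180 * t)
freqs, gws = global_wavelet_spectrum(v, fs)
print(freqs[np.argmax(gws)])                       # dominant frequency near 60 Hz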
An efficient wavelet-based algorithm to reconstruct non-square/non-cubic signals from gradient data is proposed. This algorithm is motivated by applications such as image or video processing in the gradient domain. In some earlier approaches, the non-square/non-cubic gradients were extended to enable a square/cubic Haar wavelet decomposition, and the coarsest resolution subband was derived from the mean value of the signal. In this paper, a non-square/non-cubic wavelet decomposition is obtained directly, without extending the gradient data. The challenge lies in finding the coarsest resolution subband of the wavelet decomposition, and an algorithm to compute it is proposed. The performance of the algorithm is evaluated in terms of accuracy and computation time, and is shown to outperform the considered earlier approaches in a number of cases. Further, a closer look at the role of the coarsest resolution subband coefficients reveals a trade-off between reconstruction error and visual quality, which has interesting implications in image and video processing applications.
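The role of the missing coarse term can already be seen in one dimension: gradient (finite-difference) data determine a signal only up to an additive constant, so some coarse information, here simply the mean as in the earlier approaches mentioned above, must be supplied separately. A minimal sketch of that observation, not the paper's non-square wavelet algorithm:

import numpy as np

def reconstruct_from_gradient(grad, mean_value):
    # grad[i] = x[i+1] - x[i]; a cumulative sum recovers x up to an additive
    # constant, which is fixed by the separately supplied coarse term (the mean).
    x = np.concatenate(([0.0], np.cumsum(grad)))
    return x - x.mean() + mean_value

x_true = np.array([3.0, 5.0, 4.0, 7.0, 6.5])
grad = np.diff(x_true)                             # forward differences
x_rec = reconstruct_from_gradient(grad, x_true.mean())
print(np.allclose(x_rec, x_true))                  # True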
ISBN (print): 9781479975921
Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) increases programmer productivity when implementing hybrid workflows that scale to multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. We present an implementation of hybrid microscopy image stitching using HTGS that reduces code size by ≈ 25% and shows favorable performance compared to a similar hybrid workflow implementation without HTGS. The HTGS-based implementation reuses the computational functions of the hybrid workflow implementation.
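HTGS itself is a C++ library, so its API is not shown here; to illustrate the underlying idea of overlapping disk I/O with computation through queues between tasks, the following generic Python sketch uses threads and a bounded queue standing in for task-graph edges (the tile names and workload are placeholders, not the stitching workflow itself):

import queue
import threading

def read_tiles(out_q, n_tiles):
    # I/O task: in a real stitching workflow this would read image tiles from disk.
    for i in range(n_tiles):
        out_q.put(f"tile-{i}")                     # placeholder for pixel data
    out_q.put(None)                                # end-of-stream marker

def process_tiles(in_q, results):
    # Compute task: runs concurrently with the reader, so I/O and computation overlap.
    while (tile := in_q.get()) is not None:
        results.append(tile.upper())               # placeholder for FFT/stitching work

q, results = queue.Queue(maxsize=8), []            # bounded queue gives back-pressure
reader = threading.Thread(target=read_tiles, args=(q, 100))
worker = threading.Thread(target=process_tiles, args=(q, results))
reader.start(); worker.start()
reader.join(); worker.join()
print(len(results), "tiles processed")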