This paper presents a comparison study of Gaussian Mixture Models for fingerprint image duplication and analysis. It also presents a new probabilistic parametric Gaussian Mixture Model (GMM). The system is built aroun...
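The truncated abstract does not specify the feature representation, so the following is only an illustrative sketch of fitting a GMM to block-wise fingerprint descriptors with scikit-learn; the function name, component count, and feature layout are assumptions, not the paper's method.

```python
# Illustrative only: fit a Gaussian Mixture Model to fingerprint descriptors.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_fingerprint_gmm(features, n_components=4):
    """features: array of shape (n_samples, n_dims), e.g. block-wise descriptors."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full",
                          random_state=0)
    gmm.fit(features)
    return gmm

# gmm.score_samples(new_features) then yields per-sample log-likelihoods that
# could be thresholded to flag duplicated or anomalous regions.
```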
Optical coherence tomography (OCT) is a non-invasive optical imaging modality capable of high-resolution imaging of internal tissue structures. It is widely believed that the high axial resolution in OCT systems requires a wide-bandwidth light source. As a result, the potential advantages of narrow-bandwidth sources (in terms of cost and/or imaging speed) are often understood to come at the cost of a significant reduction in imaging resolution. In this paper, we argue that this trade-off between resolution and speed is a shortcoming imposed by the state-of-the-art A-scan reconstruction algorithm, the Fast Fourier Transform (FFT), and can be circumvented through the use of alternative processing methods. In particular, we investigate the shortcomings of the FFT as well as previously proposed alternatives and demonstrate the first application of an iterative regularized re-weighted l2 norm method to improve the axial resolution of fast-scan-rate OCT systems under narrow-bandwidth imaging conditions. We validate our claims via experimental results generated from a home-built OCT system used to image a layered phantom and in vivo data. Our results rely on new, sophisticated signal processing algorithms to generate higher-precision (i.e., higher-resolution) OCT images at correspondingly fast scan rates. In other words, our work demonstrates the feasibility of medical imaging systems that are simultaneously more reliable and more comfortable for patients by reducing the overall scan time without sacrificing image quality. (C) 2016 Optical Society of America
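As a rough sketch of the kind of iterative regularized re-weighted l2 reconstruction referred to above (not the authors' implementation; the forward model A, the weighting rule, and all parameter values are assumptions), a single A-scan could be recovered as follows.

```python
# Minimal sketch: re-weighted l2 reconstruction of an A-scan depth profile x
# from a sampled spectral interferogram y, assuming a known forward model A.
import numpy as np

def irls_ascan(A, y, lam=1e-2, eps=1e-6, n_iter=30):
    """Approximate a sparsity-promoting penalty with re-weighted l2 steps."""
    n = A.shape[1]
    x = np.zeros(n, dtype=complex)
    w = np.ones(n)                      # per-sample regularization weights
    AtA = A.conj().T @ A
    Aty = A.conj().T @ y
    for _ in range(n_iter):
        # Solve the weighted Tikhonov problem (A^H A + lam * diag(w)) x = A^H y.
        x = np.linalg.solve(AtA + lam * np.diag(w), Aty)
        # Re-weight: small-magnitude samples are penalized more in the next
        # round, which sharpens reflector peaks along the depth axis.
        w = 1.0 / (np.abs(x) + eps)
    return x

# Usage (assumed setup): with a partial DFT matrix F as the forward model,
# x_hat = irls_ascan(F, spectrum) would replace the single FFT-based A-scan.
```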
ISBN: 9781509018970 (print)
Medical image enhancement is an effective tool for improving the visual quality of digital medical images. However, conventional linear image enhancement methods often suffer from problems such as over-enhancement and noise sensitivity. In this paper, we study nonlinear arithmetic frameworks designed to solve the common problems of linear enhancement methods, namely LIP, PLIP and GLIP. We also introduce nonlinear unsharp masking algorithms based on the logarithmic image processing models for medical image enhancement. Experiments are conducted to evaluate and compare the performance of the methods.
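For illustration only, a minimal LIP-style unsharp masking sketch is given below; it assumes 8-bit input, the constant M = 256, a Gaussian low-pass, and a single gain parameter, none of which are taken from the paper.

```python
# Unsharp masking carried out with Logarithmic Image Processing (LIP)
# arithmetic instead of ordinary addition/subtraction (illustrative sketch).
import numpy as np
from scipy.ndimage import gaussian_filter

M = 256.0

def lip_add(a, b):            # a (+) b = a + b - a*b/M
    return a + b - a * b / M

def lip_sub(a, b):            # a (-) b = M*(a - b)/(M - b)
    return M * (a - b) / (M - b + 1e-12)

def lip_scale(lam, a):        # lam (x) a = M - M*(1 - a/M)**lam
    return M - M * (1.0 - a / M) ** lam

def lip_unsharp(img, sigma=2.0, lam=1.5):
    f = img.astype(np.float64)
    blurred = gaussian_filter(f, sigma)
    detail = lip_sub(f, blurred)              # high-frequency residue, LIP domain
    out = lip_add(f, lip_scale(lam, detail))  # LIP-weighted detail boost
    return np.clip(out, 0, 255).astype(np.uint8)
```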
This study presents a fast, efficient error concealment method for recovering shape information. The proposed technique comprises block classification, edge direction interpolation and filtering interpolation. Missing blocks are classified into four categories: transparent, opaque, edge and isolated blocks. Most of the computation is spent on edge blocks and isolated blocks to maximise the cost-performance tradeoff. For the recovery of edge blocks, the edge slope is computed by referring to the nearest available block, from which the missing shape is interpolated parallel to the edge. Isolated blocks are dealt with using a cascade filter to approximate the actual shape. Experimental results show that the proposed method provides better cost performance in the restoration of shapes than comparable algorithms, both in numerical parameters and in the resulting shapes. The processing speed is approximately two to three times faster than that of previous methods, and the low computational load makes the proposed technique applicable to real-time MPEG-4 systems.
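A hedged sketch of the block-classification idea follows; the decision rules and thresholds are assumptions for illustration, not the authors' exact criteria.

```python
# Label a lost binary-shape block from the shape pixels on the border of its
# available neighbours (illustrative rules only).
import numpy as np

def classify_lost_block(border_pixels):
    """border_pixels: 1-D array of 0/1 shape values surrounding the lost block."""
    ratio = border_pixels.mean()
    transitions = np.count_nonzero(np.diff(border_pixels))
    if ratio == 0.0:
        return "transparent"        # fill lost block with 0s
    if ratio == 1.0:
        return "opaque"             # fill lost block with 1s
    if transitions <= 2:
        return "edge"               # recover by directional interpolation
    return "isolated"               # recover with a cascade (e.g. median) filter
```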
ISBN: 9780791849880 (print)
A holographic data storage system (HDSS) is an important class of storage device. Many researchers have studied image processing algorithms for reducing image noise in HDSS. In this work, we propose an intelligent virtual mask whose parameter values are generated using a DNA coding method, which can decrease the IPI noise in an HDSS. In this paper, the intensity distribution of the laser beam in our HDSS is controlled by the virtual mask together with an intelligent algorithm. The virtual mask values are changed arbitrarily in real time with the suggested DNA coding method in the HDSS.
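The abstract does not detail the DNA coding scheme; purely as an assumption-laden illustration, one way to encode mask parameters as a DNA-style string and adapt them against a measured noise score is sketched below (all names, levels, and rates are invented for the example).

```python
# Evolve a "DNA"-encoded virtual mask toward lower measured noise (sketch).
import random

GENES = "ACGT"
LEVELS = {"A": 0.25, "C": 0.5, "G": 0.75, "T": 1.0}   # assumed attenuation levels

def decode(dna):
    """Map a DNA string to per-cell transmittance values of the virtual mask."""
    return [LEVELS[g] for g in dna]

def mutate(dna, rate=0.05):
    return "".join(random.choice(GENES) if random.random() < rate else g for g in dna)

def evolve_mask(noise_score, length=64, generations=200):
    """noise_score(mask) -> float, e.g. measured IPI noise; lower is better."""
    best = "".join(random.choice(GENES) for _ in range(length))
    best_score = noise_score(decode(best))
    for _ in range(generations):
        cand = mutate(best)
        s = noise_score(decode(cand))
        if s < best_score:
            best, best_score = cand, s
    return decode(best)
```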
Computational photography systems are becoming increasingly diverse, while computational resources, for example on mobile platforms, are rapidly increasing. As diverse as these camera systems may be, slightly different variants of the underlying image processing tasks, such as demosaicking, deconvolution, denoising, inpainting, image fusion, and alignment, are shared between all of these systems. Formal optimization methods have recently been demonstrated to achieve state-of-the-art quality for many of these applications. Unfortunately, different combinations of natural image priors and optimization algorithms may be optimal for different problems, and implementing and testing each combination is currently a time-consuming and error-prone process. ProxImaL is a domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety of linear and nonlinear image formation models and cost functions, advanced image priors, and noise models. The compiler intelligently chooses the best way to translate a problem formulation and choice of optimization algorithm into an efficient solver implementation. In applications to the image processing pipeline, deconvolution in the presence of Poisson-distributed shot noise, and burst denoising, we show that a few lines of ProxImaL code can generate highly efficient solvers that achieve state-of-the-art results. We also show applications to the nonlinear and nonconvex problem of phase retrieval.
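ProxImaL itself is not reproduced here; the NumPy sketch below only illustrates the proximal-operator building block such systems rest on, using a generic proximal-gradient (ISTA-style) solver for a least-squares data term with an l1 prior. All names and parameters are illustrative and not part of the ProxImaL API.

```python
# Generic proximal-gradient solver: minimize 0.5*||A x - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(A, b, lam=0.1, step=None, n_iter=200):
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the data term
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                 # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)  # prox of the l1 prior
    return x
```

A DSL like the one described above composes many such data terms, priors, and proximal operators and then chooses the solver; the sketch shows only one fixed combination.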
It is accepted that image compression and transmission are essential in an image processing system and that errors occurring over wireless transmission for the intelligent Internet of Things (IoT) may degrade the quality of the received image. An image quality assessment metric (QAM) is of fundamental significance to various image processing systems, and the goal of studying the QAM is to design an algorithm that can automatically evaluate the quality of the received image at the terminal display equipment in a perceptible way under ubiquitous network circumstances. In this paper, focusing on image compression and transmission, a joint full-reference (FR) QAM (JQAM) for evaluating the quality of a 3-D image is proposed, based on state-of-the-art physiological and psychological properties of the human visual system (HVS), for image transmission over wireless networks. The major technical contribution of this paper is that binocular perception (depth perception) and local image properties are taken into consideration. First, the luminance masking property and local image content information are calculated to establish an image QAM (IQAM) that improves upon the abilities of current quality evaluation algorithms. Meanwhile, the information abstracted from the depth map is also utilized as side information (SI) to evaluate the depth map in the quality assessment. Finally, the IQAM and SI are combined to construct the proposed JQAM. Experimental results show that the proposed metric achieves better correlation with subjective quality scores than the relevant existing IQAMs, and it can be used to evaluate the quality of the 3-D image signal of intelligent equipment for multimedia communication systems in the IoT.
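Only the final fusion step is sketched here, with assumed pooling and weights; the paper's calibrated combination of IQAM and depth SI is not given in the abstract.

```python
# Fuse per-view 2-D quality scores with a depth-map side-information score
# into a single joint 3-D score (weights are placeholders).
def jqam(iqam_left, iqam_right, depth_si, w_img=0.8, w_depth=0.2):
    """iqam_*: 2-D quality scores of the two views; depth_si: depth-map score."""
    image_term = 0.5 * (iqam_left + iqam_right)      # binocular average of the views
    return w_img * image_term + w_depth * depth_si   # weighted joint score
```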
Image filtering, regardless of whether it is denoising (low-pass filtering) or edge detection (high-pass filtering), can be considered a machine learning problem. In fact, filtering is a process of approximation of ...
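The truncated abstract does not give the authors' model; as a hedged illustration of the "filtering as learning" view, a small filter kernel can be learned by least-squares regression from noisy patches to clean centre pixels.

```python
# Learn a k x k denoising kernel from aligned noisy/clean image pairs (sketch).
import numpy as np

def learn_kernel(noisy, clean, k=3):
    r = k // 2
    X, y = [], []
    for i in range(r, noisy.shape[0] - r):
        for j in range(r, noisy.shape[1] - r):
            X.append(noisy[i - r:i + r + 1, j - r:j + r + 1].ravel())  # patch as features
            y.append(clean[i, j])                                      # target pixel
    w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return w.reshape(k, k)   # learned approximation of the filter
```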
Underwater images suffer from blurring effects, low contrast, and grayed-out colors due to absorption and scattering effects under the water. Many image enhancement algorithms for improving the visual quality of underwater images have been developed. Unfortunately, no well-accepted objective measure exists that can evaluate the quality of underwater images in a way similar to human perception. Predominant underwater image processing algorithms use either a subjective evaluation, which is time consuming and biased, or a generic image quality measure, which fails to consider the properties of underwater images. To address this problem, a new non-reference underwater image quality measure (UIQM) is presented in this paper. The UIQM comprises three underwater image attribute measures: the underwater image colorfulness measure (UICM), the underwater image sharpness measure (UISM), and the underwater image contrast measure (UIConM). Each attribute is selected to evaluate one aspect of underwater image degradation, and each presented attribute measure is inspired by the properties of human visual systems (HVSs). The experimental results demonstrate that the measures effectively evaluate underwater image quality in accordance with human perception. These measures are also applied to the AirAsia 8501 wreckage images to show their importance in practical applications.
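Structurally, the UIQM is a weighted combination of the three attribute measures; the sketch below shows only that combination, with the attribute measures left as inputs and the weights exposed as parameters (the numeric defaults are the commonly cited UIQM coefficients, treated here as an assumption).

```python
# Combine colorfulness, sharpness, and contrast measures into one UIQM score.
def uiqm(uicm, uism, uiconm, c1=0.0282, c2=0.2953, c3=3.5753):
    """uicm, uism, uiconm: precomputed attribute scores for one image."""
    return c1 * uicm + c2 * uism + c3 * uiconm
```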
With the easy availability of image processing and image editing tools, cases of forgery have risen in the last few years. Nowadays it is very difficult for a viewer or a judicial authority to verify the authenticity of a digital image. Cloning, or the copy-move technique, is widely used as a forgery to conceal a desired object. To hide various types of forgery, such as splicing (compositing) and cloning (copy-move), various post-forgery techniques such as blurring, intensity variation and noise addition are applied. To overcome this difficulty, a forgery detection tool must comprise several detection algorithms that work collaboratively to detect all possible alterations and provide a single decision. This paper presents a universal tool comprising PCA, DWT, DWT-DCT, DWT-DCT(SVD), DFT, DCT and DWT-DCT (QCD) techniques used for reduction, feature vector calculation and thereby detecting forgery. Due to the varied, erroneous and heterogeneous output of the different reduction methods, it is very difficult to recognize the pre-processing performed using the available classification systems. A fuzzy inference system has been developed to authenticate the image, find the extent of forgery, the parameters of the forged area, the robustness and accuracy of all seven detection tools, and the type of processing done on the tampered image. Experimental results have shown that our classification system achieves an accuracy of 94.12% with respect to transformations such as blurring, intensity variation, Gaussian noise addition, JPEG compression and normal forgery (other random transformations). Two different membership functions are used in this fuzzy system, and different if-then rules are defined for the classification of the different types of pre-processing performed on the image.
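As a generic illustration of one of the listed block-transform approaches (DCT-based copy-move detection via lexicographic sorting of block features), the sketch below shows candidate duplicate-region matching; the feature length, rounding, and shift threshold are assumptions, and this is not the paper's combined multi-detector tool or its fuzzy classifier.

```python
# Copy-move candidate detection with block DCT features (illustrative sketch).
import numpy as np
from scipy.fft import dctn

def copy_move_candidates(gray, block=8, keep=9, min_shift=16):
    """gray: 2-D float array; returns pairs of block coordinates with equal features."""
    h, w = gray.shape
    feats, coords = [], []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            c = dctn(gray[y:y + block, x:x + block], norm="ortho")
            feats.append(np.round(c.flatten()[:keep], 1))   # first few DCT coefficients
            coords.append((y, x))
    order = np.lexsort(np.array(feats).T[::-1])              # sort feature rows
    matches = []
    for i, j in zip(order[:-1], order[1:]):
        if np.array_equal(feats[i], feats[j]):
            dy = coords[i][0] - coords[j][0]
            dx = coords[i][1] - coords[j][1]
            if dy * dy + dx * dx >= min_shift * min_shift:   # ignore near-overlapping blocks
                matches.append((coords[i], coords[j]))
    return matches
```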