ISBN:
(Print) 9783642022555
This paper addresses the problem of defining a scale measure for digital images, that is, the problem of assigning meaningful scale information to each pixel. We propose a method relying on the set of level lines of an image, the so-called topographic map. We make use of the hierarchical structure of level lines to associate a level line with each pixel, enabling the computation of local scales. This computation is made under the assumption that blur is constant over the image, and is therefore adapted to the case of satellite images. We then investigate the link between the proposed definition of local scale and recent methods relying on total variation diffusion. Eventually, we perform various experiments illustrating the spatial accuracy of the proposed approach.
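The idea of a pixel-wise scale from level sets can be illustrated with a deliberately crude proxy: take the area of the connected component of the upper level set {img >= img[i, j]} that contains the pixel. The paper instead selects a level line from the full topographic map (the tree of level lines), so this flood-fill sketch is an assumption-laden simplification, not the authors' construction.

```python
import numpy as np
from collections import deque

def local_scale(img, i, j):
    """Crude proxy for a pixel's local scale: area (pixel count) of the
    4-connected component of {img >= img[i, j]} containing (i, j)."""
    H, W = img.shape
    t = img[i, j]
    seen = np.zeros((H, W), dtype=bool)
    q = deque([(i, j)])
    seen[i, j] = True
    area = 0
    while q:
        r, c = q.popleft()
        area += 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W and not seen[rr, cc] and img[rr, cc] >= t:
                seen[rr, cc] = True
                q.append((rr, cc))
    return area

# A bright 4x4 square on a dark background: a pixel inside the square gets
# a small scale (the square), a background pixel gets the whole image.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
s_in = local_scale(img, 3, 3)   # area of the square component
s_bg = local_scale(img, 0, 0)   # whole image at the background level
```

A hierarchical version would compute this for every threshold at once via the tree of shapes, which is what makes the paper's per-pixel assignment efficient.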
We wish to recover an image corrupted by blur and Gaussian or impulse noise, in a variational framework. We use two data-fidelity terms depending on the noise, and several local and nonlocal regularizers. Inspired by Buades-Coll-Morel, Gilboa-Osher, and other nonlocal models, we propose nonlocal versions of the Ambrosio-Tortorelli and Shah approximations to Mumford-Shah-like regularizing functionals, with applications to image deblurring in the presence of noise. In the case of the impulse noise model, we propose a necessary preprocessing step for the computation of the weight function. Experimental results show that these nonlocal MS regularizers yield better results than the corresponding local ones (proposed for deblurring by Bar et al.) in both noise models; moreover, they perform better than nonlocal total variation in the presence of impulse noise. A characterization of minimizers is also given.
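The weight function underlying all such nonlocal regularizers is the Buades-Coll-Morel patch-similarity weight w(i, j) = exp(-||P_i g - P_j g||² / h²). A minimal sketch follows; the patch radius and filtering parameter h are illustrative choices, and g is assumed to already be the preprocessed image (for impulse noise the paper computes weights on a preprocessed version, since raw impulse-corrupted patches are unreliable).

```python
import numpy as np

def nl_weights(g, i, patch=1, h=10.0):
    """Nonlocal weights between pixel i and every pixel j, from patch
    similarity: w(i, j) = exp(-||P_i g - P_j g||^2 / h^2)."""
    H, W = g.shape
    gp = np.pad(g, patch, mode='reflect')

    def P(r, c):  # (2*patch+1)^2 patch centred at (r, c) in the padded image
        return gp[r:r + 2 * patch + 1, c:c + 2 * patch + 1]

    pi = P(*i)
    w = np.empty((H, W))
    for r in range(H):
        for c in range(W):
            w[r, c] = np.exp(-np.sum((pi - P(r, c)) ** 2) / h ** 2)
    return w

g = np.arange(36, dtype=float).reshape(6, 6)
w = nl_weights(g, (2, 2))
```

The weight of a pixel with itself is exactly 1, and all weights lie in (0, 1]; in practice the weights are truncated to a search window for speed.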
Adding external knowledge improves the results for ill-posed problems. In this paper we present a new multi-level optimization framework for image registration with landmark constraints on the transformation. Previous approaches are based on a fixed discretization and do not allow for continuous landmark positions that are not on grid points. Our novel approach overcomes these problems, so that we can apply multi-level methods, which have proven crucial for avoiding local minima in the course of optimization. Furthermore, in our numerical method we are able to use constraint elimination, reducing the landmark-constrained problem to an unconstrained optimization and leading to an efficient algorithm.
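Constraint elimination for linear equality constraints C y = b works by parameterizing every feasible y as y = y_p + Z q, with C y_p = b and the columns of Z spanning the null space of C; optimizing over q is then unconstrained. A toy 3-variable sketch (the matrices here are made up for illustration; the paper applies this to the landmark constraints):

```python
import numpy as np

# Equality constraints C y = b (hypothetical 2x3 system, full row rank).
C = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

y_p, *_ = np.linalg.lstsq(C, b, rcond=None)  # particular solution, C y_p = b
_, _, Vt = np.linalg.svd(C)
Z = Vt[2:].T                                 # null-space basis (rank(C) = 2)

# Any q now yields a feasible y; no constraint needs to be enforced
# during the optimization over q.
q = np.array([3.7])
y = y_p + Z @ q
```

Because the feasible set is an affine subspace, standard unconstrained multi-level optimizers can be applied to the reduced variable q directly.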
Segmentation of images is often posed as a variational problem. As such, it is solved by formulating an energy functional depending on a contour and other image-derived terms. The solution of the segmentation problem is the contour which extremizes this functional. The standard way of solving this optimization problem is by gradient descent search in the solution space, which typically suffers from many unwanted local optima and poor convergence. Classically, these problems have been circumvented by modifying the energy functional. In contrast, the focus of this paper is on alternative methods for optimization. Inspired by ideas from the machine learning community, we propose segmentation based on gradient descent with momentum. Our results show that typical models hampered by local-optimum solutions can be further improved by this approach. We illustrate the performance improvements using the level set framework.
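The heavy-ball update the abstract refers to reads v ← βv − η∇E(x), x ← x + v. A minimal sketch on a toy quadratic energy standing in for a segmentation functional (the step size, momentum coefficient, and energy are illustrative choices, not values from the paper):

```python
import numpy as np

def gd_momentum(grad, x0, lr=0.05, beta=0.9, iters=500):
    """Gradient descent with momentum (heavy-ball):
    v <- beta * v - lr * grad(x);  x <- x + v."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(iters):
        v = beta * v - lr * grad(x)
        x = x + v
    return x

# Ill-conditioned quadratic E(x) = 0.5 x^T A x, whose narrow valley mimics
# the slow, oscillatory convergence plain gradient descent shows on
# segmentation energies.
A = np.array([[10.0, 0.0],
              [0.0, 0.1]])
x_star = gd_momentum(lambda x: A @ x, x0=[1.0, 1.0])
```

The momentum term accumulates velocity along consistently descending directions and damps oscillations across the valley, which is also what lets the contour coast over shallow local optima.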
Classical ways to denoise images contaminated with multiplicative noise (e.g. speckle noise) are filtering, statistical (Bayesian) methods, variational methods, and methods that convert the multiplicative noise into additive noise (using a logarithmic function) in order to apply a shrinkage estimation to the log-image data and transform back the result using an exponential function. We propose a new method that involves several stages: we apply a reasonably under-optimal hard-thresholding to the curvelet transform of the log-image; the latter is restored using a specialized hybrid variational method combining an l1 data-fit to the thresholded coefficients and a Total Variation (TV) regularization in the image domain; the restored image is an exponential of the obtained minimizer, weighted so that the mean of the original image is preserved. The minimization stage is realized using a properly adapted fast Douglas-Rachford splitting. The existence of a minimizer of our specialized criterion and the convergence of the minimization scheme are proved. The obtained numerical results outperform the main alternative methods.
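The log/threshold/exponentiate pipeline can be sketched end to end. Two loud caveats: a 2D FFT stands in below for the curvelet transform (purely for self-containment), and the threshold rule is an ad hoc guess; the paper's hybrid l1+TV restoration stage between thresholding and exponentiation is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Piecewise-constant image with mean-one gamma-distributed (speckle-like)
# multiplicative noise.
u = np.full((64, 64), 100.0)
u[16:48, 16:48] = 200.0
f = u * rng.gamma(shape=10.0, scale=0.1, size=u.shape)

# Stage 1: the log transform turns multiplicative noise into additive noise.
g = np.log(f)

# Stage 2: hard-threshold in a transform domain (FFT standing in for the
# curvelet transform; threshold rule is an arbitrary illustrative choice).
G = np.fft.fft2(g)
T = 0.5 * np.abs(G).mean()
G[np.abs(G) < T] = 0.0
g_hat = np.real(np.fft.ifft2(G))

# Stage 3: exponentiate, then rescale so the mean of the observed image is
# preserved (the bias correction the abstract describes).
u_hat = np.exp(g_hat)
u_hat *= f.mean() / u_hat.mean()
```

The final rescaling matters because E[exp(X)] ≠ exp(E[X]): exponentiating the denoised log-image systematically biases the intensities, and matching the observed mean is a cheap first-order correction.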
Measurements in nanoscopic imaging suffer from blurring effects governed by different point spread functions (PSFs). Some apparatus even have PSFs that are locally dependent on phase shifts. Additionally, raw data are affected by Poisson noise resulting from laser sampling and "photon counts" in fluorescence microscopy. In these applications standard reconstruction methods (EM, filtered back projection) deliver unsatisfactory and noisy results. Starting from a statistical modeling in terms of MAP likelihood estimation, we combine the iterative EM algorithm with TV regularization techniques to make efficient use of a priori information. Typically, TV-based methods deliver reconstructed cartoon-like images suffering from contrast reduction. We propose an extension of EM-TV, based on Bregman iterations and inverse scale space methods, in order to obtain improved imaging results with simultaneous contrast enhancement. We illustrate our techniques on synthetic and experimental biological data.
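The EM iteration for Poisson deblurring is the Richardson-Lucy update u ← u · Kᵀ(f / (Ku)). The sketch below shows only this baseline step on a noise-free toy problem; the paper's contribution (the TV term in each update, plus Bregman/inverse-scale-space iterations) is deliberately not reproduced here.

```python
import numpy as np

def em_richardson_lucy(f, psf_fft, iters=100, eps=1e-12):
    """Plain EM (Richardson-Lucy) iteration for Poisson deblurring:
    u <- u * K^T(f / (K u)), convolution K applied circularly via FFT."""
    conv = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * psf_fft))
    corr = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(psf_fft)))
    u = np.full_like(f, f.mean())          # flat positive initial guess
    for _ in range(iters):
        u = u * corr(f / (conv(u) + eps))
    return u

# Demo: a box image blurred by a normalized 3x3 box PSF (noise-free, to
# keep the sketch deterministic).
n = 32
x = np.zeros((n, n))
x[10:22, 10:22] = 50.0
psf = np.zeros((n, n))
for di in (-1, 0, 1):
    for dj in (-1, 0, 1):
        psf[di % n, dj % n] = 1.0 / 9.0    # kernel centred at index (0, 0)
psf_fft = np.fft.fft2(psf)
f = np.real(np.fft.ifft2(np.fft.fft2(x) * psf_fft))
u = em_richardson_lucy(f, psf_fft)
```

The multiplicative form keeps the iterates nonnegative automatically, which is why EM is the natural starting point for photon-count data before any TV prior is added.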
In this paper a framework for defining scale-spaces, based on the computational geometry concept of α-shapes, is proposed. In this approach, objects (curves or surfaces) of increasing convexity are computed by selective sub-sampling, from the original shape to its convex hull. The relationships with the Empirical Mode Decomposition (EMD), the curvature motion-based scale-space, and some operators from mathematical morphology are studied. Finally, we address the problem of additive image/signal decomposition in fluorescence video-microscopy. An image sequence is mainly considered as a collection of 1D temporal signals, each pixel being associated with its temporal intensity variation.
According to Marr's paradigm of computational vision, the first process is an extraction of relevant features. The goal of this paper is to quantify and characterize the information carried by features, using image structure measured at feature points to reconstruct images. In this way, we indirectly evaluate the concept of feature-based image analysis. The main conclusions are that (i) a reasonably low number of features characterizes the image to such a high degree that visually appealing reconstructions are possible, and (ii) different feature types complement each other and all carry important information. The strategy is to define metamery classes of images and examine the information content of a canonical least informative representative of this class. Algorithms for identifying these are given. Finally, feature detectors localizing the most informative points relative to different complexity measures, derived from models of natural image statistics, are given.
In previous work we studied left-invariant diffusion on the 2D Euclidean motion group for crossing-preserving coherence-enhancing diffusion on 2D images. In this paper we study the equivalent three-dimensional case. This is particularly useful for processing High Angular Resolution Diffusion Imaging (HARDI) data, which can be considered as 3D orientation scores directly. A complicating factor in 3D is that all practical 3D orientation scores are functions on a coset space of the 3D Euclidean motion group instead of on the entire group. We show that, conceptually, we can still apply operations on the entire group by requiring the operations to be α-right-invariant. Subsequently, we propose to describe the local structure of the 3D orientation score using left-invariant derivatives, and we smooth 3D orientation scores using left-invariant diffusion. Finally, we show a number of results for linear diffusion on artificial HARDI data.
In this work we present a new variational approach for image registration where part of the data is known only on a low-dimensional manifold. Our work is motivated by navigated liver surgery, where we need to register 3D volumetric CT data and tracked 2D ultrasound (US) slices. The particular problem is that the set of all US slices does not assemble a full 3D domain. Other approaches use so-called compounding techniques to interpolate a 3D volume from the scattered slices. Instead of inventing new data by interpolation, here we use only the given data. Our variational formulation of the problem is based on a standard approach: we minimize a joint functional made up of a distance term and a regularizer with respect to a 3D spatial deformation field. In contrast to existing methods, we evaluate the distance of the images only on the two-dimensional manifold where the data is known. A crucial point here is regularization: to avoid kinks and to achieve a smooth deformation, it turns out that at least second-order regularization is needed. Our numerical method is based on Newton-type optimization. We present a detailed discretization and give some examples demonstrating the influence of regularization. Finally we show results for clinical data.
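The key distance-term idea, evaluating the image mismatch only where data exists, amounts to a masked sum of squared differences. A toy sketch (the volumes and the two-slice "manifold" below are hypothetical stand-ins for the CT volume and the tracked US slices):

```python
import numpy as np

def masked_ssd(u, v, mask):
    """Sum of squared differences evaluated only where data is known.
    Voxels outside the mask contribute nothing, so no interpolated or
    compounded volume is ever fabricated."""
    return float(((u - v) ** 2)[mask].sum())

# Toy 3D volumes; the "known" region is two axial slices, mimicking a
# sparse set of tracked US slices inside the CT domain.
ct = np.zeros((4, 8, 8))
us = np.ones((4, 8, 8))
mask = np.zeros((4, 8, 8), dtype=bool)
mask[1] = True
mask[3] = True
d = masked_ssd(ct, us, mask)
```

In the full method this masked distance is differentiated with respect to the 3D deformation field, and the second-order regularizer supplies the smoothness that the sparse data term cannot.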