Shape description and recognition is an important and interesting problem in scene analysis. The authors' approach to shape description is a formal model of a shape consisting of a set of primitives, their properties, and their interrelationships. The primitives are the simple parts and intrusions of the shape, which can be derived through the graph-theoretic clustering procedure previously described. The interrelationships are two ternary relations on the primitives: the intrusion relation, which relates two simple parts that join to the intrusion they surround, and the protrusion relation, which relates two intrusions to the protrusion between them. Using this model, a shape matching procedure has been developed that uses a tree search with look-ahead to find mappings from a prototype shape to a candidate shape.
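As an illustration only (the paper's own data structures and look-ahead criterion are not given here), a backtracking tree search over primitive mappings with a simple feasibility look-ahead might look as follows in Python; the relation-tuple format, function names, and the look-ahead test are assumptions, not the authors' method.

# Hypothetical sketch: map prototype primitives onto candidate primitives
# while keeping the ternary relations consistent under the mapping.
def match(proto_prims, cand_prims, proto_rels, cand_rels):
    """Return a mapping {prototype primitive -> candidate primitive} or None.
    proto_rels / cand_rels are sets of ternary relation tuples, e.g.
    ('intrusion', part_a, part_b, intrusion) or ('protrusion', intr_a, intr_b, protrusion)."""
    def consistent(mapping):
        # Every prototype relation whose arguments are all mapped must also
        # hold, under the mapping, in the candidate shape.
        for kind, a, b, c in proto_rels:
            if a in mapping and b in mapping and c in mapping:
                if (kind, mapping[a], mapping[b], mapping[c]) not in cand_rels:
                    return False
        return True

    def look_ahead(remaining, used):
        # Cheap feasibility test standing in for a richer look-ahead: enough
        # unused candidate primitives must remain to complete the mapping.
        return len(cand_prims) - len(used) >= len(remaining)

    def search(mapping, remaining, used):
        if not remaining:
            return dict(mapping)
        p = remaining[0]
        for c in cand_prims:
            if c in used:
                continue
            mapping[p] = c
            if consistent(mapping) and look_ahead(remaining[1:], used | {c}):
                result = search(mapping, remaining[1:], used | {c})
                if result is not None:
                    return result
            del mapping[p]
        return None

    return search({}, list(proto_prims), set())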
In order to carry out an automatic procedure that analyzes a sequence of images and thereby extracts the dynamic characteristics of the motion represented in the sequence, a number of methods are currently being considered and evaluated by many authors. In that context, one of the more interesting aspects is obtaining a segmentation of each frame that can be considered "normalized" over a given interval of frames in the sequence. A method to obtain a normalized segmentation for the special case considered is presented. The technique is based on the iterated application of a pair of algorithms: the first identifies the moving parts and checks the correctness of that identification; the second carries out the "normalization" of the frames of the sequence on the basis of the results furnished by the first algorithm.
ISBN (print): 9781665448994
Sketch-based image retrieval (SBIR) has attracted increasing interest in the computer vision community and has high impact in real applications. For instance, SBIR benefits eCommerce search engines because it allows users to formulate a query simply by drawing what they need to buy. However, current methods that show high retrieval precision work in a high-dimensional space, which negatively affects memory consumption and processing time. Although some authors have also proposed compact representations, these drastically degrade performance at low dimensions. Therefore, in this work we present results of evaluating methods for producing compact embeddings in the context of sketch-based image retrieval. Our main interest is in strategies that aim to preserve the local structure of the original space. The recent unsupervised, local-topology-preserving dimension reduction method UMAP fits our requirements and shows outstanding performance, even improving the precision achieved by SOTA methods. We evaluate six methods on two datasets, Flickr15K and eCommerce; the latter is another contribution of this work. We show that UMAP allows us to obtain feature vectors of 16 bytes while improving precision by more than 35%.
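A minimal Python sketch of the compaction step, assuming the umap-learn package and a 4-component float32 embedding (4 x 4 bytes = 16 bytes per vector); the paper's actual feature extractor, hyperparameters, and retrieval pipeline are not specified here.

# Compact embeddings via UMAP for SBIR descriptors (illustrative only).
import numpy as np
import umap

def compress_features(features, n_components=4, n_neighbors=15):
    """Reduce high-dimensional descriptors to a compact embedding that
    preserves local topology."""
    reducer = umap.UMAP(n_components=n_components, n_neighbors=n_neighbors)
    compact = reducer.fit_transform(features)        # shape (N, n_components)
    return compact.astype(np.float32)                # 4 components * 4 bytes = 16 bytes

# Usage (hypothetical file name): descriptors from any SBIR backbone, e.g. shape (N, 512)
# feats = np.load("sbir_descriptors.npy")
# compact = compress_features(feats)
# Retrieval then ranks gallery items by, e.g., Euclidean distance in the compact space.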
Large images are becoming more and more common in earth resources monitoring, medical diagnosis, and other applications. Often it would be helpful to work with only a subset of a large image, since less space and time would be required to process it. Subsets extracted according to semantic attributes have irregular shapes and as such are awkward to store and process. Irregular subsets can be covered with rectangular regions to simplify the regions to be stored and processed. The rectangular regions must then be organized with an index. Here several covering methods are compared and indexing methods are suggested. A surprising result is that the sequential greatest-coverage heuristic can lead to arbitrarily bad coverings in some situations. However, this disadvantage can be overcome by combining it with a tiling approach.
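For illustration, a tiling-style cover of an irregular region can be written as the following Python sketch; this is a hedged stand-in for the tiling idea, not the paper's covering or indexing methods, and the tile size is an arbitrary assumption.

# Cover every True pixel of an irregular binary mask with fixed-size tiles.
import numpy as np

def tile_cover(mask, tile_h=64, tile_w=64):
    """Return a list of (row, col, height, width) rectangles that together
    cover every True pixel of `mask`."""
    rows, cols = mask.shape
    tiles = []
    for r in range(0, rows, tile_h):
        for c in range(0, cols, tile_w):
            block = mask[r:r + tile_h, c:c + tile_w]
            if block.any():                       # keep only tiles touching the region
                tiles.append((r, c, block.shape[0], block.shape[1]))
    return tiles

# Usage:
# mask = np.zeros((300, 400), dtype=bool); mask[50:120, 80:300] = True
# rects = tile_cover(mask)   # rectangles to store and organize with an index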
ISBN (print): 9781424423392
Segmentation of blood vessels and extraction of their centerlines in 3D angiography are essential to diagnosis and prognosis of vascular diseases and to advanced image processing and analysis. In this paper we propose a semiautomatic method to perform those two tasks simultaneously. A user supplies two end points to the algorithm, and a vessel centerline between the two given points is extracted automatically. Local vessel widths are estimated as byproducts. Additional anchor points can be added in between to handle difficult situations. Our method is based upon a polygonal line algorithm. This algorithm is used to find principal curves, a nonlinear generalization of principal components, from point clouds. We discuss the application of principal curves to vessel extraction from a theoretical viewpoint. A novel algorithm is then proposed for the application. No data interpolation is needed in the algorithm, and the extracted centerlines adapt to the complexity of the vasculature on account of their nonparametric representation. We have tested the method on two synthetic data sets and two clinical data sets. Results show that it is highly robust to variation in image resolution, voxel anisotropy, and noise. Moreover, the centerlines obtained have subvoxel precision, and the estimated local widths are accurate up to the limit of the image resolution.
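A simplified Python sketch of one projection/update step of a polygonal-line (principal curve) fit, with fixed end points standing in for the user-supplied anchors; vertex insertion, smoothness penalties, and width estimation from the paper are omitted, and all names and parameters are illustrative.

# One iteration of a toy polygonal principal-curve fit on a point cloud.
import numpy as np

def project_to_segment(p, a, b):
    """Closest point on segment ab to point p."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return a + t * ab

def polygonal_curve_step(points, vertices):
    """Assign each point to its nearest segment, then move each interior
    vertex to the mean of the points assigned to its incident segments."""
    assign = [[] for _ in range(len(vertices) - 1)]
    for p in points:
        d_best, i_best = np.inf, 0
        for i in range(len(vertices) - 1):
            q = project_to_segment(p, vertices[i], vertices[i + 1])
            d = np.linalg.norm(p - q)
            if d < d_best:
                d_best, i_best = d, i
        assign[i_best].append(p)
    new_vertices = vertices.copy()
    for v in range(1, len(vertices) - 1):     # keep the two supplied end points fixed
        pts = assign[v - 1] + assign[v]
        if pts:
            new_vertices[v] = np.mean(pts, axis=0)
    return new_vertices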
ISBN (print): 9781509014378
The recent increase in popularity of binary feature descriptors has opened the door to new lightweight computer vision applications. Most research efforts thus far have been dedicated to the introduction of new large-scale binary features, which are primarily used for keypoint description and matching. In this paper, we show that the side products of small-scale binary feature computations can efficiently filter images and estimate image gradients. The improved efficiency of low-level operations can be especially useful in time-constrained applications. Through our experiments, we show that efficient binary feature convolutions can be used to mimic various image processing operations, and even outperform Sobel gradient estimation in the edge detection problem, both in terms of speed and F-measure.
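As a toy illustration (not the paper's kernels), the pairwise intensity comparisons that underlie binary descriptors can be turned into a crude gradient estimate and compared against a standard Sobel filter; offsets and names are assumptions.

# Pixel-pair difference gradient vs. Sobel gradient (illustrative only).
import numpy as np
from scipy import ndimage

def binary_comparison_gradient(img, offset=1):
    """Gradient magnitude from signed horizontal/vertical pixel-pair differences."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, :-offset] = img[:, offset:] - img[:, :-offset]   # compare pixel with right neighbour
    gy[:-offset, :] = img[offset:, :] - img[:-offset, :]   # compare pixel with lower neighbour
    return np.hypot(gx, gy)

def sobel_gradient(img):
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy)

# Usage: img is a float grayscale array; edge maps could then be obtained by
# thresholding either magnitude map and compared by F-measure against ground truth.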
ISBN (print): 9781509014378
Image segmentation is one of the most important low-level operations in image processing and computer vision. It is unlikely that a single algorithm with a fixed set of parameters can segment various images successfully due to variations between images. However, it can be observed that the desired segmentation boundaries are often detected more consistently than other boundaries in the output of state-of-the-art segmentation algorithms. In this paper, we propose a new approach to capture the consensus of information from a set of segmentations generated by varying the parameters of different algorithms. The probability of a segmentation curve being present is estimated based on our probabilistic image segmentation model. A connectivity probability map is constructed, and persistent segments are extracted by applying topological persistence to the probability map. Finally, a robust segmentation is obtained with the detection of certain segmentation curves guaranteed. The experiments demonstrate that our algorithm is able to consistently capture the curves present within the segmentation set.
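A rough Python stand-in for the consensus step, assuming simple averaged boundary maps and a crude component-wise persistence test rather than the paper's probabilistic model and full topological persistence; thresholds are arbitrary assumptions.

# Keep boundary components whose peak probability rises well above the
# birth threshold, a crude proxy for topological persistence.
import numpy as np
from scipy import ndimage

def persistent_boundaries(boundary_maps, birth=0.2, min_persistence=0.4):
    """boundary_maps: list of binary arrays (1 = boundary pixel).
    Returns a boolean map of boundaries judged persistent."""
    prob = np.mean(np.stack(boundary_maps).astype(float), axis=0)   # connectivity/probability map
    labels, n = ndimage.label(prob >= birth)                        # components born at the low threshold
    keep = np.zeros_like(prob, dtype=bool)
    for comp in range(1, n + 1):
        region = labels == comp
        persistence = prob[region].max() - birth                    # how far the component rises above its birth level
        if persistence >= min_persistence:
            keep |= region
    return keep

# Usage: run several segmenters / parameter settings, convert each result to a
# boundary map, then call persistent_boundaries(maps).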
Future generation x-ray computed tomography scanners will be characterized by their ability to record simultaneously a sufficient number of x-ray projections to allow reconstructions of multiple adjacent cross sections of the object under study. An ability to repeat the entire data collection procedure with great rapidity, allowing many scan passes per second, should encourage research and diagnostic studies of moving organs such as the heart and lungs in truly three dimensions and in real time. A combined series of algorithmic, software, special-purpose computer architecture, and hardware implementation studies has demonstrated significant progress toward computed tomography reconstruction processing rates of 10^9 to 10^10 arithmetic operations per second.
ISBN (print): 0769523722
Computational colour constancy tries to recover the colour of the scene illuminant of an image. Colour constancy algorithms can, in general, be divided into two groups: statistics-based approaches that exploit statistical knowledge of common lights and surfaces, and physics-based algorithms that are based on an understanding of how physical processes such as highlights manifest themselves in images. A combined physical and statistical colour constancy algorithm is introduced that integrates the advantages of the statistics-based Colour by Correlation method with those of a physics-based technique based on the dichromatic reflectance model. In contrast to other approaches, not only a single illuminant estimate is provided but a set of likelihoods over a given illuminant set. Experimental results on the benchmark Simon Fraser image database show the combined method to clearly outperform purely statistical and purely physical algorithms.
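A hedged Python sketch of the general fusion idea, combining per-illuminant log-likelihoods from a statistical module and a physics-based module; the weighting scheme and the likelihood computations themselves are placeholders, not the paper's formulation.

# Fuse two sets of log-likelihoods over a candidate illuminant set.
import numpy as np

def combine_likelihoods(stat_loglik, phys_loglik, alpha=0.5):
    """stat_loglik, phys_loglik: arrays of log-likelihoods, one per candidate
    illuminant. Returns normalized fused likelihoods and the best index."""
    fused = alpha * np.asarray(stat_loglik) + (1.0 - alpha) * np.asarray(phys_loglik)
    posterior = np.exp(fused - fused.max())
    posterior /= posterior.sum()                 # likelihoods over the whole illuminant set
    return posterior, int(np.argmax(posterior))

# Usage: each module scores the image under every candidate illuminant; the
# fused result gives both a single estimate (argmax) and the full set of
# likelihoods the abstract mentions.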
ISBN (print): 0818658258
In this paper, we present a system for detection, tracking, and representation of tubular objects in images. The uniqueness of the proposed system is twofold: at the macro level, the novelty of the system lies in the integration of object localization and tracking using geometric properties; at the micro level, it lies in the use of high- and low-level constraints to model the detection and tracking subsystems. The underlying philosophy for object detection is to extract perceptually significant features from the pixel-level image and then use these high-level cues to refine the precise boundaries. In the case of tubular objects, the perceptually significant features are anti-parallel line segments or, equivalently, their axes of symmetry. The axis of symmetry yields a coarse description of the object in terms of a bounding polygon. The polygon then provides the necessary boundary condition for the refinement process, which is based on dynamic programming. For tracking the object in a time sequence of images, the refined contour is then projected onto each consecutive frame. In addition, the system provides an axis-of-symmetry representation of the object for subsequent scientific analysis.
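As an illustrative sketch only (not the authors' detection subsystem), roughly anti-parallel segment pairs and points on their midline could be found as follows in Python; the segment representation and both thresholds are assumptions.

# Pair nearly anti-parallel line segments and take each pair's midpoint as a
# candidate point on a local axis of symmetry.
import numpy as np

def direction(seg):
    (x1, y1), (x2, y2) = seg
    v = np.array([x2 - x1, y2 - y1], dtype=float)
    return v / (np.linalg.norm(v) + 1e-12)

def axis_of_symmetry_candidates(segments, angle_tol_deg=10.0, max_gap=40.0):
    """segments: list of ((x1, y1), (x2, y2)) tuples."""
    axes = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            di, dj = direction(segments[i]), direction(segments[j])
            # nearly parallel or anti-parallel directions (orientation unordered)
            if abs(np.dot(di, dj)) < np.cos(np.radians(angle_tol_deg)):
                continue
            mi = np.mean(np.asarray(segments[i], dtype=float), axis=0)
            mj = np.mean(np.asarray(segments[j], dtype=float), axis=0)
            if np.linalg.norm(mi - mj) > max_gap:
                continue
            axes.append(tuple((mi + mj) / 2.0))   # midpoint of the pair approximates the axis
    return axes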