ISBN (print): 081941462X
The purpose of this study is to use modern image segmentation techniques to quantitate cyst area and number within a complete CT examination of the lungs. Lymphangioleiomyomatosis (LAM) was chosen because this disease produces many well-defined, thin-walled cysts of varying sizes throughout the lungs, providing a good test for the 2D image segmentation techniques used to separate LAM cysts from normal lung tissue. Quantitative measures of the lung, such as cyst area versus frequency, are then extracted automatically. Three women with LAM were examined using CT slices obtained at 20 mm intervals, with 1 to 1.5 mm collimation and a pixel size of 0.4-0.5 mm. Our segmentation algorithm operates in several stages. First, masks for each lung are generated automatically, so that only lung pixels are considered for cyst segmentation. Next, we threshold the data under the masks at a level of -900 Hounsfield units. The threshold separates LAM cysts from normal lung tissue and from other structures such as pulmonary veins and arteries. To determine the size of individual cysts, we grow all regions within the masked areas having brightness values lower than the threshold. These regions, which correspond to cysts, are then sorted by size, and a cyst histogram is computed for each patient.
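The mask-threshold-grow pipeline described in this abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the 5x5 "CT slice", the all-true lung mask, and the Hounsfield values are toy data; only the -900 HU threshold and the region-growing idea come from the abstract.

```python
from collections import deque

def segment_cysts(hu, mask, threshold=-900):
    """Label 4-connected regions of pixels below `threshold` inside the
    lung mask, and return the labels plus region sizes sorted largest-first."""
    rows, cols = len(hu), len(hu[0])
    labels = [[0] * cols for _ in range(rows)]
    sizes = {}
    next_label = 1
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and hu[r][c] < threshold and labels[r][c] == 0:
                # Grow a new region from this seed (breadth-first flood fill).
                queue = deque([(r, c)])
                labels[r][c] = next_label
                size = 0
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and hu[ny][nx] < threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                sizes[next_label] = size
                next_label += 1
    return labels, sorted(sizes.values(), reverse=True)

# Toy 5x5 "slice": two air-filled cysts (-950 HU) inside lung tissue (-700 HU).
hu = [
    [-700, -950, -700, -700, -700],
    [-700, -950, -700, -950, -950],
    [-700, -700, -700, -950, -700],
    [-700, -700, -700, -700, -700],
    [-700, -700, -700, -700, -700],
]
mask = [[True] * 5 for _ in range(5)]
labels, cyst_sizes = segment_cysts(hu, mask)
print(cyst_sizes)  # [3, 2]
```

The sorted size list is exactly the data needed for the per-patient cyst histogram (size versus frequency) mentioned in the abstract.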
ISBN (print): 081941624X
Optimal openings are considered for extraction of signal from noise in the random binary union-noise model. Disjointness of signal and noise is not assumed, nor are grains within the signal or within the noise assumed to be disjoint. There is a constraint on the overlapping, but this reflects the manner in which binary granular images are derived from gray-scale images of touching objects. The method assumes that the degraded image is segmented by the binary watershed algorithm and that an optimal opening by reconstruction must be found to remove segmented noise grains while passing segmented signal grains.
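An opening by reconstruction of the kind the abstract optimizes over can be sketched as follows. This is a generic binary version under assumed toy data, not the paper's optimal operator: erosion with a square structuring element removes grains too small to contain it, and geodesic dilation inside the original image restores the surviving grains to their exact original shape.

```python
from collections import deque

def opening_by_reconstruction(img, radius):
    """Binary opening by reconstruction: erode with a square structuring
    element of half-width `radius`, then reconstruct surviving grains to
    their full original extent by flooding through the original image."""
    rows, cols = len(img), len(img[0])
    # Erosion: a pixel survives only if its whole neighborhood is set.
    marker = [
        [img[r][c] and all(
            0 <= r + dr < rows and 0 <= c + dc < cols and img[r + dr][c + dc]
            for dr in range(-radius, radius + 1)
            for dc in range(-radius, radius + 1))
         for c in range(cols)]
        for r in range(rows)
    ]
    # Reconstruction: geodesic dilation (8-connected) from the eroded
    # marker through every set pixel of the original image.
    out = [[False] * cols for _ in range(rows)]
    queue = deque((r, c) for r in range(rows) for c in range(cols) if marker[r][c])
    for r, c in queue:
        out[r][c] = True
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and img[nr][nc] and not out[nr][nc]:
                    out[nr][nc] = True
                    queue.append((nr, nc))
    return out

# A large 3x3 "signal" grain plus an isolated single-pixel "noise" grain.
img = [
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 1],
    [0, 0, 0, 0, 0],
]
img = [[bool(v) for v in row] for row in img]
result = opening_by_reconstruction(img, radius=1)
print(sum(v for row in result for v in row))  # 9: the noise pixel is removed
```

The paper's contribution is choosing the reconstruction's parameters optimally for the union-noise model; the sketch only shows the filter being optimized.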
ISBN (print): 081941543X
This paper proposes a massively parallel line feature extraction technique for 2D images. This new scheme uses a modified Hough transform implemented in a massively parallel fashion to extract the line features in an input image. The algorithm is based on the recursive decomposition technique. A parallel Hough transform detects line segments in the subimages of the input image. A bottom-up approach then merges these line segments into longer lines. A pointerless tree structure is utilized to store feature information at the various levels of the merging process. The line segment merging process is equivalent to climbing the tree representing the line features in the entire image. Techniques for line feature merging and balancing of features, tradeoffs between determination of line properties and computation, and algorithmic complexity are addressed in detail.
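The voting kernel that such a scheme would run on each subimage is the standard Hough transform. The sketch below is sequential and uses hypothetical edge points; the paper's parallel decomposition, pointerless tree, and merging stages are not reproduced. Each edge point (x, y) votes for every line rho = x*cos(theta) + y*sin(theta) passing through it, and collinear points pile votes into one (rho, theta) bin.

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180, rho_step=1.0):
    """Accumulate Hough votes: each point votes once per quantized angle
    for the (rho, theta) bin of the line through it at that angle."""
    acc = Counter()
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_step), i)] += 1
    return acc

# Ten points on the vertical line x = 3 (theta index 0, rho = 3).
points = [(3, y) for y in range(0, 100, 10)]
acc = hough_lines(points)
(rho_bin, theta_bin), votes = acc.most_common(1)[0]
print((rho_bin, theta_bin), votes)  # (3, 0) 10
```

In the paper's scheme this accumulation would run independently per subimage, with detected segments then merged bottom-up through the tree.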
ISBN (print): 081941462X
Individual cluster validation has not received as much attention as partition validation. This paper presents two measures for evaluating individual clusters in a fuzzy partition. Both account for properties of the fuzzy memberships as well as the structure of the data. The first measure is a ratio between compactness and separation of the fuzzy clusters; the second is based on counting contradictions between properties of the fuzzy memberships and the structure of the data. These two measures are applied and compared in evaluating fuzzy clusters generated by the fuzzy c-means algorithm for segmentation of magnetic resonance images of the brain.
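A compactness-to-separation measure of the first kind can be sketched as below. The exact definition, membership exponent, and data are assumptions for illustration, not the paper's formulas: the fuzzy within-cluster scatter of cluster i is divided by the squared distance to its nearest other center, so a lower score means a better cluster.

```python
def cluster_validity(data, memberships, centers, i, m=2.0):
    """Compactness-to-separation ratio for cluster i (lower is better):
    membership-weighted scatter around center i, divided by the squared
    distance from center i to the nearest other center."""
    def d2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    compactness = sum(memberships[i][k] ** m * d2(x, centers[i])
                      for k, x in enumerate(data)) / len(data)
    separation = min(d2(centers[i], centers[j])
                     for j in range(len(centers)) if j != i)
    return compactness / separation

# Hypothetical 2D data: cluster 0 is tight, cluster 1 is loose.
data = [(0.0, 0.0), (0.2, 0.0), (0.0, 0.2), (5.0, 5.0), (6.5, 5.0), (5.0, 6.5)]
centers = [(0.07, 0.07), (5.5, 5.5)]
memberships = [
    [0.95, 0.95, 0.95, 0.05, 0.05, 0.05],  # degrees of belonging to cluster 0
    [0.05, 0.05, 0.05, 0.95, 0.95, 0.95],  # degrees of belonging to cluster 1
]
v0 = cluster_validity(data, memberships, centers, 0)
v1 = cluster_validity(data, memberships, centers, 1)
print(v0 < v1)  # True: the tight cluster scores better (lower)
```

In the paper's setting, the data points would be MR voxel feature vectors and the memberships would come from fuzzy c-means.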
ISBN (print): 081941462X
Computation of ventricular volume and of diagnostic quantities such as the ejection-fraction ratio, cardiac output, and mass requires detection of myocardial boundaries. Segmenting an image into separate regions is one of the most significant problems in vision. Terzopoulos et al. have proposed an approach to detect the contours of complex shapes, assuming a user-selected initial contour not very far from the desired solution. We propose an optimal method based on dynamic programming (DP) to detect contours; it is exact and not iterative. We first consider a list of uncertainty for each point selected by the user, within which the point is allowed to move. A search window is then created from two consecutive lists, and a DP algorithm is applied to obtain the optimal contour passing through these lists of uncertainty, making optimal use of the given information. For tracking, the final contour obtained in one frame is sampled and used as the set of initial points for the next frame, and the same DP process is applied. We have demonstrated the algorithm on natural objects in a broad spectrum of applications, including interactive segmentation of regions of interest in medical images.
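The DP step can be sketched as choosing one candidate from each uncertainty list so that image cost plus smoothness cost is minimal. The cost functions and candidate grids below are hypothetical stand-ins (here an "edge" lies along y = 2); the abstract specifies the DP structure but not these particular costs.

```python
def dp_contour(candidate_lists, unary_cost, pair_cost):
    """Pick one candidate from each uncertainty list minimizing the total
    image cost plus smoothness cost between consecutive picks (exact,
    non-iterative dynamic programming with backtracking)."""
    n = len(candidate_lists)
    cost = [unary_cost(p) for p in candidate_lists[0]]
    back = []
    for t in range(1, n):
        new_cost, ptr = [], []
        for p in candidate_lists[t]:
            prev = candidate_lists[t - 1]
            best = min(range(len(prev)), key=lambda j: cost[j] + pair_cost(prev[j], p))
            ptr.append(best)
            new_cost.append(cost[best] + pair_cost(prev[best], p) + unary_cost(p))
        cost, back = new_cost, back + [ptr]
    # Trace back the optimal contour from the cheapest final candidate.
    j = min(range(len(cost)), key=cost.__getitem__)
    path = [candidate_lists[-1][j]]
    for t in range(n - 2, -1, -1):
        j = back[t][j]
        path.append(candidate_lists[t][j])
    return path[::-1]

# Toy uncertainty lists: at each x, the point may sit at y = 1, 2, or 3.
lists = [[(x, y) for y in (1, 2, 3)] for x in range(4)]
edge = lambda p: 0.0 if p[1] == 2 else 1.0      # low cost on the "edge" y = 2
smooth = lambda a, b: abs(a[1] - b[1])          # penalize jumps between picks
print(dp_contour(lists, edge, smooth))  # [(0, 2), (1, 2), (2, 2), (3, 2)]
```

Because every combination of candidates is implicitly considered, the result is globally optimal over the lists, matching the abstract's claim of an exact, non-iterative method.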
ISBN (print): 081941493X
This paper describes a Bayesian image reconstruction algorithm with an entropy prior and a space-variant hyperparameter. The spatial variation of the hyperparameter allows different degrees of resolution in areas of high and low signal-to-noise ratio, thus avoiding the large residuals present in algorithms that use a constant balancing parameter. The space-variant hyperparameter determines the relative weight between the prior information and the likelihood, defining the degree of smoothness of the solution. To compute the variable hyperparameter, we used a segmentation technique based on artificial neural networks of the Self-Organizing Map type. Using this technique, we segmented the image into 25 regions and computed a different value of the hyperparameter for each one. We applied the method to Hubble Space Telescope camera images and to ground-based CCD data.
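The per-region hyperparameter assignment can be illustrated with a deliberately crude stand-in: here the segmentation labels are given (the paper derives them from a Self-Organizing Map, which is not reproduced), and mean region intensity is used as a rough signal/noise proxy, an assumption of this sketch rather than the paper's formula. Bright, high-SNR regions get weak smoothing; faint regions get strong smoothing.

```python
def region_hyperparameters(image, labels, base=1.0):
    """Assign each labeled region a regularization weight inversely related
    to its mean intensity (a crude stand-in for a per-region SNR estimate)."""
    sums, counts = {}, {}
    for row_img, row_lab in zip(image, labels):
        for v, l in zip(row_img, row_lab):
            sums[l] = sums.get(l, 0.0) + v
            counts[l] = counts.get(l, 0) + 1
    # Weak smoothing where the signal is strong, strong smoothing elsewhere.
    return {l: base / (1.0 + sums[l] / counts[l]) for l in sums}

# Hypothetical image: region 0 is bright (high SNR), region 1 is faint.
image = [
    [100.0, 100.0, 2.0, 2.0],
    [100.0, 100.0, 2.0, 2.0],
]
labels = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
lam = region_hyperparameters(image, labels)
print(lam[0] < lam[1])  # True: bright region gets the weaker smoothing weight
```

In the reconstruction itself, each region's weight would multiply the entropy prior term for the pixels it covers, in place of a single global balancing parameter.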
ISBN (print): 0819414778
To reach higher compression ratios in video sequence coding, the demands placed on the motion estimation module become ever greater. In this paper we present a motion estimation scheme that yields a motion field defined on every block (typically 8 by 8 or 16 by 16) and closely resembles the real motion in the scene. In more complex coding schemes, constraints were added to obtain better visual quality. The proposed algorithm is a 'one-pass' algorithm and is not explicitly based on any statistical model. A classification procedure over a predefined number of motion vector candidates defines the final motion field. An overview is given of the concept of using global information to calculate the motion vector field. A thorough description of the new algorithm is given, and simulation results are presented on real sequences. Further research will address using this algorithm in a segmentation-based codec for complex scenes.
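The baseline that any per-block motion estimator refines is full-search block matching, sketched below on hypothetical 8x8 frames (the paper's candidate-classification scheme is not reproduced). The estimator finds, within a search range, the displacement minimizing the sum of absolute differences (SAD) between a block in the current frame and the previous frame.

```python
def block_motion(prev, cur, block, bx, by, search):
    """Full-search block matching: return the (dx, dy) within +/- search
    minimizing the SAD between the current block at (bx, by) and the
    displaced block in the previous frame, plus that minimal SAD."""
    rows, cols = len(prev), len(prev[0])
    best, best_sad = (0, 0), float('inf')
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sy, sx = by + dy, bx + dx
            if not (0 <= sy and sy + block <= rows and 0 <= sx and sx + block <= cols):
                continue  # displaced block would leave the frame
            sad = sum(abs(cur[by + r][bx + c] - prev[sy + r][sx + c])
                      for r in range(block) for c in range(block))
            if sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best, best_sad

# A bright 2x2 patch moves from (row 2, col 2) to (row 3, col 4).
prev = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for r in (2, 3):
    for c in (2, 3):
        prev[r][c] = 9
for r in (3, 4):
    for c in (4, 5):
        cur[r][c] = 9
mv, sad = block_motion(prev, cur, block=2, bx=4, by=3, search=3)
print(mv, sad)  # (-2, -1) 0: the vector points back to the patch's old position
```

The paper's scheme would instead classify a predefined set of such candidate vectors, using global information, to obtain a motion field closer to the true scene motion than the raw SAD winner.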
ISBN (print): 081941462X
In this paper we introduce a new algorithm for segmentation of medical images of any dimension. The segmentation is based on geometric methods and multiscale analysis. A sequence of increasingly blurred images is created by Gaussian blurring. Each blurred image is segmented by locating its ridges, decomposing the ridges into curvilinear segments and assigning a unique label to each, and constructing a region for each ridge segment based on a flow model which uses vector fields naturally associated with the ridge finding. The regions from the initial image are leaf nodes in a tree. The regions from the blurred images are interior nodes of the tree. Arcs of the tree are constructed based on how regions at one scale merge via blurring into regions at the next scale. Objects in the image are represented by unions and differences of subtrees of the full tree. The tree is used as input to a visualization program which allows the user to interactively explore the hierarchy and define objects. Some results are provided for a 3D magnetic resonance image of a head.
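The scale-tree construction can be sketched by its linking step alone: given region labelings at successive blur scales (the Gaussian blurring and ridge-based region construction are not reproduced here, and the label grids below are hypothetical), each fine-scale region is linked to the coarse-scale region that overlaps it the most.

```python
def build_scale_tree(labelings):
    """Link each region at one scale to its parent at the next (blurrier)
    scale: the parent is the coarse region with the largest pixel overlap.
    Returns a dict mapping (scale, label) -> (scale + 1, parent_label)."""
    parents = {}
    for s in range(len(labelings) - 1):
        fine, coarse = labelings[s], labelings[s + 1]
        overlap = {}
        for row_f, row_c in zip(fine, coarse):
            for lf, lc in zip(row_f, row_c):
                overlap.setdefault(lf, {})
                overlap[lf][lc] = overlap[lf].get(lc, 0) + 1
        for lf, counts in overlap.items():
            parents[(s, lf)] = (s + 1, max(counts, key=counts.get))
    return parents

# Hypothetical labelings: blurring merges fine regions 1 and 2 into region 4.
scale0 = [
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [3, 3, 3, 3],
]
scale1 = [
    [4, 4, 4, 4],
    [4, 4, 4, 4],
    [5, 5, 5, 5],
]
parents = build_scale_tree([scale0, scale1])
print(parents)  # {(0, 1): (1, 4), (0, 2): (1, 4), (0, 3): (1, 5)}
```

In the paper, the leaves of this tree are regions of the unblurred image, and objects are defined interactively as unions and differences of its subtrees.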
ISBN (print): 081941462X
An important first step in diagnosis and treatment planning using tomographic imaging is differentiating and quantifying diseased as well as healthy tissue. One of the difficulties encountered in solving this problem to date has been distinguishing the partial volume constituents of each voxel in the image volume. Most proposed solutions to this problem involve analysis of planar images, in sequence, in two dimensions only. We have extended a model-based method of image segmentation that applies the technique of iterated conditional modes in three dimensions. A minimum of user intervention is required to train the algorithm. Partial volume estimates for each voxel in the image are obtained, yielding fractional compositions of multiple tissue types for individual voxels. A multispectral approach is applied where spatially registered data sets are available. The algorithm is simple and has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment dual-echo MRI data sets of multiple sclerosis patients, using lesions, gray matter, white matter, and cerebrospinal fluid as the partial volume constituents. The results of applying the algorithm to these data sets are presented and compared to manual lesion segmentations of the same data.
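Iterated conditional modes (ICM) can be sketched in its simplest 2D, hard-label form; the paper's 3D, multispectral, partial-volume extension is not reproduced, and the image, class means, and beta below are hypothetical. Each pixel repeatedly takes the label minimizing a data term (squared distance to the class mean) plus a smoothness term (beta times the number of disagreeing 4-neighbors).

```python
def icm_segment(image, means, beta=30.0, iters=10):
    """Iterated conditional modes: greedy per-pixel label updates on a
    data-plus-smoothness energy, swept until no label changes."""
    rows, cols = len(image), len(image[0])
    # Initialize each pixel with the nearest class mean (maximum likelihood).
    labels = [[min(range(len(means)), key=lambda k: (image[r][c] - means[k]) ** 2)
               for c in range(cols)] for r in range(rows)]
    for _ in range(iters):
        changed = False
        for r in range(rows):
            for c in range(cols):
                def energy(k):
                    data = (image[r][c] - means[k]) ** 2
                    disagree = sum(1 for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                                   if 0 <= r + dr < rows and 0 <= c + dc < cols
                                   and labels[r + dr][c + dc] != k)
                    return data + beta * disagree
                best = min(range(len(means)), key=energy)
                if best != labels[r][c]:
                    labels[r][c] = best
                    changed = True
        if not changed:
            break
    return labels

# Two tissue classes (means 0 and 10); the 10 at (1, 1) is an isolated
# noise pixel that the smoothness term should relabel.
image = [
    [0, 0, 0, 10],
    [0, 10, 0, 10],
    [0, 0, 0, 10],
    [0, 0, 0, 10],
]
labels = icm_segment(image, means=[0.0, 10.0], beta=30.0)
print(labels)  # [[0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 1]]
```

The paper's version works on 3D voxel neighborhoods, uses registered multispectral channels in the data term, and reports per-voxel fractional tissue compositions rather than a single hard label.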
ISBN (print): 081941476X
This paper describes a method for off-line recognition of handprinted and cursive words. The module takes as input a binary word image and a lexicon of strings, and ranks the lexicon according to the likelihood of a match to the given word image. To perform recognition, a set of character models is used. The models employ a graph representation: each character model consists of a set of features in spatial relationship to one another. The character models are built automatically in a clustering process. Character merging is performed by finding the appropriate correspondences between pairs of character sample features. This is accomplished by solving the assignment problem, for which an O(n³) linear programming algorithm exists. The end result of the training process is a set of random graph character prototypes for each character class. Because it is not possible to clearly segment the word image into characters before recognition, segmentation and recognition are bound together in a dynamic programming process. Results are presented for a set of word images extracted from mailpieces in the live mailstream.
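The joint segmentation-and-recognition DP can be sketched as follows. The span rule (each character consumes one or two primitive segments), the score table, and the toy lexicon are assumptions of this sketch; the paper's graph-based character matcher supplies the real scores.

```python
def word_score(n_segments, word, score):
    """Best alignment of `word` against n_segments primitive segments,
    where each character consumes one or two consecutive segments and
    score(i, j, ch) rates segments i..j-1 as an instance of ch."""
    NEG = float("-inf")
    # best[k][i]: best score matching the first k characters to segments 0..i-1.
    best = [[NEG] * (n_segments + 1) for _ in range(len(word) + 1)]
    best[0][0] = 0.0
    for k, ch in enumerate(word):
        for i in range(n_segments + 1):
            if best[k][i] == NEG:
                continue
            for j in (i + 1, i + 2):  # this character spans 1 or 2 segments
                if j <= n_segments:
                    cand = best[k][i] + score(i, j, ch)
                    if cand > best[k + 1][j]:
                        best[k + 1][j] = cand
    return best[len(word)][n_segments]

def rank_lexicon(n_segments, lexicon, score):
    """Rank lexicon entries by their best segmentation-and-match score."""
    return sorted(lexicon, key=lambda w: -word_score(n_segments, w, score))

# Hypothetical scores for 4 segments: the 'a' was over-segmented into
# segments 1 and 2, so it must be matched as a two-segment span.
true_spans = {(0, 1, "c"): 1.0, (1, 3, "a"): 1.0, (3, 4, "t"): 1.0}
score = lambda i, j, ch: true_spans.get((i, j, ch), -1.0)
print(rank_lexicon(4, ["cot", "cat"], score))  # ['cat', 'cot']
```

Because the DP searches over all segment groupings while scoring characters, no hard pre-segmentation of the word image is ever committed to, which is the point the abstract makes.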