ISBN (Print): 9780819497642
In this paper, a graphics processing unit (GPU) implementation of the finite-difference time-domain (FDTD) algorithm is presented to investigate the electromagnetic (EM) scattering from a one-dimensional (1-D) Gaussian rough soil surface. The FDTD lattices are truncated by a uniaxial perfectly matched layer (UPML), in which the finite-difference equations are carried out for the total computation domain. Using Compute Unified Device Architecture (CUDA) technology, significant speedup ratios are achieved for different incident frequencies, which demonstrates the efficiency of the GPU-accelerated FDTD method. The method is validated by comparing the numerical results with those obtained on the CPU, which show favorable agreement.
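The core of the FDTD scheme described above is a leapfrog update of the electric and magnetic fields on a staggered grid. As an illustration only, here is a minimal pure-Python 1-D Yee update with a soft Gaussian source; the function name `fdtd_1d`, the grid sizes, and the Courant number of 0.5 are assumptions of this sketch, and the paper's UPML truncation and CUDA kernels are not reproduced:

```python
import math

def fdtd_1d(nsteps=200, nz=100, src=50):
    """Leapfrog E/H updates on a 1-D staggered grid with a soft Gaussian source.
    Free-space toy version (no UPML absorbing boundaries)."""
    ez = [0.0] * nz  # electric field samples
    hy = [0.0] * nz  # magnetic field samples (staggered half a cell)
    for n in range(nsteps):
        # H update (Courant number 0.5 folded into the coefficient)
        for k in range(nz - 1):
            hy[k] += 0.5 * (ez[k + 1] - ez[k])
        # E update
        for k in range(1, nz):
            ez[k] += 0.5 * (hy[k] - hy[k - 1])
        # soft Gaussian pulse source injected at one cell
        ez[src] += math.exp(-0.5 * ((n - 30) / 10.0) ** 2)
    return ez
```

In a CUDA implementation, each inner `for k` loop becomes one kernel launch with one thread per cell, which is what makes the speedup ratios reported for different incident frequencies possible.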
ISBN (Print): 9780819494504
The development of tools for the processing of color images is often complicated by nonstandardness - the notion that different image regions corresponding to the same tissue will occupy different ranges in the color spectrum. In digital pathology (DP), these issues are often caused by variations in slide thickness, staining, scanning parameters, and illumination. Nonstandardness can be addressed via standardization, a pre-processing step that aims to improve color constancy by realigning color distributions of images to match that of a pre-defined template image. Unlike color normalization methods, which aim to scale (usually linearly or assuming that the transfer function of the system is known) the intensity of individual images, standardization is employed to align distributions in broad tissue classes (e.g. epithelium, stroma) across different DP images irrespective of institution, protocol, or scanner. Intensity standardization has previously been used for addressing the issue of intensity drift in MRI images, where similar tissue regions have different image intensities across scanners and patients. However, this approach is a global standardization (GS) method that aligns histograms of entire images at once. By contrast, histopathological imagery is complicated by the (a) additional information present in color images and (b) heterogeneity of tissue composition. In this paper, we present a novel color expectation-maximization (EM) based standardization (EMS) scheme to decompose histological images into independent tissue classes (e.g. nuclei, epithelium, stroma, lumen) via the EM algorithm and align the color distributions for each class independently. Experiments are performed on prostate and oropharyngeal histopathology tissues from 19 and 26 patients, respectively. Evaluation methods include (a) a segmentation-based assessment of color consistency in which normalized median intensity (NMI) is calculated from segmented regions across a dataset and (b) a qu
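As a toy illustration of the decomposition step, the following fits a two-class 1-D Gaussian mixture by EM. The function name, initialization, and the two-class restriction are assumptions of this sketch; the paper works in color space with more tissue classes, and the subsequent per-class alignment to the template is omitted:

```python
import math

def em_gmm_1d(x, iters=50):
    """Fit a 2-component 1-D Gaussian mixture by EM.
    Returns (weights, means, stds)."""
    mu = [min(x), max(x)]       # crude initialization at the data extremes
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: per-point responsibilities of each component
        resp = []
        for xi in x:
            p = [w[j] / (sd[j] * math.sqrt(2 * math.pi)) *
                 math.exp(-0.5 * ((xi - mu[j]) / sd[j]) ** 2) for j in (0, 1)]
            s = sum(p) or 1e-12
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means, and variances
        for j in (0, 1):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(x)
            mu[j] = sum(r[j] * xi for r, xi in zip(resp, x)) / nj
            var = sum(r[j] * (xi - mu[j]) ** 2 for r, xi in zip(resp, x)) / nj
            sd[j] = math.sqrt(max(var, 1e-6))
    return w, mu, sd
```

In the EMS scheme, each recovered class distribution would then be realigned independently to the corresponding class in the template image, rather than aligning the whole-image histogram at once as in global standardization.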
ISBN (Print): 9780819494436
Multi-atlas segmentation methods are among the most accurate approaches for the automatic labeling of magnetic resonance (MR) brain images. The individual segmentations obtained through multi-atlas propagation can be combined using an unweighted or locally weighted fusion strategy. Label overlaps can be further improved by refining the label sets based on the image intensities using the expectation-maximisation (EM) algorithm. A drawback of these approaches is that they do not consider knowledge about the statistical intensity characteristics of a certain anatomical structure, especially its intensity variance. In this work we employ learned characteristics of the intensity distribution in various brain regions to improve on multi-atlas segmentations. Based on the intensity profile within labels in a training set, we estimate a normalized variance error for each structure. The boundaries of a segmented region are then adjusted until its intensity characteristics are corrected for the variance error observed in the training sample. Specifically, we start with a high-probability "core" segmentation of a structure, and enlarge it to maximise the similarity with the expected intensity variance. We applied the method to 35 datasets of the OASIS database for which manual segmentations into 138 regions are available. We assess the resulting segmentations by comparison with this gold standard, using overlap metrics. Intensity-based statistical correction improved similarity indices (SI) compared with EM-refined multi-atlas propagation from 75.6% to 76.2% on average. We apply our novel correction approach to segmentations obtained through either a locally weighted fusion strategy or an EM-based method and show significantly increased similarity indices.
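The core-growing idea can be sketched on a toy 1-D example: starting from the high-probability voxels, the region is enlarged in decreasing-probability order until its intensity variance best matches the variance learned from training data. The function name, the probability threshold, and the 1-D setting are assumptions of this sketch; the actual method operates on 3-D label maps:

```python
def grow_to_variance(intensities, probs, target_var, core_thresh=0.9):
    """Enlarge a high-probability 'core' region (in decreasing-probability
    order) and keep the enlargement whose intensity variance is closest to
    the expected variance. Assumes the core is non-empty."""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    region = [i for i in order if probs[i] >= core_thresh]  # initial core

    def var(idx):
        vals = [intensities[i] for i in idx]
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / len(vals)

    best, best_err = list(region), abs(var(region) - target_var)
    for i in order:
        if i in region:
            continue
        region.append(i)
        err = abs(var(region) - target_var)
        if err < best_err:
            best_err, best = err, list(region)
    return best
```

With intensities drawn from a structure (~10) followed by background (~0), the growth stops absorbing voxels once background values would inflate the variance beyond the learned target.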
ISBN (Print): 9780819494405
The problem of event detection in multimedia clips is typically handled by modeling each of the component modalities independently, then combining their detection scores in a late fusion approach. One of the problems of a late fusion model in the multimedia setting is that the detection scores may be missing from one or more components for a given clip; e.g., when there is no speech in the clip, or when there is no overlay text. Standard fusion techniques typically address this problem by assuming a default backoff score for a component when its detection score is missing for a clip. This may potentially bias the fusion model, especially if there are many missing detections from a given component. In this work, we present the Sparse Conditional Mixture Model (SCMM) which models only the observed detection scores for each example, thereby avoiding making any assumptions about the distributions of the scores that are made by backoff models. Our experiments in multimedia event detection using the TRECVID-2011 corpus demonstrate that SCMM achieves statistically significant performance gains over standard late fusion techniques. The SCMM model is very general and is applicable to fusion problems with missing data in any domain.
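The SCMM itself places a latent mixture over only the observed scores; the sketch below merely contrasts plain backoff fusion with the simplest observed-only alternative, to show where the backoff bias comes from. The function names, uniform weights, and the default backoff value of 0.5 are assumptions of this sketch:

```python
def backoff_fusion(scores, weights, default=0.5):
    """Standard late fusion: a missing component score (None) is replaced
    by a fixed backoff value before the weighted average."""
    filled = [default if s is None else s for s in scores]
    return sum(w * s for w, s in zip(weights, filled)) / sum(weights)

def observed_only_fusion(scores, weights):
    """Fuse only the observed scores, renormalizing the component weights -
    the modeling idea behind SCMM, minus the mixture-model machinery."""
    obs = [(w, s) for w, s in zip(weights, scores) if s is not None]
    wsum = sum(w for w, _ in obs)
    return sum(w * s for w, s in obs) / wsum
```

For a clip with strong visual and text scores but no speech, the backoff value drags the fused score toward the default, whereas the observed-only fusion is unaffected by the missing modality.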
A color-transfer method is proposed that transfers the colors of one image to another over local regions using their dominant colors. In order to naturally transfer the colors and moods of a target image to a source image, we need to find the local regions of colors that need to be modified in the image. Since the dominant colors of each image can be used for the estimation of color regions, we develop a grid-based mode detection, which can efficiently estimate the dominant colors of an image. Based on these dominant colors, our proposed method performs a consistent segmentation of source and target images by using cost-volume filtering. Through the segmentation procedure, we can estimate complex color characteristics and transfer colors to the local regions of an image. For an intuitive and natural color transfer, region matching is also crucial. Therefore, we use the visually prominent colors in the image to meaningfully connect each segmented region, in which a modified color-transfer method is applied to balance the overall luminance of the final result. In the Experimental Results section, various convincing results obtained with the proposed method are demonstrated. (C) 2013 SPIE and IS&T
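Grid-based mode detection can be sketched as coarse color quantization followed by ranking cell counts. The bin count, `k`, and the choice of cell centers as representatives are assumptions of this sketch:

```python
from collections import Counter

def dominant_colors(pixels, bins=4, k=2):
    """Grid-based mode detection: quantize RGB into a coarse bins^3 grid
    and return the centers of the k most-populated cells."""
    step = 256 // bins
    # map each pixel to its grid cell (clamped to the last bin)
    counts = Counter(tuple(min(c // step, bins - 1) for c in p) for p in pixels)
    # represent each winning cell by its center color
    return [tuple(b * step + step // 2 for b in cell)
            for cell, _ in counts.most_common(k)]
```

Because the grid is coarse, nearby shades fall into the same cell, so the modes of the cell histogram approximate the image's dominant colors at a fraction of the cost of a full clustering.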
ISBN (Print): 9780819489630
In order to fit an unseen surface using a statistical shape model (SSM), a correspondence between the unseen surface and the model needs to be established before the shape parameters can be estimated based on this correspondence. The correspondence and parameter estimation problem can be modeled probabilistically by a Gaussian mixture model (GMM) and solved by the expectation-maximization iterative closest point (EM-ICP) algorithm. In this paper, we propose to exploit the linearity of the principal component analysis (PCA) based SSM, and estimate the parameters for the unseen shape surface under the EM-ICP framework. The symmetric data terms are devised to enforce the mutual consistency between the model reconstruction and the shape surface. The a priori shape information encoded in the SSM is also included as regularization. The estimation method is applied to the shape modeling of the hippocampus using a hippocampal SSM.
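The E-step of EM-ICP replaces hard closest-point matches with Gaussian-weighted soft assignments. A minimal 2-D sketch follows; the function names and the fixed `sigma` are assumptions, and the SSM parameter update and symmetric data terms are omitted:

```python
import math

def soft_correspondences(model_pts, surf_pts, sigma=1.0):
    """EM-ICP E-step sketch: Gaussian-weighted soft assignment of each
    surface point to every model point (rows normalized to sum to 1)."""
    W = []
    for s in surf_pts:
        row = [math.exp(-sum((a - b) ** 2 for a, b in zip(s, m))
                        / (2 * sigma ** 2)) for m in model_pts]
        z = sum(row) or 1e-12
        W.append([r / z for r in row])
    return W

def virtual_matches(W, model_pts):
    """M-step ingredient: responsibility-weighted 'virtual' model point for
    each surface point, used in place of a hard closest-point match."""
    dims = len(model_pts[0])
    return [tuple(sum(w * m[d] for w, m in zip(row, model_pts))
                  for d in range(dims)) for row in W]
```

As `sigma` shrinks over iterations, the soft assignments approach the hard closest-point matches of classical ICP, which is the usual annealing behavior of EM-ICP.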
ISBN (Print): 9780819490711
The Histogram Probabilistic Multi-Hypothesis Tracker (H-PMHT) is a parametric track-before-detect algorithm that has been shown to give good performance at a relatively low computation cost. Recent research has extended the algorithm to allow it to estimate the signature of targets in the sensor image. This paper shows how this approach can be adapted to address the problem of group target tracking where the motion of several targets is correlated. The group structure is treated as the target signature, resulting in a two-tiered estimator for the group bulk-state and group element relative position.
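The two-tiered representation can be illustrated as splitting element states into a group bulk state plus relative offsets. This is a simplification for intuition only; H-PMHT estimates these quantities jointly from the sensor image rather than from known element states:

```python
def two_tier_decompose(element_states):
    """Sketch of the two-tiered group representation: a bulk state (group
    centroid) plus per-element offsets relative to it."""
    n = len(element_states)
    dims = len(element_states[0])
    bulk = tuple(sum(s[d] for s in element_states) / n for d in range(dims))
    offsets = [tuple(s[d] - bulk[d] for d in range(dims))
               for s in element_states]
    return bulk, offsets
```

Treating the offsets as the "target signature" lets the bulk state absorb the correlated group motion while the signature estimator tracks the slowly varying internal structure of the group.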
ISBN (Print): 9780819490711
In this paper we present an approach for tracking in long range radar scenarios. We show that in these scenarios the extended Kalman filter is not desirable as it suffers from major consistency problems, and that particle filters may suffer from a loss of diversity among particles after resampling. This leads to sample impoverishment and the divergence of the filter. In the scenarios studied, this loss of diversity can be attributed to the very low process noise. However, a regularized particle filter and the Gaussian Mixture Sigma-Point Particle Filter are shown to avoid this diversity problem while producing consistent results.
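Kernel regularization counters sample impoverishment by jittering each resampled particle. A minimal 1-D sketch combining systematic resampling with Gaussian jitter follows; the function name and the fixed `bandwidth` are assumptions, and practical filters scale the kernel to the sample covariance:

```python
import random

def regularized_resample(particles, weights, bandwidth=0.1, rng=random):
    """Systematic resampling followed by Gaussian jitter (kernel
    regularization) - the standard remedy when very low process noise
    would otherwise collapse the particle set onto a few values."""
    n = len(particles)
    cum, c = [], 0.0
    for w in weights:               # cumulative weight distribution
        c += w
        cum.append(c)
    u0 = rng.random() / n           # single random offset, stratified picks
    out, j = [], 0
    for i in range(n):
        u = u0 + i / n
        while cum[j] < u:
            j += 1
        # jitter restores diversity among duplicated particles
        out.append(particles[j] + rng.gauss(0.0, bandwidth))
    return out
```

Even when all the weight sits on a single particle, the jitter keeps the resampled set diverse, which is the failure mode the abstract attributes to plain resampling under very low process noise.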
ISBN (Print): 9780819489388
One of the enduring mysteries in the history of the Renaissance is the adult appearance of the archetypical "Renaissance Man," Leonardo da Vinci. His only acknowledged self-portrait is from an advanced age, and various candidate images of younger men are difficult to assess given the absence of documentary evidence. One clue about Leonardo's appearance comes from the remark of the contemporary historian, Vasari, that the sculpture of David by Leonardo's master, Andrea del Verrocchio, was based on the appearance of Leonardo when he was an apprentice. Taking a cue from this statement, we suggest that the more mature sculpture of St. Thomas, also by Verrocchio, might also have been a portrait of Leonardo. We tested the possibility that Leonardo was the subject of Verrocchio's sculpture by a novel computational technique for the comparison of three-dimensional facial configurations. Based on quantitative measures of similarity, we also assess whether another pair of candidate two-dimensional images are plausibly attributable as portraits of Leonardo as a young adult. Our results are consistent with the claim that Leonardo is indeed the subject in these works, but we need comparisons with images in a larger corpus of candidate artworks before our results achieve statistical significance.
Colorization is the process of adding colors to monochrome images. State-of-the-art colorization methods can be generally categorized into example-based and scribble-based algorithms. In this paper, we present a new scribble-based colorization algorithm based on Bayesian inference and nonlocal likelihood computation. We convert the process of image colorization into a probability optimization problem in this Bayesian framework, where we use nonlocal-mean likelihood computation and Markov random field priors. The expectation-maximization method is used to solve the optimization objective function. Finally, experimental results demonstrate the effectiveness of the proposed algorithm. (C) 2011 SPIE and IS&T. [DOI: 10.1117/1.3582139]
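The nonlocal likelihood idea, reduced to a toy 1-D form: each pixel takes a similarity-weighted average of the scribbled chrominance values, with weights derived from intensity similarity. The similarity kernel, the bandwidth `h`, and the function name are assumptions of this sketch, and the paper's MRF prior and EM optimization are omitted:

```python
import math

def propagate_colors(gray, scribbles, h=0.1):
    """Nonlocal-style color propagation sketch: each pixel's chrominance is a
    weighted average of scribbled values, weighted by intensity similarity.
    `scribbles` maps pixel index -> scribbled chrominance value."""
    out = []
    for g in gray:
        ws = [(math.exp(-((g - gray[i]) ** 2) / (h ** 2)), c)
              for i, c in scribbles.items()]
        z = sum(w for w, _ in ws) or 1e-12
        out.append(sum(w * c for w, c in ws) / z)
    return out
```

Pixels whose gray levels resemble a scribbled pixel inherit that scribble's color almost entirely, while dissimilar pixels receive negligible weight, which is the behavior the full Bayesian model refines with its MRF prior.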