We have studied the properties of two iterative reconstruction algorithms, namely, the maximum likelihood with expectation maximization (ML-EM) and the weighted least squares with conjugate gradient (WLS-CG) algorithms, for use in compensating for attenuation and detector response in cardiac SPECT imaging. A realistic phantom, derived from a patient X-ray CT study, was used to simulate Tl-201 SPECT data in the investigation. Both algorithms are effective in compensating for the nonuniform attenuation distribution in the thorax region and the spatially variant detector response function of the imaging system. At low iteration numbers, the addition of detector response compensation improves both spatial resolution and image noise when compared with attenuation compensation alone. However, at higher iteration numbers, image noise increases more rapidly when detector response compensation is included, and the increase is more dramatic for the WLS-CG algorithm. In general, the convergence rate of the WLS-CG algorithm is about ten times that of the ML-EM algorithm, but the WLS-CG algorithm also exhibits a faster increase in image noise at large iteration numbers. This study is valuable in the search for useful and practical reconstruction methods for improved clinical cardiac SPECT imaging.
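The ML-EM algorithm discussed above has a compact multiplicative update. The following is a minimal sketch, assuming a generic dense system matrix `A` (in practice, attenuation and detector response would be modelled inside `A`); all names are illustrative, not the authors' implementation.

```python
import numpy as np

def ml_em(A, y, n_iters=50):
    """Minimal ML-EM reconstruction sketch.

    A : (n_bins, n_voxels) system matrix mapping activity to projections;
        attenuation and detector response would be modelled inside A.
    y : (n_bins,) measured projection counts.
    """
    x = np.ones(A.shape[1])                  # flat initial estimate
    sens = np.maximum(A.sum(axis=0), 1e-12)  # sensitivity image, A^T 1
    for _ in range(n_iters):
        proj = A @ x                         # forward projection
        ratio = y / np.maximum(proj, 1e-12)  # measured / estimated counts
        x *= (A.T @ ratio) / sens            # multiplicative ML-EM update
    return x
```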
Visual evaluation of many magnetic resonance images is a difficult task. Therefore, computer-assisted brain tumor classification techniques have been proposed, but these techniques have several drawbacks or limitations. Capsule-based neural networks are a new approach that can preserve the spatial relationships of learned features using a dynamic routing algorithm. In this way, not only does tumor recognition performance increase, but sampling efficiency and generalisation capability also improve. Therefore, in this work, a Capsule Network (CapsNet) is used to achieve fully automated classification of tumors from brain magnetic resonance images, handling the three most prevalent tumor types (pituitary, glioma, and meningioma). The main contributions of this paper are as follows: 1) A comprehensive review of CapsNet-based methods is presented. 2) A new CapsNet topology is designed using Sobolev gradient-based optimisation, expectation-maximisation-based dynamic routing, and tumor boundary information. 3) The network topology is applied to categorise the three types of brain tumors. 4) Comparative evaluations against the results obtained by other methods are performed. According to the experimental results, the proposed CapsNet-based technique can extract the desired features from the image data sets and classifies tumors automatically with 92.65% accuracy.
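Dynamic routing is the mechanism that lets capsules preserve spatial relationships between features. Below is a minimal NumPy sketch of the original routing-by-agreement procedure; the paper uses an expectation-maximisation-based variant, so this illustrates the general idea rather than the authors' exact routing.

```python
import numpy as np

def squash(s, eps=1e-9):
    """Squashing nonlinearity: keeps direction, maps norm into [0, 1)."""
    norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, n_iters=3):
    """Routing by agreement between two capsule layers.

    u_hat : (n_in, n_out, dim) prediction vectors from lower-level capsules.
    Returns (n_out, dim) output capsule vectors.
    """
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                       # routing logits
    for _ in range(n_iters):
        e = np.exp(b - b.max(axis=1, keepdims=True))  # stable softmax
        c = e / e.sum(axis=1, keepdims=True)          # coupling coefficients
        s = np.einsum('ij,ijd->jd', c, u_hat)         # weighted sum per output capsule
        v = squash(s)                                 # output capsule vectors
        b += np.einsum('ijd,jd->ij', u_hat, v)        # agreement increases coupling
    return v
```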
Dynamic texture (DT) classification has attracted extensive attention in the field of image sequence analysis. Probability distribution models, which have been used to analyse DTs, can describe the distribution properties of signals well. Here, the authors introduce finite mixtures of Gumbel distributions (MoGD) and the corresponding parameter estimation method based on the expectation-maximisation algorithm. The authors then propose MoGD-based DT features for DT classification. Specifically, after decomposing DTs with the dual-tree complex wavelet transform (DT-CWT), the median values of the complex wavelet coefficient magnitudes of non-overlapping blocks in the detail subbands are modelled with MoGDs. The model parameters are accumulated into a feature vector to describe the DT. During classification, a variational approximation of the Kullback-Leibler divergence is used to measure the similarity between different DTs. Experimental evaluations on two popular benchmark DT data sets (UCLA and DynTex++) demonstrate the effectiveness of the proposed approach.
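As a concrete illustration of the MoGD estimation step, here is a minimal EM sketch for a mixture of Gumbel distributions. The M-step below uses a weighted method-of-moments update for (mu, beta) as a simplifying assumption (the Gumbel maximum-likelihood M-step has no closed form); the authors' exact update rule may differ.

```python
import numpy as np

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def gumbel_pdf(x, mu, beta):
    z = (x - mu) / beta
    return np.exp(-(z + np.exp(-z))) / beta

def em_mogd(x, K=2, n_iters=100):
    """EM sketch for a finite mixture of Gumbel distributions (MoGD)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    w = np.full(K, 1.0 / K)                          # mixing weights
    mu = np.quantile(x, np.linspace(0.25, 0.75, K))  # spread initial locations
    beta = np.full(K, x.std() + 1e-9)                # initial scales
    for _ in range(n_iters):
        # E-step: responsibilities of each component for each sample
        r = w * np.stack([gumbel_pdf(x, mu[k], beta[k]) for k in range(K)], axis=1)
        r /= r.sum(axis=1, keepdims=True) + 1e-300
        # M-step: weighted moment matching, using mean = mu + gamma*beta
        # and var = pi^2 beta^2 / 6 (an approximation, not exact ML)
        nk = r.sum(axis=0)
        mean_k = (r * x[:, None]).sum(axis=0) / nk
        var_k = (r * (x[:, None] - mean_k) ** 2).sum(axis=0) / nk
        beta = np.sqrt(6.0 * var_k) / np.pi
        mu = mean_k - EULER_GAMMA * beta
        w = nk / n
    return w, mu, beta
```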
In this study, the authors address the problem of multiple-antenna spectrum sensing in cognitive radios by exploiting prior information about the unknown parameters. Specifically, under the assumption that the unknown parameters are random with given proper distributions, the authors use a Bayesian generalised likelihood ratio test (B-GLRT) to derive the corresponding detectors for three different scenarios: (i) only the channel gains are unknown to the secondary user (SU), (ii) only the noise variance is unknown to the SU, and (iii) both the channel gains and the noise variance are unknown to the SU. For the first and third scenarios, the authors use the iterative expectation-maximisation algorithm to estimate the unknown parameters and derive its convergence rate. It is shown that the proposed B-GLRT detectors have low complexity and, moreover, are optimal even for a finite number of samples. The simulation results demonstrate that the proposed B-GLRT detectors perform acceptably even with a finite number of samples and also outperform recently proposed detectors for multiple-antenna spectrum sensing.
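For orientation, the classical (non-Bayesian) GLRT for the case where both channel gains and noise variance are unknown reduces to an eigenvalue-to-trace ratio of the sample covariance matrix. The sketch below shows that baseline statistic only; the paper's B-GLRT additionally places priors on the unknown parameters, which is not reproduced here.

```python
import numpy as np

def glrt_statistic(Y):
    """Classical eigenvalue-based GLRT statistic for multi-antenna sensing.

    Y : (M, N) complex samples, M antennas x N snapshots. Decide that a
    primary user is present when the statistic exceeds a threshold.
    """
    M, N = Y.shape
    R = (Y @ Y.conj().T) / N               # sample covariance matrix
    eigvals = np.linalg.eigvalsh(R)        # real eigenvalues (R is Hermitian)
    return eigvals.max() / (np.trace(R).real / M)

# under noise-only data the statistic stays close to 1
rng = np.random.default_rng(0)
Y = (rng.standard_normal((4, 200)) + 1j * rng.standard_normal((4, 200))) / np.sqrt(2)
print(glrt_statistic(Y))
```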
Simultaneous rest Tc-99m-Sestamibi/I-123-BMIPP cardiac SPECT imaging has the potential to replace current clinical Tc-99m-Sestamibi rest/stress imaging and therefore has great potential for patients with chest pain presenting to the emergency department. Separation of the images of these two radionuclides is difficult, however, because their emission energies are close. The authors previously developed a fast Monte Carlo (MC)-based joint ordered-subset expectation maximization (JOSEM) iterative reconstruction algorithm (MC-JOSEM), which simultaneously compensates for scatter, cross talk, and detector response within the reconstruction algorithm. In this work, the authors evaluated the performance of MC-JOSEM on a realistic population of Tc-99m/I-123 studies, using cardiac phantom data acquired on a Siemens *** system with a standard cardiac protocol. The authors also compared the performance of MC-JOSEM on estimation tasks to that of two other methods: standard OSEM using photopeak energy windows without scatter correction (NSC-OSEM) and standard OSEM using a Compton-scatter energy window for scatter correction (SC-OSEM). For each radionuclide, the authors separately acquired high-count projections of radioactivity in the myocardium wall, liver, and soft-tissue background compartments of a water-filled torso phantom, and they generated synthetic projections of various dual-radionuclide activity distributions. Images of different combinations of myocardium wall/background activity concentration ratios for each radionuclide were reconstructed by NSC-OSEM, SC-OSEM, and MC-JOSEM. For activity estimation in the myocardium wall, MC-JOSEM always produced the best relative bias and relative standard deviation compared with NSC-OSEM and SC-OSEM for all activity combinations. On average, the relative biases after 100 iterations were 8.1% for Tc-99m and 3.7% for I-123 with MC-JOSEM, 39.4% for Tc-99m and 23.7% for I-123 with NSC-OSEM, and 20.9% for Tc-99m with SC-OSEM…
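The evaluation above hinges on two figures of merit: the relative bias and the relative standard deviation of the activity estimates over repeated realisations. A minimal sketch of how these would be computed (names are illustrative):

```python
import numpy as np

def relative_bias_and_std(estimates, true_activity):
    """Relative bias and relative standard deviation (both in %) of
    activity estimates over repeated noise realisations."""
    rel_err = (np.asarray(estimates, dtype=float) - true_activity) / true_activity
    return 100.0 * rel_err.mean(), 100.0 * rel_err.std(ddof=1)
```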
In this study, the authors address a two-dimensional (2D) shape registration problem on data with anisotropic-scale deformation and noise. First, the model is formulated under the iterative closest point (ICP) framework, one of the most popular methods for shape registration. To overcome the effect of noise, the expectation-maximisation algorithm is used to improve the model. Then, the structure of Lie groups is adopted to parameterise the proposed model, which provides a unified framework for dealing with shape registration problems. This representation makes it possible to introduce suitable constraints into the model, which improves the robustness of the algorithm. Thereby, the 2D shape registration problem is turned into an optimisation problem on a matrix Lie group. Furthermore, a sequence of quadratic programs is designed to approximate the solution of the model. Finally, several comparative experiments are carried out to validate that the authors' algorithm performs well in terms of robustness, especially in the presence of outliers.
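To make the EM-ICP idea concrete, here is a minimal 2D sketch with Gaussian soft correspondences and a rigid transform; the paper's method additionally handles anisotropic scale through a matrix Lie-group parameterisation with quadratic-programming steps, which this sketch does not attempt.

```python
import numpy as np

def em_icp_2d(src, dst, n_iters=30, sigma2=1.0):
    """Minimal 2D EM-ICP sketch with soft correspondences.

    src, dst : (n, 2) and (m, 2) point sets. Returns a rigid transform
    (R, t) roughly aligning src to dst.
    """
    R, t = np.eye(2), np.zeros(2)
    for _ in range(n_iters):
        x = src @ R.T + t
        # E-step: Gaussian soft assignment of transformed src points to dst
        d2 = ((x[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * sigma2))
        w /= w.sum(axis=1, keepdims=True) + 1e-12
        y = w @ dst                              # virtual matched points
        # M-step: Procrustes solution for the rigid transform
        mu_s, mu_y = src.mean(0), y.mean(0)
        H = (src - mu_s).T @ (y - mu_y)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = mu_y - R @ mu_s
        sigma2 = max(0.5 * sigma2, 1e-6)         # simple annealing schedule
    return R, t
```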
This article investigates a new method of motion estimation based on a block matching criterion, through the modeling of image blocks by mixtures of two and three Gaussian distributions. The mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the expectation-maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most similar one within a search window in the reference image is measured by minimizing an extended Mahalanobis distance between the clusters of the mixtures. Experiments performed on real image sequences gave good results, with PSNR gains reaching 3 dB.
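A minimal sketch of this pipeline: fit a per-block Gaussian mixture via EM, then compare blocks through a distance between cluster parameters. The `extended_mahalanobis` definition below is one plausible reading of the article's distance (component-wise Mahalanobis under pooled variances, weighted by mixing weights); the article's exact definition may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def block_features(block, K=2):
    """Fit a K-component Gaussian mixture to a block's pixel values via EM."""
    gm = GaussianMixture(n_components=K, random_state=0)
    gm.fit(block.reshape(-1, 1))
    order = np.argsort(gm.means_.ravel())      # align components by mean
    return (gm.weights_[order],
            gm.means_.ravel()[order],
            gm.covariances_.ravel()[order])

def extended_mahalanobis(f1, f2):
    """Illustrative distance between two fitted mixtures (see lead-in)."""
    w1, m1, v1 = f1
    w2, m2, v2 = f2
    pooled = 0.5 * (v1 + v2)                   # pooled per-component variances
    d = (m1 - m2) ** 2 / pooled                # component-wise Mahalanobis terms
    return float(np.sum(0.5 * (w1 + w2) * d))
```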
In image processing, image segmentation is the process of partitioning a digital image into multiple image segments. Among state-of-the-art methods, Markov random fields (MRF) can be used to model dependencies between pixels and achieve a segmentation by minimising an associated cost function. Currently, finding the optimal set of segments for a given image modelled as an MRF appears to be NP-hard. The authors aim to take advantage of the exponential scalability of quantum computing to speed up the segmentation of synthetic aperture radar images. For that purpose, the authors propose a hybrid quantum-annealing/classical-optimisation expectation-maximisation algorithm to obtain optimal sets of segments. After proposing suitable formulations, the authors discuss the performance and scalability of their approach on the D-Wave quantum computer. The authors also propose a short study of optimal computation parameters to highlight the limits and potential of adiabatic quantum computation for solving large instances of combinatorial optimisation problems.
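Quantum annealers consume problems in QUBO form, so the MRF cost must be mapped onto binary quadratic coefficients. The sketch below builds such a QUBO for a binary (two-segment) MRF with 4-neighbour smoothness, using the identity (x_i - x_j)^2 = x_i + x_j - 2 x_i x_j for binary variables; the unary costs would come from an EM-estimated intensity model. This is a generic formulation under those assumptions, not necessarily the authors' exact one.

```python
import numpy as np

def mrf_qubo(unary, lam=1.0):
    """Build a QUBO dict for binary MRF segmentation of an image.

    unary : (H, W) data cost of assigning label 1 vs. label 0 per pixel
            (e.g. a difference of negative log-likelihoods).
    lam   : smoothness weight for the 4-neighbour Potts-like term.
    """
    H, W = unary.shape
    idx = lambda r, c: r * W + c
    Q = {}
    for r in range(H):
        for c in range(W):
            i = idx(r, c)
            Q[(i, i)] = Q.get((i, i), 0.0) + unary[r, c]
            for dr, dc in ((0, 1), (1, 0)):    # right and down neighbours
                if r + dr < H and c + dc < W:
                    j = idx(r + dr, c + dc)
                    Q[(i, i)] = Q.get((i, i), 0.0) + lam
                    Q[(j, j)] = Q.get((j, j), 0.0) + lam
                    Q[(i, j)] = Q.get((i, j), 0.0) - 2.0 * lam
    return Q

# the QUBO dict can then be handed to a D-Wave Ocean sampler, e.g.
# dimod.ExactSolver().sample_qubo(Q) for tiny instances.
```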
Capture-recapture methods are used to estimate the prevalence of diseases in the field of epidemiology. The information used for estimation is available from multiple lists, thereby giving rise to the problems of list dependence and heterogeneity. In this paper, modelling focuses on the heterogeneity part. We present a new binomial latent class model which takes into account both the observed and unobserved heterogeneity within capture-recapture data. We adopt the conditional likelihood approach and perform estimation via the EM algorithm. We also derive the mathematical expressions for computing the standard error of the unknown population size. An application to data on diabetes patients in a town in northern Italy is discussed.
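A minimal sketch of EM estimation for a binomial latent class capture-recapture model, assuming y_i counts the lists on which individual i appears out of T lists. Unobserved individuals (count 0) are handled by data augmentation, and the population size follows from the estimated probability of being missed; this is a generic formulation, not the paper's exact conditional-likelihood derivation.

```python
import numpy as np
from scipy.stats import binom

def capture_recapture_em(y, T, K=2, n_iters=200):
    """EM sketch for a K-class binomial capture-recapture model.

    y : capture counts (each in 1..T) of the n observed individuals;
    T : number of capture occasions (lists).
    Returns (N_hat, mixing weights, per-class capture probabilities).
    """
    y = np.asarray(y)
    n = len(y)
    w = np.full(K, 1.0 / K)
    p = np.linspace(0.2, 0.6, K)              # initial capture probabilities
    for _ in range(n_iters):
        # E-step over observed individuals
        like = w * binom.pmf(y[:, None], T, p)        # (n, K)
        r = like / like.sum(axis=1, keepdims=True)
        # expected unobserved individuals (capture count 0) per class
        p0k = w * (1.0 - p) ** T
        p0 = p0k.sum()
        m = n * p0 / (1.0 - p0)               # expected number missed
        mk = m * p0k / p0
        # M-step over the augmented data
        nk = r.sum(axis=0) + mk
        w = nk / (n + m)
        p = (r * y[:, None]).sum(axis=0) / (T * nk)
    N_hat = n / (1.0 - (w * (1.0 - p) ** T).sum())    # Horvitz-Thompson style
    return N_hat, w, p
```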
Methods: To test this hypothesis, the authors compared the accuracy, and the variation in accuracy, of organ activity estimates obtained from planar and SPECT scans at various count levels. A simulated phantom population with realistic variations in anatomy and biodistribution was used to model variability in a patient population. Planar and SPECT projections were simulated using previously validated Monte Carlo simulation tools. The authors simulated the projections at count levels corresponding approximately to 1.5-30 min of total acquisition time. The projections were processed using previously described quantitative SPECT (QSPECT) and planar (QPlanar) methods. The QSPECT method was based on the OS-EM algorithm with compensation for attenuation, scatter, and collimator-detector response. The QPlanar method is based on the ML-EM algorithm, using the same model-based compensation for all the image-degrading effects as the QSPECT method. The volumes of interest (VOIs) were defined based on the true organ configuration in the phantoms. The errors in organ activity estimates from the different count levels and processing methods were compared in terms of mean and standard deviation over the simulated phantom population. Results: There was little degradation in quantitative reliability when the acquisition time was reduced by half for the QSPECT method (the mean error changed by <1%, e.g., 0.9% − 0.3% = 0.6% for the spleen). The magnitude of the errors, and the variations in the errors, for large organs with high uptake were still acceptable for 1.5 min scans, even though they were slightly larger than those for the 30 min scans (i.e., <2% for liver, <3% for heart). The errors over the range of scan times studied for the QPlanar method were all within 0.3% for all organs. Conclusions: These data indicate that, for the purposes of organ activity estimation, acquisition times could be reduced by at least a factor of 2 for the QSPECT and QPlanar methods with little effect on the errors.
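The QSPECT method's OS-EM algorithm accelerates the plain ML-EM update (sketched earlier) by cycling over subsets of the projection data within each pass. A minimal sketch, again assuming a generic dense system matrix `A` with the compensation models folded in:

```python
import numpy as np

def os_em(A, y, subsets, n_epochs=5):
    """Ordered-subsets EM (OS-EM) reconstruction sketch.

    A : (n_bins, n_voxels) system matrix (attenuation, scatter, and
        collimator-detector response would be modelled inside A);
    y : (n_bins,) measured projection counts;
    subsets : list of index arrays partitioning the projection bins.
    Each sub-iteration applies the ML-EM update on one subset only,
    which is what gives OS-EM its acceleration over plain ML-EM.
    """
    x = np.ones(A.shape[1])
    for _ in range(n_epochs):
        for s in subsets:
            As, ys = A[s], y[s]
            proj = As @ x                          # forward project the subset
            ratio = ys / np.maximum(proj, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x
```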