Consider a scenario where a distributed sparse signal is acquired by various sensors that each see a different version of it. We thus have a set of sparse signals with some common parts and some variations. The question is how to acquire such signals and how to reconstruct them perfectly (noiseless case) or approximately (noisy case). We propose an extension of the annihilating filter method [3] to this distributed scenario. We model the inter-relation between the sparse signals by introducing three joint sparsity models. For each model, we propose sensing and reconstruction algorithms that reduce the number of measurements below the limit for the single-sensor scenario, resulting in power and bandwidth savings in the system. In the noiseless scenario, we come close to the minimum number of measurements needed for perfect reconstruction, while by taking more measurements we introduce redundancy that effectively mitigates noise. Simulation results confirm the applicability of the approach.
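For context, the core annihilating-filter step that this paper extends can be sketched for a single sensor. The following is a minimal noiseless sketch, assuming 2K+1 Fourier coefficients of a tau-periodic stream of K Diracs; the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def annihilating_filter_recovery(X, K, tau):
    """Recover locations and amplitudes of K Diracs from Fourier
    coefficients X[m], m = -K..K, of a tau-periodic Dirac stream."""
    M = len(X)                                  # expect M = 2K + 1
    # Each row enforces the annihilation equation sum_l h[l] * X[m - l] = 0
    T = np.array([[X[m + K - l] for l in range(K + 1)]
                  for m in range(M - K)])
    # The filter h spans the null space: last right-singular vector of T
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1]
    # Roots of h are u_k = exp(-2j*pi*t_k/tau), encoding the locations
    t = np.sort(np.mod(-np.angle(np.roots(h)) * tau / (2 * np.pi), tau))
    # Amplitudes follow from a Vandermonde least-squares fit to X
    m = np.arange(M) - K
    V = np.exp(-2j * np.pi * np.outer(m, t) / tau)
    a = np.linalg.lstsq(V, np.asarray(X), rcond=None)[0].real
    return t, a
```

The distributed models in the paper then share parts of this estimation across sensors, which is what allows each sensor to take fewer than 2K+1 measurements.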
This study compared single-orbit cone-beam SPECT image quality reconstructed with the Feldkamp algorithm, the Grangeat algorithm, and the nonstationary filtering (NSF) algorithm that the authors previously developed. Acquired data from two physical phantoms, a 3D Defrise slab phantom and a 3D Hoffman brain phantom, were processed with the three algorithms, and the results were compared in terms of image artifacts, noise, and processing time. On the slab phantom data, the three algorithms produced similar artifacts related to the single-orbit cone-beam geometry. On the brain phantom data, the Grangeat algorithm produced additional artifacts and distortions, while the NSF algorithm slightly reduced the cone-beam-related artifacts compared to the Feldkamp algorithm. The processing times required by the Grangeat and NSF algorithms were, respectively, 2 and 4 times that of the Feldkamp algorithm. Currently, the Feldkamp algorithm appears to be the best choice in terms of both image quality and processing time for single-orbit cone-beam SPECT with a cone angle of less than 15°.
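For reference, the filtering stage of the Feldkamp (FDK) algorithm, the baseline in this comparison, commonly takes the following form. This is a hedged sketch of the textbook weighting and ramp-filter steps, not the authors' implementation; the detector spacings du, dv and source-to-detector distance D are assumed inputs, and backprojection is omitted.

```python
import numpy as np

def fdk_filter_projection(proj, du, dv, D):
    """proj: one 2D cone-beam projection on a (v, u) detector grid."""
    nv, nu = proj.shape
    u = (np.arange(nu) - (nu - 1) / 2) * du
    v = (np.arange(nv) - (nv - 1) / 2) * dv
    U, V = np.meshgrid(u, v)
    # Cosine pre-weighting compensates for oblique ray lengths
    weighted = proj * D / np.sqrt(D**2 + U**2 + V**2)
    # Ramp (Ram-Lak) filter applied independently to each detector row
    ramp = np.abs(np.fft.fftfreq(nu, d=du))
    return np.real(np.fft.ifft(np.fft.fft(weighted, axis=1) * ramp, axis=1))
```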
We quantitatively compare filtered backprojection (FBP), expectation maximization (EM), and Bayesian reconstruction algorithms as applied to the IndyPET scanner, a small- to intermediate-field-of-view dedicated research scanner. A key feature of our investigation is the use of an empirical system kernel determined from scans of line-source phantoms. This kernel is incorporated into the forward operator of the EM and Bayesian reconstruction algorithms. Our results indicate that, particularly when an accurate system kernel is used, Bayesian methods can significantly improve reconstruction quality over FBP and EM.
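To illustrate how a measured kernel enters the EM forward model, here is a minimal MLEM sketch in which the system matrix A is assumed to already fold in the empirical kernel. This is the generic textbook iteration, not the IndyPET implementation.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """A: (n_bins, n_voxels) system matrix with the empirical kernel
    folded in; y: measured sinogram counts (nonnegative)."""
    x = np.ones(A.shape[1])             # flat initial image
    sens = A.T @ np.ones(A.shape[0])    # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                                # forward projection
        ratio = y / np.maximum(proj, 1e-12)         # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x
```

A more accurate A (here, one built from the line-source measurements) directly changes both the forward projection and the backprojected ratio, which is why the kernel choice matters so much for EM and Bayesian methods.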
ISBN: 9781665421133 (digital), 9781665421140 (print)
This work presents an experimental characterization of position-sensitivity accuracy when machine learning algorithms are used to reconstruct γ-ray interaction points in Anger cameras for medical imaging. The aim is to demonstrate state-of-the-art position sensitivity using an embeddable, fast Decision Tree classifier combined with Principal Component Analysis. The training-dataset acquisition procedure is discussed, together with the clustering techniques implemented to automate data labeling. Because adjacent classes with sub-mm spacing cannot be distinguished in the training set when a regular parallel-hole collimator is used, we implemented two solutions for the training procedure: a custom mechanical structure that scans the module used in the experimental characterization, and data augmentation that generates new fractions of the training dataset from perturbations of the experimental dataset. Preliminary results were obtained with the INSERT gamma camera, in which a 50×100×8 mm³ CsI(Tl) scintillation crystal is coupled to 8 tiles of RGB-HD SiPMs, and the charge signal is collected by two 36-channel ANGUS ASICs. The energy resolution is 17% at 140 keV, and the module can acquire events at rates up to ~10 kcps. The count rate is limited only by typical detector performance (e.g., front-end count-rate capability, data-transmission bandwidth); in fact, implementing the Decision Tree classifier in an FPGA (free of data pre-processing) would allow edge computing of the position estimate with sub-µs data latency. Experimental results show 1.2 mm FWHM spatial accuracy, equal to the accuracy obtained with the same module using statistical position-reconstruction algorithms such as maximum likelihood estimation.
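A minimal version of the described PCA-plus-Decision-Tree pipeline could look as follows, assuming scikit-learn; the event data here are random placeholders with plausible shapes, and the component count and tree depth are illustrative guesses, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: one row per event, one column per SiPM channel
# (2 x 36 channels, matching the two ANGUS ASICs); labels are class ids
# of discretized interaction positions on the crystal.
rng = np.random.default_rng(0)
X = rng.poisson(50, size=(5000, 72)).astype(float)
y = rng.integers(0, 100, size=5000)

model = make_pipeline(PCA(n_components=10),
                      DecisionTreeClassifier(max_depth=12))
model.fit(X, y)
pred = model.predict(X[:5])   # predicted position classes for new events
```

Because a fitted decision tree reduces at inference time to a short chain of threshold comparisons, it maps naturally onto FPGA logic, which is what enables the sub-µs latency claimed above.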
This paper presents new helical weighting algorithms associated with the development of an eight-row multislice CT scanner. The five related methods underlying the reconstruction algorithms are described. For an eight-row scanner, the useful helical pitch range is 0 to 16. Pitches can be classified into "high-quality" modes, where at least two measurements are acquired for every line integral (pitch 0 to 8), and "high-speed" modes, where at least one measurement of every relevant line integral is acquired (pitch 8 to 16). Helical weighting methods may also rely exclusively on row-to-row interpolation, which improves temporal resolution at the expense of the slice sensitivity profile, or use conjugate-ray interpolation for improved z-resolution. It is first shown that, for multislice systems in high-speed modes, or multislice systems with a large number of rows, the sampling varies considerably within the slice. This variable z-sampling carries a specific risk of numerical instability that requires regularization, and two methods of handling these discontinuities are introduced. Further, an eight-slice system is sensitive to cone-beam artifacts. Four specific methods that significantly reduce these image artifacts, without requiring a cone-beam backprojector, are described and discussed. Their effectiveness compares favorably with the traditional Feldkamp method as adapted to helical third-generation CT systems.
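To make the conjugate-ray idea concrete: in fan-beam geometry, a line integral measured at view angle β and fan angle γ is measured again at β + π + 2γ with fan angle −γ; in a helix, the two samples lie at different table positions z and are blended by z-interpolation. A hedged sketch of the geometry and the simplest (linear) weights follows; it is purely illustrative, and practical weighting functions are smoothed near transitions, which is likely related to the regularization the abstract mentions.

```python
import numpy as np

def conjugate_view(beta, gamma):
    """View and fan angle of the conjugate (complementary) ray."""
    return np.mod(beta + np.pi + 2 * gamma, 2 * np.pi), -gamma

def linear_z_weights(z_slice, z_a, z_b):
    """Linear weights for two samples of one line integral at z_a, z_b."""
    t = np.clip((z_slice - z_a) / (z_b - z_a), 0.0, 1.0)
    return 1.0 - t, t   # weight for the sample at z_a, then at z_b
```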
The effects of the application, and implementation, of various reconstruction algorithms on 2D and 3D PET data were studied. Results were compared based on the performance of each method under the NEMA performance tests for resolution, scatter-correction accuracy, and uniformity. An independent measurement of the coefficient of variation (CV) for a uniform cylinder was also made. The resulting data were reconstructed using each of the ECAT 7.1, STIR, and ECAT 7.2 software packages. Two implementations of the STIR software were investigated: STIR-INT1, which backprojects the original sinograms, and STIR-INT2, which first interpolates the sinograms to have equal bin and voxel sizes. The behaviour of the spatial resolution and CV as the pixel size varies was attributed both to the backprojector implementation and to the reconstruction algorithm (FORE vs. 3DRP). Understanding the reconstruction method is important for understanding image quality and for selecting a pixel size for clinical use.
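The coefficient of variation reported for the uniform cylinder is the standard ratio of standard deviation to mean over a region of interest; a trivial sketch (the ROI mask is assumed to be supplied by the evaluator):

```python
import numpy as np

def coefficient_of_variation(image, roi_mask):
    """CV = sigma / mu over the voxels selected by a boolean ROI mask."""
    vals = image[roi_mask]
    return vals.std() / vals.mean()
```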
To reduce the radiation dose delivered to patients, a number of novel computed tomography (CT) reconstruction algorithms have been proposed to recover images from sparsely sampled datasets or from low-dose exposures. However, the performance of these algorithms has not been quantitatively evaluated on realistic CT datasets in an easily reproducible fashion. Here, we present four CT phantom datasets acquired with our bench-top micro-CT system. These datasets can provide a baseline for comparing CT reconstruction algorithms in terms of noise level, contrast-to-noise ratio (CNR), uniformity, spatial resolution, and CT number accuracy.
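As a reference point for one of the comparison metrics, a commonly used contrast-to-noise ratio definition is sketched below; the exact ROI convention used with these datasets may differ.

```python
import numpy as np

def cnr(image, roi_signal, roi_background):
    """CNR = |mean(signal ROI) - mean(background ROI)| / std(background ROI),
    with boolean masks selecting the two regions."""
    s = image[roi_signal]
    b = image[roi_background]
    return abs(s.mean() - b.mean()) / b.std()
```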
Katsevich proposed a theoretically exact reconstruction algorithm for helical cone-beam CT. Even for large cone angles, it performs very well on static objects, but because it is restricted to the PI illumination window, it produces strong artefacts in the presence of motion. We propose a method that significantly reduces these spreading motion artefacts by adding a correction image computed from a small portion of the voxel-dependent over-scan. Results of a simulation study with a 128-row scanner are shown.