Digital tomosynthesis (DTS) has been proposed as a fast low-dose imaging technique for image-guided radiation therapy (IGRT). However, due to the limited scanning angle, DTS reconstructed by the conventional FDK method suffers from significant distortions and poor plane-to-plane resolution without full volumetric information, which severely limits its capability for image guidance. Although existing deep learning-based methods have demonstrated the feasibility of restoring volumetric information in DTS, they ignored inter-patient variability by training a single model on grouped patient data. Consequently, the restored images still suffered from blurred and inaccurate edges. In this study, we present a DTS enhancement method based on a patient-specific deep learning model to recover the volumetric information in DTS images. The main idea is to use patient-specific prior knowledge to train the model to learn the patient-specific correlation between DTS and the ground-truth volumetric images. To validate the performance of the proposed method, we used both simulated and real on-board projections from lung cancer patient data. Results demonstrated the benefits of the proposed method: (1) qualitatively, DTS enhanced by the proposed method shows CT-like high image quality with accurate and clear edges; (2) quantitatively, the enhanced DTS has low intensity errors and high structural similarity with respect to the ground-truth CT images; (3) in the tumor localization study, compared to the ground-truth CT-CBCT registration, the enhanced DTS shows 3D localization errors of <= 0.7 mm and <= 1.6 mm for studies using simulated and real projections, respectively; and (4) the DTS enhancement is nearly real-time. Overall, the proposed method is effective and efficient in enhancing DTS to make it a valuable tool for IGRT applications.
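The quantitative evaluation described above, intensity error and structural similarity between enhanced DTS and ground-truth CT, can be sketched with a minimal, dependency-light computation. The volume shapes, noise level, and the single-window (global) SSIM variant below are illustrative choices for this sketch, not details taken from the study.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window (global) SSIM between two equal-shape volumes."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
ct = rng.random((32, 32, 32))                          # stand-in ground-truth CT volume
enhanced = ct + 0.01 * rng.standard_normal(ct.shape)   # stand-in enhanced-DTS volume

mae = np.abs(enhanced - ct).mean()   # mean absolute intensity error
ssim = global_ssim(enhanced, ct)     # close to 1 when volumes agree
```

In practice, a locally windowed SSIM (e.g., scikit-image's `structural_similarity`) is the standard choice; the global form above merely keeps the sketch self-contained.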
The interpretation and analysis of magnetic resonance imaging (MRI) benefit from high spatial resolution. Unfortunately, direct acquisition of high spatial resolution MRI is time-consuming and costly, increases the potential for motion artifacts, and suffers from reduced signal-to-noise ratio (SNR). Super-resolution reconstruction (SRR) is one of the most widely used methods in MRI since it allows for a trade-off between high spatial resolution, high SNR, and reduced scan time. Deep learning has emerged as an improvement over conventional SRR methods. However, current deep learning-based SRR methods require large-scale training datasets of high-resolution images, which are practically difficult to obtain at a suitable SNR. We sought to develop a methodology that allows for dataset-free deep learning-based SRR, through which to construct images with higher spatial resolution and higher SNR than can be practically obtained by direct Fourier encoding. We developed a dataset-free learning method that leverages a generative neural network trained for each specific scan or set of scans, which in turn allows for SRR tailored to the individual patient. With SRR from three short-duration scans, we achieved high-quality brain MRI at an isotropic spatial resolution of 0.125 cubic mm with six minutes of imaging time for T2 contrast and an average increase of 7.2 dB (34.2%) in SNR over these short-duration scans. Motion compensation was achieved by aligning the three short-duration scans together. We assessed our technique on simulated MRI data and clinical data acquired from 15 subjects. Extensive experimental results demonstrate that our approach achieved superior results to state-of-the-art methods, while in parallel performing at reduced cost compared with direct high-resolution acquisition.
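The SNR benefit of combining several short-duration scans can be illustrated with a toy averaging experiment: averaging N independent acquisitions of the same object improves SNR by a factor of sqrt(N), about 4.8 dB for N = 3. This is only the classical motion-free baseline, not the learned SRR of the study, which additionally exploits image structure to reach a larger gain; the signal, noise level, and scan count below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(42)
signal = np.sin(np.linspace(0, 8 * np.pi, 100_000))  # stand-in anatomy profile
noise_std = 0.5

# Three independent short-duration "scans" of the same object.
scans = [signal + noise_std * rng.standard_normal(signal.size) for _ in range(3)]
combined = np.mean(scans, axis=0)  # motion-free case: simple averaging

def snr_db(measured, truth):
    """SNR in dB: signal power over residual-noise power."""
    noise = measured - truth
    return 10 * np.log10(np.mean(truth ** 2) / np.mean(noise ** 2))

gain = snr_db(combined, signal) - snr_db(scans[0], signal)
print(f"SNR gain from averaging 3 scans: {gain:.2f} dB")  # ~ 10*log10(3) ~ 4.77 dB
```

In the study itself, the three scans are first aligned (motion compensation) before being combined, and the per-scan generative network recovers resolution beyond what averaging alone can provide.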
ISBN (print): 9798350329537; 9798350329520
The escalating prevalence of diabetes mellitus presents a pressing global health challenge, with projections indicating a further rise in both type 1 and type 2 diabetes cases. While type 1 diabetes necessitates external insulin administration due to pancreatic insufficiency, type 2 diabetes may also require insulin supplementation, albeit in lesser amounts. Traditional insulin delivery methods, involving multiple daily injections, can be burdensome for patients, particularly those with type 1 diabetes, which requires higher insulin doses. To mitigate these challenges, researchers have developed artificial pancreas systems, a closed-loop approach capable of automatically measuring blood glucose levels and administering insulin. These systems integrate insulin pumps with continuous glucose monitors (CGMs) to maintain glycemic control. However, the effectiveness of these systems heavily depends on the control algorithm employed. While conventional control engineering algorithms have been predominant, recent advancements in artificial intelligence (AI) have led to the exploration of AI-based controllers, particularly those leveraging reinforcement learning (RL). RL-based algorithms, which learn from interaction with the environment rather than from labeled data, demonstrate promise in adapting to individual patient dynamics. This review investigates the utilization of various mathematical models, such as the UVA/PADOVA, Bergman minimal, and Hovorka models, in conjunction with RL-based algorithms in diabetes management. By examining existing research findings, we elucidate the efficacy of RL-based approaches in patient-specific learning and offer insights into future directions for optimizing diabetes management strategies.
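The closed-loop setup described above can be sketched with a discretized Bergman minimal model serving as the simulation environment. In this sketch a simple proportional dosing rule stands in for the RL agent, and all parameter values are illustrative placeholders, not fitted patient parameters from any of the cited models.

```python
# Illustrative (not patient-fitted) Bergman minimal-model parameters.
p1, p2, p3 = 0.02, 0.03, 1e-5   # glucose decay / insulin-action rates (1/min)
n, V = 0.1, 12.0                # insulin clearance (1/min), distribution volume (dL)
Gb, Ib = 80.0, 7.0              # basal glucose (mg/dL) and insulin (uU/mL)

def policy(G):
    """Placeholder proportional controller; an RL agent would replace this."""
    return max(0.0, 0.1 * (G - Gb))  # insulin infusion rate (uU/min)

G, X, I = 180.0, 0.0, Ib        # elevated post-meal initial state
dt = 1.0                        # forward-Euler step (min)
trace = []
for _ in range(300):
    u = policy(G)
    dG = -p1 * (G - Gb) - X * G          # glucose dynamics
    dX = -p2 * X + p3 * (I - Ib)         # remote insulin action
    dI = -n * (I - Ib) + u / V           # plasma insulin with infusion
    G, X, I = G + dt * dG, X + dt * dX, I + dt * dI
    trace.append(G)
```

In an RL formulation, `policy` would map the CGM-observed state to an infusion action and be trained against a reward that penalizes both hyper- and hypoglycemia, which is exactly where the patient-specific adaptation discussed in the review enters.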