With the development of synthetic aperture radar (SAR) technology, the resolution of SAR images has improved from tens of meters to centimeters (currently the highest). However, increasing image resolution from centimeters to millimeters poses a significant challenge to the hardware and signal processing algorithms of traditional radar systems. Fortunately, microwave-photonic-assisted SAR systems, which have emerged in recent years, are expected to remove this bottleneck: such a system is theoretically capable of achieving a signal bandwidth of 10 GHz or even tens of GHz. The rapid increase in resolution, however, also brings new problems and challenges, for example: i) how to address the adverse effects of system instability under ultra-large signal bandwidth; ii) how to improve imaging efficiency given the sharp increase in data volume; and iii) how to handle the more complex signal characteristics encountered in ultra-high-resolution imaging. Although microwave photonic SAR systems and their data processing techniques are not yet fully developed, they have great potential and constitute a highly active, cutting-edge research field. In this paper, we provide a comprehensive review, summary, and outlook of research on microwave photonic SAR from three aspects: systems, experiments, and imaging processing. The purpose of this study is to provide guidance and references for researchers and to promote the further development of microwave photonic SAR technology.
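For context on the bandwidths quoted above, the ideal slant-range resolution of a pulse-compressed radar scales inversely with signal bandwidth, roughly rho = c / (2B). The short sketch below (ours, not from the paper) evaluates this relation for a few bandwidths to show why roughly 10 GHz corresponds to centimeter-level resolution while millimeter-level resolution would require substantially larger bandwidths.

```python
# Back-of-envelope illustration of the pulse-compression relation rho = c/(2B);
# not from the reviewed paper, just to relate bandwidth to range resolution.
C = 299_792_458.0  # speed of light (m/s)

def range_resolution(bandwidth_hz: float) -> float:
    """Ideal slant-range resolution for a chirp of the given bandwidth."""
    return C / (2.0 * bandwidth_hz)

for b in (300e6, 1e9, 10e9, 40e9):
    print(f"B = {b/1e9:5.1f} GHz  ->  resolution ~ {range_resolution(b)*100:.2f} cm")
```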
Obtaining quantitative geometry of the anterior segment of the eye, generally from optical coherence tomography (OCT) images, is important to construct 3D computer eye models, which are used to understand the optical quality of the normal and pathological eye and to improve treatment (for example, selecting the intraocular lens to be implanted in cataract surgery or guiding refractive surgery). An important step in quantifying OCT images is segmentation (i.e., finding and labeling the surfaces of interest in the images), which, for the purpose of feeding optical models, needs to be automatic, accurate, robust, and fast. In this work, we designed a segmentation algorithm based on deep learning, which we applied to OCT images from pre- and post-cataract surgery eyes obtained using commercial anterior segment OCT systems. We proposed a feature pyramid network architecture with a pre-trained encoder and trained, validated, and tested the algorithm using 1640 OCT images. We showed that the proposed method outperformed a classical image-processing-based approach in terms of accuracy (from 91.4% to 93.2%), robustness (decreasing the standard deviation of accuracy across images by a factor of 1.7), and processing time (from 0.48 to 0.34 s/image). We also described a method for constructing and quantifying 3D models from the segmented images and applied the proposed segmentation/quantification algorithms to quantify 136 new eye measurements (780 images) obtained with commercial OCT systems. (c) 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
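As a rough illustration of the kind of architecture described (a feature pyramid network on top of a pre-trained encoder), the sketch below assembles one from standard torchvision components. The choice of ResNet-18, the number of classes, and all layer sizes are our assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code) of a feature-pyramid segmentation
# network with a pre-trained encoder; requires a recent torchvision (>= 0.13).
from collections import OrderedDict
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights
from torchvision.models.feature_extraction import create_feature_extractor
from torchvision.ops import FeaturePyramidNetwork

class FPNSegmenter(nn.Module):
    def __init__(self, num_classes: int = 4):  # class count is an assumption
        super().__init__()
        backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
        self.encoder = create_feature_extractor(
            backbone, return_nodes={"layer1": "c2", "layer2": "c3",
                                    "layer3": "c4", "layer4": "c5"})
        self.fpn = FeaturePyramidNetwork([64, 128, 256, 512], out_channels=128)
        self.head = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, x):
        feats = OrderedDict(self.encoder(x))   # multi-scale encoder features
        pyramid = self.fpn(feats)              # laterally fused feature pyramid
        logits = self.head(pyramid["c2"])      # predict on the finest level
        # upsample to input resolution for per-pixel labels
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)

model = FPNSegmenter()
out = model(torch.randn(1, 3, 256, 256))       # -> (1, num_classes, 256, 256)
```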
The image-to-image translation (I2IT) task aims to transform images from a source domain into a specified target domain. State-of-the-art CycleGAN-based translation algorithms typically use a cycle consistency loss and a latent regression loss to constrain the translation. In this work, it is demonstrated that model parameters constrained by the cycle consistency loss and the latent regression loss are equivalent to optimizing the medians of the data distribution and the generative distribution. In addition, there is a style bias in the translation. This bias interacts between the generator and the style encoder and visually manifests as translation errors, e.g., the style of the generated image does not match the style of the reference image. To address these issues, a new I2IT model termed high-quality I2IT (HQ-I2IT) is proposed. The optimization scheme is redesigned to prevent the model from optimizing the median of the data distribution. In addition, by separating the optimization of the generator and the latent code estimator, the redesigned model avoids these error interactions and gradually corrects errors during training, thereby avoiding learning the median of the generated distribution. The experimental results demonstrate that the visual quality of the images produced by HQ-I2IT is significantly improved without changing the generator structure, especially when guided by reference images. Specifically, the Fréchet inception distance is reduced from 19.8 to 10.2 on AFHQ and from 23.8 to 17.0 on CelebA-HQ. In summary, this work demonstrates that the cycle consistency loss and latent regression loss in CycleGAN-based image translation models can be detrimental to image quality; the optimization scheme of such systems is redesigned and a new translation model named HQ-I2IT is proposed. Experiments demonstrate that the proposed method can significantly improve image quality and translation performance.
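For reference, the two constraints discussed above are commonly written as the reconstruction penalties sketched below in a generic CycleGAN/StarGAN-v2 style. The generator G, style encoder E, and loss weighting are placeholders, not HQ-I2IT's actual modules or training code.

```python
# Generic formulations of the two losses the abstract analyzes (illustrative).
import torch.nn.functional as F

def cycle_consistency_loss(G, x_src, s_src, s_tgt):
    """L1 penalty for translating to the target style and back to the source."""
    x_fake = G(x_src, s_tgt)    # source image rendered with the target style
    x_back = G(x_fake, s_src)   # translated back using the source style
    return F.l1_loss(x_back, x_src)

def latent_regression_loss(G, E, x_src, s_tgt):
    """The style encoder E should recover the style code that G was given."""
    x_fake = G(x_src, s_tgt)
    return F.l1_loss(E(x_fake), s_tgt)
```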
Deep learning (DL)-based systems have emerged as powerful methods for the diagnosis and treatment of plant stress, offering high accuracy and efficiency in analyzing imagery data. This review paper aims to present a thorough overview of state-of-the-art DL technologies for plant stress detection. For this purpose, a systematic literature review was conducted to identify relevant articles highlighting the technologies and approaches currently employed in developing DL-based plant stress detection systems, specifically the advancement of image-based data collection systems, image preprocessing techniques, and deep learning algorithms and their applications in plant stress classification, disease detection, and segmentation tasks. Additionally, this review emphasizes the challenges and future directions in collecting and preprocessing image data, model development, and deployment in real-world agricultural settings. Some of the key findings from this review are as follows. Training data: (i) most plant stress detection models have been trained on Red Green Blue (RGB) images; (ii) data augmentation can increase both the quantity and variation of training data; (iii) handling multimodal inputs (e.g., image, temperature, humidity) allows the model to leverage information from diverse sources, which can improve prediction accuracy. Model design and efficiency: (i) self-supervised learning (SSL) and few-shot learning (FSL)-based methods may be better than transfer learning (TL)-based models for classifying plant stress when labeled training images are scarce; (ii) custom-designed DL architectures for a specific stress and plant type can outperform state-of-the-art DL architectures in terms of efficiency, overfitting, and accuracy; (iii) the multi-task learning DL structure reuses most of the network architecture while performing multiple tasks (e.g., estimating stress type and severity) simultaneously, which makes the learning much
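As a concrete example of the data-augmentation point above, a typical augmentation pipeline for RGB plant images might look like the following torchvision sketch; the specific transforms and parameters are illustrative assumptions, not recommendations from the reviewed studies.

```python
# Illustrative augmentation pipeline increasing the quantity and variation of
# RGB training images (parameters chosen arbitrarily for the example).
from torchvision import transforms

train_augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),   # vary framing/scale
    transforms.RandomHorizontalFlip(),                      # mirror-symmetric leaves
    transforms.ColorJitter(brightness=0.2, contrast=0.2,
                           saturation=0.2, hue=0.05),       # field lighting changes
    transforms.RandomRotation(15),                           # camera/plant orientation
    transforms.ToTensor(),
])
```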
Artificial Intelligence (AI) refers to the ability to learn, remember, predict, and make optimal judgments based on Computer-assisted Design (CAD) systems. Traditional CAD algorithms and methods for head CT scans have focused on the automatic recognition, segmentation, and classification of abnormalities. However, these approaches encountered several limitations, such as (i) small dataset sizes, (ii) negative transfer learning, and (iii) improper localization. This paper proposes a new dense-layered deep neural model to classify brain hemorrhages from head CT scans. The proposed model is a ten-layered network with dense blocks and skip connections. It uses cross-chained connections between dense blocks to minimize gradient loss during training. The last layer of the model is extended with Grad-CAM to localize the affected cell regions. The model performance is evaluated on a dataset of head CT scans of size 427.25 GB, partitioned into 752,800 training images and 121,232 testing images. The experiments achieved an accuracy of 98.32% with a mean logarithmic loss of 0.06487. The average classification accuracy of the proposed model across multiple hemorrhage classes is 98.27%. The results are satisfactory, with a best AUC-ROC of 98.32%. A comparative analysis with other traditional deep neural networks demonstrates the efficacy of the model, with an accuracy improvement of 1.3% over other methods.
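The following is a minimal sketch of a dense block with concatenation-based skip connections of the kind the abstract describes, where each layer receives all previously produced feature maps and gradients can flow through short paths. The layer count, growth rate, and layout are assumptions and do not reproduce the paper's ten-layer network.

```python
# Illustrative dense block (assumed structure, not the paper's exact model).
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth: int = 16, layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_channels
        for _ in range(layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, kernel_size=3, padding=1, bias=False)))
            ch += growth  # the next layer sees all features produced so far

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))  # dense skip connections
        return torch.cat(feats, dim=1)

block = DenseBlock(32)
print(block(torch.randn(1, 32, 64, 64)).shape)  # (1, 32 + 4*16, 64, 64)
```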
Responses in colorimetric sensor arrays are based on monitoring the color changes of chemical transducers with multichannel spectrophotometers or imaging recorders such as scanners, digital cameras, and smartphones. The images are then digitized using different image analysis algorithms. Array-based methods provide multidimensional data containing analytical information as well as noise and outliers. Inconsistent data can be removed or reduced by data pre-processing methods. Depending on the goal of the research, the refined data can be processed by pattern recognition methods for qualitative analysis or by multivariate regression models for quantitative analysis. This article (i) explains the mechanism of data collection using different types of readers, (ii) introduces the types and applications of algorithms and software for image digitization, (iii) presents information about data preprocessing, variable selection, and multivariate statistical methods, and (iv) evaluates their applications in the processing of colorimetric sensor array data published during the last decade.
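A typical digitization-plus-pattern-recognition step of the kind the review surveys can be sketched as follows: average the RGB values of each sensor spot before and after exposure, form the color-difference vector, and pass it to a multivariate method. The spot-extraction scheme and the use of PCA from scikit-learn are illustrative assumptions, not a prescription from the article.

```python
# Hedged example of colorimetric array image digitization and analysis.
import numpy as np
from sklearn.decomposition import PCA

def spot_rgb(image: np.ndarray, cy: int, cx: int, r: int) -> np.ndarray:
    """Mean R, G, B inside a square patch around one sensor spot (H x W x 3 image)."""
    patch = image[cy - r:cy + r, cx - r:cx + r, :]
    return patch.reshape(-1, 3).mean(axis=0)

def difference_map(before: np.ndarray, after: np.ndarray, spots, r: int = 5):
    """Concatenated (dR, dG, dB) over all spots -> one feature vector per sample."""
    return np.concatenate([spot_rgb(after, cy, cx, r) - spot_rgb(before, cy, cx, r)
                           for cy, cx in spots])

# e.g. stack the difference maps of many samples and inspect their clustering:
# X = np.stack([difference_map(b, a, spots) for b, a in measurements])
# scores = PCA(n_components=2).fit_transform(X)
```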
Context. Exoplanet detection and characterization via direct imaging require high contrast and high angular resolution. These requirements are typically pursued by combining (i) cutting-edge instrumental facilities equipped with extreme adaptive optics and coronagraphic systems, (ii) optimized differential imaging to introduce a diversity between the signals of the sought-for objects and that of the star, and (iii) dedicated (post-)processing algorithms to further eliminate the residual stellar light. With respect to the third technique, substantial efforts have been undertaken over the last decade on the design of more efficient post-processing algorithms. The whole data collection and retrieval process currently allows the detection of massive exoplanets at angular separations greater than a few tenths of au. The performance remains upper-bounded at shorter angular separations due to the lack of diversity induced by processing each epoch of observations individually. We aim to propose a new algorithm able to combine several observations of the same star by accounting for the Keplerian orbital motion of the sought-for exoplanets across epochs in order to constructively co-add their weak signals. The proposed algorithm, PACOME, integrates an exploration of the plausible orbits of the sought-for objects within an end-to-end statistical detection and estimation formalism. The latter is extended to a multi-epoch combination of the maximum likelihood framework of PACO, a post-processing algorithm for single-epoch observations. From this, we derived a reliable multi-epoch detection criterion, interpretable both in terms of probability of detection and of false alarm. In addition, PACOME is able to produce a few plausible estimates of the orbital elements of the detected sources and provide their local error bars. We tested the proposed algorithm on several datasets obtained from the VLT/SPHERE instrument with IRDIS and IFS usin
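Schematically, the multi-epoch combination can be thought of as co-adding single-epoch detection scores along the positions that a candidate orbit predicts at each epoch, then maximizing over orbital elements and comparing against a false-alarm-controlled threshold. The toy sketch below conveys that idea only; the map format and orbit-to-position model are placeholders, not PACOME's implementation.

```python
# Toy illustration of co-adding per-epoch detection scores along one candidate orbit.
def multi_epoch_score(snr_maps, positions):
    """snr_maps  : list of 2-D arrays, one per epoch (single-epoch detection maps)
    positions : list of (row, col) pixel positions predicted by the candidate orbit."""
    return sum(m[int(r), int(c)] for m, (r, c) in zip(snr_maps, positions))

# The detection criterion is then the maximum of this score over a grid of
# plausible orbital elements, thresholded at the desired false-alarm probability.
```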
Molecular radiotherapy is a rapidly developing field, with new vector and isotope combinations continually added to the market. As with any radiotherapy treatment, it is vital that the absorbed dose and toxicity profile are adequately characterised. Methodologies for absorbed dose calculations for radiopharmaceuticals were generally developed to characterise stochastic effects and were not suited to calculations on a patient-specific basis. There has been substantial scientific and technological development within the field of molecular radiotherapy dosimetry to answer this challenge. The development of imaging systems and advanced processing techniques enables the acquisition of accurate measurements of radioactivity within the body. Activity assessment combined with dosimetric models and radiation transport algorithms makes individualised absorbed dose calculations not only feasible but commonplace in a variety of commercially available software packages. The development of dosimetric parameters beyond the absorbed dose has also made it possible to characterise the effect of irradiation by including biological parameters that account for radiation absorbed dose rates, gradients, and spatial and temporal heterogeneities of the energy distribution. Molecular radiotherapy is at an exciting time in its development, and the application of dosimetry in this field can only have a positive influence on its continued progression. (C) 2020 Published by Elsevier Ltd on behalf of The Royal College of Radiologists.
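One classical formalism behind the absorbed dose calculations referred to above is the MIRD scheme, in which the dose to a target region is the sum over source regions of the time-integrated (cumulated) activity multiplied by the corresponding S value. The toy example below uses placeholder numbers purely for illustration, not clinical data.

```python
# Worked toy example of a MIRD-style absorbed dose calculation (illustrative values).
def absorbed_dose(cumulated_activity_mbq_s, s_values_mgy_per_mbq_s):
    """D(target) = sum over sources of A_tilde(source) * S(target <- source)."""
    return sum(a * s for a, s in zip(cumulated_activity_mbq_s,
                                     s_values_mgy_per_mbq_s))

# two source regions contributing to one target region (placeholder numbers)
print(absorbed_dose([1.2e6, 3.4e5], [2.0e-5, 7.5e-6]), "mGy")
```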
ISBN (digital): 9789464593617
ISBN (print): 9798331519773
We propose a new approach for non-Cartesian magnetic resonance image reconstruction. While unrolled architectures provide robustness via data-consistency layers, embedding measurement operators in Deep Neural Network (DNN) architectures can become impractical at large scale. Alternative Plug-and-Play (PnP) approaches, where the denoising DNNs are blind to the measurement setting, are not affected by this limitation and have also proven effective, but their highly iterative nature also affects scalability. To address this scalability challenge, we leverage the "Residual-to-Residual DNN series for high-Dynamic range imaging (R2D2)" approach recently introduced in astronomical imaging. R2D2's reconstruction is formed as a series of residual images, iteratively estimated as outputs of DNNs taking the previous iteration's image estimate and associated data residual as inputs. The method can be interpreted as a learned version of the Matching Pursuit algorithm. We demonstrate R2D2 in simulation, considering radial k-space sampling acquisition sequences. Our preliminary results suggest that R2D2 achieves: (i) suboptimal performance compared to its unrolled incarnation R2D2-Net, which is however non-scalable due to the necessary embedding of NUFFT-based data-consistency layers; (ii) superior reconstruction quality to a scalable version of R2D2-Net embedding an FFT-based approximation for data consistency; (iii) superior reconstruction quality to PnP, while only requiring few iterations.
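In schematic form, the R2D2 series described above can be sketched as follows: each trained network takes the current image estimate and the back-projected data residual and outputs a residual image that is added to the estimate. The measurement operators and networks below are placeholders standing in for the NUFFT model and the trained DNNs, not the authors' code.

```python
# Schematic of the R2D2 iteration (illustrative, operators/networks are callables).
import numpy as np

def r2d2_reconstruct(y, forward_op, adjoint_op, networks, x0=None):
    """y: measured k-space data; forward_op/adjoint_op: measurement model and its adjoint;
    networks: list of trained DNNs, networks[i](x, r) -> residual image."""
    x = np.zeros_like(adjoint_op(y)) if x0 is None else x0
    for net in networks:                   # one network per iteration of the series
        r = adjoint_op(y - forward_op(x))  # data residual mapped back to image space
        x = x + net(x, r)                  # add the learned residual image
    return x
```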
Tomosynthesis offers an alternative to planar radiography providing pseudo-tomographic information at a much lower radiation dose than CT. The fact that it cannot convey information about the density poses a major lim...