How to reduce radiation dose while preserving image quality comparable to that of standard-dose scans is an important topic in computed tomography (CT) imaging, because the quality of low-dose CT (LDCT) images is often strongly affected by noise and artifacts. Recently, there has been considerable interest in using deep learning as a post-processing step to improve the quality of reconstructed LDCT images. This paper first provides an overview of learning-based LDCT image denoising methods, from early patch-based learning methods to state-of-the-art CNN-based ones, and then presents a novel CNN-based method. In the proposed method, preprocessing and post-processing techniques are integrated into a dilated convolutional neural network to extend receptive fields. Hence, distant pixels in the input images participate in enriching the feature maps of the learned model, leading to effective denoising. Experimental results showed that the proposed method is lightweight, while its denoising effectiveness is competitive with well-known CNN-based models.
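The receptive-field benefit of dilated convolutions that this abstract relies on can be made concrete with a small sketch. This is an illustration of the general mechanism, not the paper's actual network; kernel sizes and dilation rates below are made up for the example.

```python
# Receptive-field growth of stacked dilated convolutions (illustrative):
# for a stack of stride-1 k x k conv layers with dilation rates d_i,
# the receptive field is RF = 1 + sum_i (k - 1) * d_i.

def receptive_field(kernel_size, dilations):
    """Receptive field of a stride-1 stack of dilated conv layers."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Five plain 3x3 layers vs. five dilated 3x3 layers (rates 1, 2, 4, 8, 16):
plain = receptive_field(3, [1, 1, 1, 1, 1])     # -> 11
dilated = receptive_field(3, [1, 2, 4, 8, 16])  # -> 63
print(plain, dilated)
```

With the same number of layers and parameters, exponentially growing dilation rates let far-apart pixels influence a feature map, which is the "large distance pixels participate" effect the abstract describes.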
In recent years, medical imaging-based disease detection and diagnosis have become a mainstream practice in the medical industry. Various computer-aided reconstruction models assist medical practitioners in detecting tumors and polyps, thereby diagnosing them more effectively. Deep learning is an emerging computer vision method that has seen tremendous growth in this field, but labeling data in medical imaging is difficult and inefficient, making supervised learning methods extremely expensive to train. For unsupervised learning models, their intrinsic logic limits their effectiveness on unlabeled data. In this paper, to address this challenge, traditional geometric 3D model reconstruction methods are combined with the latest supervised deep learning models to propose GeoVNet, a novel semi-supervised model. The proposed GeoVNet model is applied to the intestinal dataset for reconstruction and segmentation. Finally, experimental evaluations are conducted across various performance metrics: the precision rate attained is 98.16%, the recall rate achieved is 97.9%, and the IoU and Dice coefficient obtained are 83.5% and 90.9%, respectively. This shows that the proposed approach outperforms methods for unlabeled multi-device MRI intestinal imaging, as well as common models such as 3DUNet and DeepLabV3. In addition, an ablation analysis is conducted to validate and determine the effect of model hyperparameters on segmentation results.
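The IoU and Dice figures reported above follow the standard definitions for binary segmentation masks. A minimal sketch of those metrics (illustrative only, not the paper's evaluation code; the toy masks are made up):

```python
# IoU and Dice coefficient for binary segmentation masks
# (standard definitions; illustrative toy example).

def iou_and_dice(pred, target):
    """pred, target: flat sequences of 0/1 labels of equal length."""
    inter = sum(p & t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    iou = inter / union if union else 1.0
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    return iou, dice

pred = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
iou, dice = iou_and_dice(pred, target)
print(iou, dice)  # intersection 2, union 4 -> IoU 0.5, Dice 2/3
```

Dice always dominates IoU (Dice = 2·IoU/(1+IoU)), which is why the paper's 90.9% Dice pairs with a lower 83.5% IoU.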
The detection and classification of power quality (PQ) disturbances remains a significant challenge because of the rapid integration of renewable energy sources (RES), the widespread use of power electronics, and the increasing prevalence of sensitive microcontrollers. These evolving PQ issues necessitate the development of accurate and reliable methods for identifying and classifying PQ disturbances. In this paper, we propose a novel model based on a deep convolutional neural network (DCNN) for the feature extraction and classification of PQ disturbances. The architecture of the model was inspired by the visual geometry group (VGG) network, which is known for its effectiveness in image processing. The extracted features are highly suitable for both multi-class (MC) and multi-label (ML) classification tasks, effectively addressing the complexity of PQ disturbance signals. The ML approach proved particularly effective for classifying complex PQ disturbances. The performance of the model was rigorously evaluated using various metrics across different scenarios, demonstrating exceptional accuracy and robustness. The model was trained, validated, and tested using synthetically generated data under different signal-to-noise ratio (SNR) scenarios, ensuring its effectiveness in practical applications.
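The multi-class versus multi-label distinction the abstract draws can be sketched at the read-out layer. This is a generic illustration, not the paper's DCNN; the class names, logits, and 0.5 threshold are made-up assumptions:

```python
import math

# Multi-class vs. multi-label read-out of classifier logits (illustrative).
# Multi-class picks exactly one disturbance via argmax over softmax;
# multi-label thresholds per-class sigmoids, so several simultaneous
# disturbances (e.g. a sag combined with harmonics) can be reported.

def multiclass(logits, labels):
    return labels[max(range(len(logits)), key=logits.__getitem__)]

def multilabel(logits, labels, thresh=0.5):
    sigm = [1 / (1 + math.exp(-x)) for x in logits]
    return [lab for lab, p in zip(labels, sigm) if p >= thresh]

labels = ["sag", "swell", "harmonics", "flicker"]
logits = [2.1, -1.3, 1.7, -0.4]
print(multiclass(logits, labels))  # -> "sag"
print(multilabel(logits, labels))  # -> ["sag", "harmonics"]
```

The multi-label read-out is what lets one window of a PQ signal carry more than one disturbance label at once, which is the "complex PQ disturbances" case the abstract highlights.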
ISBN (print): 9798350349405; 9798350349399
Reconstructing neural radiance fields with explicit volumetric grids can significantly improve training and inference efficiency. However, these methods store features on voxels, inevitably increasing the cost of storing and transmitting scenes. To address this limitation, some recent works draw inspiration from model compression methods, reducing storage costs through pruning and quantization. In this paper, we propose a simple and effective framework called compression-aware vector quantized radiance fields (CA-VQRF), aiming to improve the view synthesis performance of the compressed grid while preserving the compression ratio. We identify the shortcomings of existing work and propose corresponding solutions. Specifically, we introduce a pruning-aware tuning strategy to shield the joint fine-tuning stage from the impact of pruned-off voxels. In addition, a noise-aware tuning strategy is proposed to further compensate for the performance loss caused by vector quantization. Extensive experiments demonstrate that, compared with the state-of-the-art compression method, our CA-VQRF achieves better view synthesis results at the same compression ratio.
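The vector-quantization step that CA-VQRF compensates for can be sketched in isolation. This is the generic nearest-codeword assignment used in grid compression, shown with made-up 2-D features and a tiny codebook, not CA-VQRF's actual parameters:

```python
# Nearest-neighbour vector quantization of per-voxel feature vectors
# (illustrative). Each feature vector is replaced by the index of its
# closest codeword, so storage drops from D floats per voxel to one
# small integer plus a shared codebook.

def quantize(features, codebook):
    """Return (indices, reconstruction) for nearest-codeword assignment."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    indices = [min(range(len(codebook)), key=lambda k: dist2(f, codebook[k]))
               for f in features]
    recon = [codebook[i] for i in indices]
    return indices, recon

codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
features = [[0.1, -0.1], [0.9, 0.2], [0.2, 0.8]]
idx, recon = quantize(features, codebook)
print(idx)  # -> [0, 1, 2]
```

The gap between `features` and `recon` is exactly the quantization error that the paper's noise-aware tuning strategy is designed to absorb during fine-tuning.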
To address the slow speed and poor accuracy of traditional millimeter-wave sparse imaging, a sparse imaging algorithm based on a graph convolution model is proposed from the perspective of sparse signal recovery. The graph signal model is constructed by combining low-rank and piecewise smoothing (LRPS) regularization terms; on this basis, the proximal operator is replaced by a denoising graph convolution network to build the graph convolution sparse reconstruction network LRPS-GCN, and the recovered target image is obtained by iterating with the optimal non-linear sparse variation. Simulation experiments are carried out on synthetic datasets under different target densities, iteration counts, and noise environments, with comparisons against a traditional graph signal reconstruction algorithm and a deep compressed sensing reconstruction algorithm; measured data with varying degrees of sparsity are then used for validation. The experimental results show that the images reconstructed by this algorithm perform better in terms of normalised mean square error, target-to-background ratio, reconstruction time, and memory usage.
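The "replace the proximal operator with a learned denoiser" idea above builds on classical proximal-gradient (ISTA-style) sparse recovery. A minimal sketch of one hand-crafted ISTA iteration, i.e. the step that LRPS-GCN swaps for a denoising graph convolution network (toy matrix and parameters are made up):

```python
# One ISTA iteration for min_x 0.5*||Ax - y||^2 + lam*||x||_1
# (illustrative). The soft-threshold is the proximal operator of the
# L1 term; learned unrolled methods replace it with a trained denoiser.

def soft_threshold(x, t):
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in x]

def ista_step(x, A, y, lam, step):
    # residual r = Ax - y, gradient of the data term g = A^T r
    r = [sum(A[i][j] * x[j] for j in range(len(x))) - y[i]
         for i in range(len(y))]
    g = [sum(A[i][j] * r[i] for i in range(len(y))) for j in range(len(x))]
    z = [xi - step * gi for xi, gi in zip(x, g)]   # gradient step
    return soft_threshold(z, step * lam)           # proximal step

A = [[1.0, 0.0], [0.0, 1.0]]  # identity measurement for simplicity
y = [1.0, 0.05]
x = ista_step([0.0, 0.0], A, y, lam=0.2, step=1.0)
print(x)  # the large coefficient survives, the small one is zeroed
```

The appeal of the learned variant is that a trained denoiser can encode richer priors (here, the low-rank and piecewise-smooth structure) than the fixed soft-threshold can.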
The process of fusing infrared and visible images necessitates integrating thermal radiation information from infrared image with the edge and texture detail captured by visible images. Most current fusion methods are...
Today, in the digital age, high-quality video and pictures are more important than ever in many areas, from medical images and monitoring to fun and multimedia. This study looks into how neural networks can be used in...
The Brain-Computer Interface (BCI) has applications in smart homes and healthcare by converting EEG signals into control commands. However, traditional EEG signal decoding methods are affected by individual differences, and although deep learning techniques have made significant breakthroughs, challenges such as high energy consumption and the processing of raw EEG data remain. This paper introduces the Efficient Channel Attention Temporal Convolutional Network (ECA-ATCNet) to enhance feature learning by applying Efficient Channel Attention Convolution (ECA-conv) across spatial and spectral dimensions. The model outperforms state-of-the-art methods in both within-subject and between-subject classification tasks on MI-EEG datasets (BCI-2a and PhysioNet), achieving accuracies of 87.89% and 71.88%, respectively. Additionally, the proposed Spike Integrated Transformer Conversion (SIT-conversion) method, based on Spiking-Softmax, converts the Transformer's self-attention mechanism into Spiking Neural Networks (SNNs) in just 12 time steps. The accuracy loss of the converted ECA-ATCNet model is only 0.6% to 0.73%, while its energy consumption is reduced by 52.84% to 53.52%. SIT-conversion enables ultra-low-latency, near-lossless ANN-to-SNN conversion, with SNNs achieving accuracy similar to their ANN counterparts on image datasets. Inference energy consumption is reduced by 18.18% to 45.13%. This method offers a novel approach for low-power, portable BCI applications and contributes to the advancement of energy-efficient SNN algorithms.
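The ANN-to-SNN conversion idea can be illustrated with the textbook rate-coding view: an integrate-and-fire neuron's firing rate over T time steps approximates a ReLU activation. This is intuition only; SIT-conversion's Spiking-Softmax handling of self-attention is considerably more involved, and the threshold and step count below are made up:

```python
# Rate-coded approximation of ReLU by an integrate-and-fire neuron
# (illustrative). The neuron integrates a constant input each step and
# fires when the membrane potential crosses threshold; with a soft reset,
# the firing rate over T steps approximates max(0, a) for a in [0, 1].

def spiking_relu(a, T=100, threshold=1.0):
    v, spikes = 0.0, 0
    for _ in range(T):
        v += a                 # integrate constant input current
        if v >= threshold:
            v -= threshold     # soft reset preserves residual charge
            spikes += 1
    return spikes / T          # firing rate ~ ReLU(a)

for a in (-0.3, 0.25, 0.7):
    print(a, spiking_relu(a))
```

The approximation error shrinks as T grows, which is why converting a network in only 12 time steps, as SIT-conversion reports, is notable.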
Epilepsy is a neurological disorder that affects the normal functioning of the brain. More than 10% of the population across the globe is affected by this disorder. Electroencephalogram (EEG) is prominently employed to gather information about the brain's electrical activity. This study proposes an end-to-end system using a combination of two deep learning models, Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs), for the classification of EEG signals of subjects with epilepsy into three classes, namely preictal, normal, and seizure. The experimental results are obtained using the publicly available and popular Bonn University dataset. In this CNN-LSTM classification model, the feature extraction, selection, and classification tasks are performed automatically, without handcrafted feature extraction methods. The performance of the CNN-LSTM model is examined and evaluated in terms of specificity, sensitivity, and accuracy using a tenfold cross-validation approach. The experiments performed yield an accuracy of 99.33%, a sensitivity of 99.33%, and a specificity of 99.66%. Our results highlight that deep learning methods are best suited for this classification task in comparison to other existing state-of-the-art methods.
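The tenfold cross-validation protocol used for these figures is standard: each sample lands in exactly one validation fold and nine training folds. A minimal index-splitting sketch (illustrative; the paper's actual EEG data handling is not reproduced):

```python
# K-fold cross-validation index split (illustrative). Returns k
# (train_indices, val_indices) pairs; every sample appears in exactly
# one validation fold, and fold sizes differ by at most one.

def k_fold_indices(n, k=10):
    folds = []
    base, extra = divmod(n, k)
    start = 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        val = list(range(start, start + size))
        train = [j for j in range(n) if j < start or j >= start + size]
        folds.append((train, val))
        start += size
    return folds

folds = k_fold_indices(100, 10)
print(len(folds), len(folds[0][0]), len(folds[0][1]))  # -> 10 90 10
```

Reporting the mean metric across the ten held-out folds, as the study does, guards against the optimism of a single lucky train/test split.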
The upcoming Square Kilometre Array Observatory will produce images of the neutral hydrogen distribution during the epoch of reionization by observing the corresponding 21-cm signal. However, the 21-cm signal will be subject to instrumental limitations such as noise and galactic foreground contamination that pose a challenge for accurate detection. In this study, we present the SegU-Net v2 framework, an enhanced version of our convolutional neural network, built to identify neutral and ionized regions in the 21-cm signal contaminated with foreground emission. We trained our neural network on 21-cm image data processed by a foreground removal method based on Principal Component Analysis, achieving an average classification accuracy of 71 per cent between redshift z = 7 and 11. We tested SegU-Net v2 against various foreground removal methods, including Gaussian Process Regression, Polynomial Fitting, and Foreground-Wedge Removal. Results show comparable performance, highlighting SegU-Net v2's independence from these pre-processing methods. Statistical analysis shows that a perfect classification score with AUC = 95 is possible for 8 < z < 10. However, the network prediction cannot correctly identify ionized regions at higher redshift, nor differentiate well the few remaining neutral regions at lower redshift, due to the low contrast between the 21-cm signal, noise, and foreground residuals in the images. Moreover, as the photon sources driving reionization are expected to be located inside ionized regions, we show that SegU-Net v2 can be used to correctly identify and measure the volume of isolated bubbles with V_ion > (10 cMpc)^3 at z > 9, for follow-up studies with infrared/optical telescopes to detect these sources.
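The PCA-based foreground removal mentioned above exploits the fact that foregrounds are spectrally smooth, so they concentrate in the leading principal components along the frequency axis; subtracting those components leaves the spectrally rough 21-cm signal plus noise. A toy sketch of removing the dominant mode (illustrative only, with made-up data, not the pipeline used to train SegU-Net v2):

```python
# Project out the leading principal mode of a set of per-pixel spectra
# (illustrative core of PCA-style foreground subtraction).

def top_component(cov, iters=200):
    """Leading eigenvector of a symmetric matrix via power iteration."""
    n = len(cov)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def remove_top_mode(data):
    """data: list of spectra; subtract each spectrum's leading-mode part."""
    n = len(data[0])
    cov = [[sum(s[i] * s[j] for s in data) / len(data) for j in range(n)]
           for i in range(n)]
    v = top_component(cov)
    return [[si - sum(a * b for a, b in zip(s, v)) * vi
             for si, vi in zip(s, v)] for s in data]

# A strong smooth mode shared by all spectra, plus small residuals:
data = [[3.0, 3.0, 3.0], [3.0, 3.1, 2.9], [2.9, 3.0, 3.1]]
cleaned = remove_top_mode(data)
print(cleaned[0])  # near zero once the dominant smooth mode is removed
```

Real pipelines remove several components and must trade foreground suppression against signal loss, which is one source of the residual contamination the study reports at low contrast.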