This paper first introduces the SIMD (single instruction, multiple data) extension technology and presents three ways to use SIMD instructions. It is considered that calling the third-party library, which is optimized ...
Federated learning (FL) and split learning (SL) are prevailing distributed paradigms in recent years. They both enable shared global model training while keeping data localized on users' devices. The former excels...
Deep learning has been widely used in source code classification tasks, such as code classification according to their functionalities, code authorship attribution, and vulnerability detection. Unfortunately, the blac...
In the practice of dentistry, oral dental CT images are frequently used to assist doctors in diagnosis. The filtered back projection (FBP) technique is widely employed in practice to reconstruct CT images from X-ray measurements. However, when metal objects are present in a patient's oral cavity, the CT images show density discontinuities because the metals' X-ray absorption coefficient is much larger than that of human tissues. When the FBP algorithm is applied to CT data containing metals, severe metal artifacts appear, which significantly degrade the reconstructed images. Therefore, metal artifact reduction (MAR) has become an important problem in dental image processing. In this paper, we propose a novel iterative sinogram metal artifact reduction model (IS-MARM) to solve this problem. Inspired by diffusion models, we propose a new method that iteratively reduces metal artifacts and interpolates new data in the sinograms of dental CT images. This approach reduces the difficulty of model learning and achieves good results. Secondly, we propose a simple new method of iterative data generation to simulate real-world metals in CT sinogram images. Finally, we demonstrate the effectiveness of our method through experiments on dental CT MAR.
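The paper's diffusion-inspired iterative model is not reproduced here, but the underlying sinogram-domain idea can be illustrated with the classical linear-interpolation MAR baseline: detector bins covered by the metal trace are discarded and re-estimated from their nearest clean neighbours. This is a minimal one-row sketch; the function name and setting are illustrative, not from the paper.

```python
def interpolate_metal_trace(row, mask):
    """Replace metal-corrupted detector bins (mask[i] == True) in one
    sinogram row by linear interpolation between the nearest clean
    neighbours on each side. Classical linear-interpolation MAR baseline,
    not the paper's iterative diffusion-style model."""
    out = list(row)
    n = len(row)
    i = 0
    while i < n:
        if mask[i]:
            # Find the contiguous run of corrupted bins [i, j).
            j = i
            while j < n and mask[j]:
                j += 1
            # Clean values bordering the run (fall back at the edges).
            left = out[i - 1] if i > 0 else (out[j] if j < n else 0.0)
            right = out[j] if j < n else left
            span = j - i + 1
            for k in range(i, j):
                t = (k - i + 1) / span
                out[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return out
```

Real MAR pipelines apply this per projection angle, forward-projecting a segmented metal mask to locate the trace; the paper replaces the naive interpolation step with learned iterative inpainting.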
With the evolution of self-supervised learning, the pre-training paradigm has emerged as a predominant solution within the deep learning landscape. Model providers furnish pre-trained encoders designed to function as versatile feature extractors, enabling downstream users to harness the benefits of expansive models with minimal effort through fine-tuning. Nevertheless, recent works have exposed a vulnerability in pre-trained encoders, highlighting their susceptibility to downstream-agnostic adversarial examples (DAEs) meticulously crafted by attackers. The lingering question pertains to the feasibility of fortifying the robustness of downstream models against DAEs, particularly in scenarios where the pre-trained encoders are publicly accessible to the attackers. In this paper, we initially delve into existing defensive mechanisms against adversarial examples within the pre-training paradigm. Our findings reveal that the failure of current defenses stems from the domain shift between pre-training data and downstream tasks, as well as the sensitivity of encoder parameters. In response to these challenges, we propose Genetic Evolution-Nurtured Adversarial Fine-tuning (Gen-AF), a two-stage adversarial fine-tuning approach aimed at enhancing the robustness of downstream models. Gen-AF employs a genetic-directed dual-track adversarial fine-tuning strategy in its first stage to effectively inherit the pre-trained encoder. This involves optimizing the pre-trained encoder and classifier separately while incorporating genetic regularization to preserve the model’s topology. In the second stage, Gen-AF assesses the robust sensitivity of each layer and creates a dictionary, based on which the top-k robust redundant layers are selected with the remaining layers held fixed. Upon this foundation, we conduct evolutionary adaptability fine-tuning to further enhance the model’s generalizability. Our extensive experiments, conducted across ten self-supervised training methods and six ...
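The second-stage selection step can be sketched as follows, assuming that lower robust-sensitivity scores mark the more redundant layers (the abstract does not spell out the ordering, so this is an assumption, and all layer names and scores below are illustrative):

```python
def select_robust_redundant_layers(sensitivity, k):
    """Given a dictionary mapping layer names to robust-sensitivity
    scores (assumed: lower score = more redundant), return the k most
    redundant layers to fine-tune; the remaining layers stay frozen.
    Hypothetical sketch of Gen-AF's stage-2 selection, not the
    authors' implementation."""
    ranked = sorted(sensitivity, key=sensitivity.get)
    return ranked[:k]


def freeze_plan(sensitivity, k):
    """Map each layer name to whether it should be trainable."""
    trainable = set(select_robust_redundant_layers(sensitivity, k))
    return {name: (name in trainable) for name in sensitivity}
```

In a real framework the plan would be applied by toggling each layer's gradient flag (e.g. `requires_grad` in PyTorch) before the evolutionary adaptability fine-tuning pass.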
Deep neural networks are proven to be vulnerable to backdoor attacks. Detecting the trigger samples during the inference stage, i.e., the test-time trigger sample detection, can prevent the backdoor from being trigger...
Point cloud completion, as the upstream procedure of 3D recognition and segmentation, has become an essential part of many tasks such as navigation and scene understanding. While various point cloud completion models ...
Self-supervised learning usually uses a large amount of unlabeled data to pre-train an encoder which can be used as a general-purpose feature extractor, such that downstream users only need to perform fine-tuning oper...
ISBN: 9781665418164 (print)
The variational graph autoencoder (VGAE), a framework for unsupervised learning on graph-structured data, has recently attracted increasing attention in the graph embedding area. However, it faces the challenge of KL vanishing, which causes convergence to a local optimum and makes the graph embedding unusable for downstream tasks such as link prediction. This paper proposes a novel variational graph autoencoder framework to achieve more effective graph embedding. Firstly, we introduce batch normalization to keep the KL distribution consistent across the whole dataset by keeping its expectation positive, thus avoiding posterior collapse. In addition, we introduce residual connections and an adversarial network to stably embed both topology and content information into the graph representation, enhancing the expressive ability of the latent vector. Finally, link prediction experiments on three citation datasets demonstrate that the AUC scores of our algorithm exceed 92% and the average accuracies exceed 93%, which is competitive with state-of-the-art variational graph autoencoders.
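The batch-normalization argument can be made concrete with a small numeric sketch: normalizing each latent-mean dimension across the batch fixes its second moment near the BN scale gamma squared, so the expected KL term stays bounded away from zero and the posterior cannot collapse onto the prior. All numbers below are illustrative; this is a simplified sketch, not the authors' implementation.

```python
import math

def kl_diag_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ) for one latent vector."""
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, logvar))

def batch_norm(vectors, gamma=1.0, beta=0.0, eps=1e-8):
    """Normalize each latent dimension across the batch to shift `beta`
    and scale `gamma`. With gamma = 1, the batch mean of mu^2 per
    dimension is pinned near 1, lower-bounding the average KL."""
    n, d = len(vectors), len(vectors[0])
    out = [[0.0] * d for _ in range(n)]
    for j in range(d):
        col = [v[j] for v in vectors]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        for i in range(n):
            out[i][j] = gamma * (col[i] - mean) / math.sqrt(var + eps) + beta
    return out

# With unit log-variances forced to zero, the average KL after BN is
# 0.5 * d * gamma^2 regardless of how small the raw means shrink.
mus = [[2.0, 0.0], [0.0, 2.0], [-2.0, -2.0], [0.0, 0.0]]
normalized = batch_norm(mus)
avg_kl = sum(kl_diag_gaussian(m, [0.0, 0.0]) for m in normalized) / len(mus)
```

Here `avg_kl` stays near 1.0 (0.5 per latent dimension) even if the raw means were scaled toward zero, which is the mechanism the paper uses to avoid KL vanishing.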