ISBN (digital): 9798350331301
ISBN (print): 9798350331318
With the evolution of self-supervised learning, the pre-training paradigm has emerged as a predominant solution within the deep learning landscape. Model providers furnish pre-trained encoders designed to function as versatile feature extractors, enabling downstream users to harness the benefits of expansive models with minimal effort through fine-tuning. Nevertheless, recent works have exposed a vulnerability in pre-trained encoders, highlighting their susceptibility to downstream-agnostic adversarial examples (DAEs) meticulously crafted by attackers. The lingering question pertains to the feasibility of fortifying the robustness of downstream models against DAEs, particularly in scenarios where the pre-trained encoders are publicly accessible to attackers. In this paper, we initially delve into existing defensive mechanisms against adversarial examples within the pre-training paradigm. Our findings reveal that the failure of current defenses stems from the domain shift between pre-training data and downstream tasks, as well as the sensitivity of encoder parameters. In response to these challenges, we propose Genetic Evolution-Nurtured Adversarial Fine-tuning (Gen-AF), a two-stage adversarial fine-tuning approach aimed at enhancing the robustness of downstream models. In its first stage, Gen-AF employs a genetic-directed dual-track adversarial fine-tuning strategy to effectively inherit the pre-trained encoder: the pre-trained encoder and the classifier are optimized separately, while genetic regularization preserves the model's topology. In the second stage, Gen-AF assesses the robust sensitivity of each layer and builds a dictionary, from which the top-k robust redundant layers are selected with the remaining layers held fixed. Upon this foundation, we conduct evolutionary adaptability fine-tuning to further enhance the model's generalizability. Our extensive experiments, conducted across ten self-supervised training methods and six d…
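To make the stage-one procedure concrete, below is a minimal PyTorch-style sketch of dual-track adversarial fine-tuning, assuming generic encoder/classifier modules and an external attack routine make_adv. The genetic regularizer is approximated here as a simple proximity penalty to the pre-trained weights; it stands in for, and is not, the paper's exact topology-preserving term.

```python
import torch
import torch.nn.functional as F

def genetic_reg(encoder, pretrained):
    # Stand-in regularizer: penalize drift from the pre-trained weights so the
    # encoder's learned structure is roughly preserved (an assumption, not the
    # paper's actual formulation).
    return sum(((p - q) ** 2).sum()
               for p, q in zip(encoder.parameters(), pretrained))

def stage_one(encoder, classifier, loader, make_adv, epochs=5, lam=0.1):
    # Dual track: encoder and classifier get separate optimizers so each can
    # be tuned at its own pace (learning rates here are placeholders).
    pretrained = [p.detach().clone() for p in encoder.parameters()]
    enc_opt = torch.optim.SGD(encoder.parameters(), lr=1e-3)     # slow track
    cls_opt = torch.optim.SGD(classifier.parameters(), lr=1e-2)  # fast track
    for _ in range(epochs):
        for x, y in loader:
            x_adv = make_adv(encoder, classifier, x, y)  # e.g. a PGD attack
            loss = F.cross_entropy(classifier(encoder(x_adv)), y) \
                 + lam * genetic_reg(encoder, pretrained)
            enc_opt.zero_grad(); cls_opt.zero_grad()
            loss.backward()
            enc_opt.step(); cls_opt.step()
```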
Traditional unlearnable strategies have been proposed to prevent unauthorized users from training on 2D image data. With more 3D point cloud data containing sensitive information, unauthorized usage of this new ...
This paper explores the evolution of geoscientific inquiry, tracing the progression from traditional physics-based models to modern data-driven approaches facilitated by significant advancements in artificial intelligence (AI) and data collection techniques. Traditional models, which are grounded in physical and numerical frameworks, provide robust explanations by explicitly reconstructing underlying physical processes. However, their limitations in comprehensively capturing Earth's complexities and uncertainties pose challenges in optimization and real-world application. In contrast, contemporary data-driven models, particularly those utilizing machine learning (ML) and deep learning (DL), leverage extensive geoscience data to glean insights without requiring exhaustive theoretical knowledge. These techniques have shown promise in addressing Earth science-related problems. However, challenges such as data scarcity, computational demands, data privacy concerns, and the "black-box" nature of AI models hinder their seamless integration into geoscience. The integration of physics-based and data-driven methodologies into hybrid models presents an alternative approach. These models, which incorporate domain knowledge to guide AI methodologies, demonstrate enhanced efficiency and performance with reduced training data requirements. This review provides a comprehensive overview of geoscientific research paradigms, emphasizing untapped opportunities at the intersection of advanced AI techniques and geoscience. It examines major methodologies, showcases advances in large-scale models, and discusses the challenges and prospects that will shape the future landscape of AI in geoscience. The paper outlines a dynamic field ripe with possibilities, poised to unlock new understandings of Earth's complexities and further advance geoscience exploration.
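As a minimal illustration of the hybrid paradigm the review surveys, the sketch below combines a data-fitting loss with a physics-residual penalty. The model, the physics_residual callable, and the weighting lam are hypothetical placeholders, not a formulation from the review.

```python
import torch

def hybrid_loss(model, x, y_obs, physics_residual, lam=1.0):
    # Data term: fit observations, as a purely data-driven model would.
    y_pred = model(x)
    data_term = torch.mean((y_pred - y_obs) ** 2)
    # Physics term: penalize violation of a known governing equation.
    # physics_residual is a hypothetical callable returning the PDE/ODE
    # residual of the prediction (zero when the physics is satisfied).
    phys_term = torch.mean(physics_residual(x, y_pred) ** 2)
    return data_term + lam * phys_term
```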
Federated learning (FL) and split learning (SL) are prevailing distributed paradigms in recent years. They both enable shared global model training while keeping data localized on users' devices. The former excels in parallel execution capabilities, while the latter enjoys low dependence on edge computing resources and strong privacy protection. Split federated learning (SFL) combines the strengths of both FL and SL, making it one of the most popular distributed architectures. Furthermore, a recent study has claimed that SFL exhibits robustness against poisoning attacks, with a fivefold improvement over FL in terms of robustness. In this paper, we present a novel poisoning attack known as MISA. It poisons both the top and bottom models, causing a misalignment in the global model and ultimately leading to a drastic accuracy collapse. This attack unveils vulnerabilities in SFL, challenging the conventional belief that SFL is robust against poisoning attacks. Extensive experiments demonstrate that our proposed MISA poses a significant threat to the availability of SFL, underscoring the imperative for academia and industry to accord this matter due attention.
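For readers unfamiliar with the SFL setting, the following minimal PyTorch sketch shows the bottom/top model split that such an attack targets. The architecture and cut point are illustrative assumptions; the attack itself is not reproduced.

```python
import torch
import torch.nn as nn

# In SFL, each client holds a bottom model and the server holds the top model.
# A poisoning client can perturb both its bottom-model update and the
# activations ("smashed data") it sends upward across the cut layer.
bottom = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Flatten())
top = nn.Sequential(nn.Linear(16 * 32 * 32, 10))

x = torch.randn(8, 3, 32, 32)   # a client-side batch
smashed = bottom(x)             # sent from client to server at the cut layer
logits = top(smashed)           # server-side forward pass completes the model
```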
3D anomaly detection (AD) is prominent but difficult due to the lack of a unified theoretical foundation for preprocessing design. We establish the Fence Theorem, formalizing preprocessing as a dual-objective semantic iso...
Segment Anything Model (SAM) has recently gained much attention for its outstanding generalization to unseen data and tasks. Despite its promising prospects, the vulnerabilities of SAM, especially to universal adversar...
ISBN (digital): 9798350309270
ISBN (print): 9798350309287
In recent years, microfluidic biochips have been widely applied in various fields of human society. The emergence of the distributed channel-storage architecture enables fluid to be cached directly within the flow channels without being transported to dedicated storage. However, since the volume of the flow channels is limited, the volume of fluid to be cached must be considered carefully to ensure it can be accommodated within the appropriate flow channels. In this paper, we propose a high-level synthesis method that accounts for actual volume management and channel storage through integer linear programming, providing a rational analysis of fluid caching and the temporal relationships between different fluid-handling tasks. Experimental results on multiple benchmarks demonstrate the effectiveness of the proposed method in reducing the completion time of the bioassay, the number of caches, the maximum fluid caching volume, and the number of fluid-handling tasks.
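As a toy illustration of the kind of formulation involved, the sketch below encodes a volume-capacity assignment as an ILP with the open-source PuLP library. The fluids, volumes, and channel capacities are invented, and the paper's full model additionally captures the temporal relationships between fluid-handling tasks.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

fluids = {"f1": 10, "f2": 25, "f3": 15}   # fluid volumes to cache (uL), invented
channels = {"c1": 30, "c2": 30}           # channel capacities (uL), invented

prob = LpProblem("channel_storage", LpMinimize)
assign = {(f, c): LpVariable(f"x_{f}_{c}", cat=LpBinary)
          for f in fluids for c in channels}
used = {c: LpVariable(f"u_{c}", cat=LpBinary) for c in channels}

for f in fluids:          # every fluid must be cached in exactly one channel
    prob += lpSum(assign[f, c] for c in channels) == 1
for c in channels:        # cached volume must fit within the channel capacity
    prob += lpSum(fluids[f] * assign[f, c] for f in fluids) <= channels[c] * used[c]

prob += lpSum(used.values())   # objective: minimize the number of channels used
prob.solve()
```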
ISBN (digital): 9798350331301
ISBN (print): 9798350331318
Adversarial examples for deep neural networks (DNNs) are transferable: examples that successfully fool one white-box surrogate model can also deceive other black-box models with different architectures. Although a number of empirical studies have provided guidance on generating highly transferable adversarial examples, many of these findings are not well explained and even lead to confusing or inconsistent advice for practical use. In this paper, we take a further step towards understanding adversarial transferability, with a particular focus on surrogate aspects. Starting from the intriguing "little robustness" phenomenon, where models adversarially trained with mildly perturbed adversarial samples can serve as better surrogates for transfer attacks, we attribute it to a trade-off between two dominant factors: model smoothness and gradient similarity. Our research focuses on their joint effects on transferability, rather than demonstrating the separate relationships alone. Through a combination of theoretical and empirical analyses, we hypothesize that the data distribution shift induced by off-manifold samples in adversarial training is what impairs gradient similarity. Based on these insights, we further explore the impacts of prevalent data augmentation and gradient regularization on transferability and analyze how the trade-off manifests in various training methods, thus building a comprehensive blueprint for the regulation mechanisms behind transferability. Finally, we provide a general route for constructing superior surrogates to boost transferability, which optimizes both model smoothness and gradient similarity simultaneously, e.g., the combination of input gradient regularization and sharpness-aware minimization (SAM), validated by extensive experiments. In summary, we call for attention to the united impacts of these two factors for launching effective transfer attacks, rather than optimizing one while ignoring the other, and emphasize the…
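A hedged PyTorch-style sketch of the suggested surrogate-training recipe, combining sharpness-aware minimization with an input-gradient penalty, is given below. The hyperparameters rho and lam and the surrounding training loop are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn.functional as F

def sam_step_with_igr(model, opt, x, y, rho=0.05, lam=0.1):
    # Input-gradient regularization: penalize the norm of d(loss)/d(input)
    # to encourage a smoother surrogate.
    x1 = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x1), y)
    g_x, = torch.autograd.grad(loss, x1, create_graph=True)
    (loss + lam * g_x.pow(2).sum()).backward()

    # SAM ascent step: perturb the weights toward the local worst case.
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(rho * g / norm)

    # SAM descent step: gradient at the perturbed point, then restore weights.
    opt.zero_grad()
    x2 = x.clone().requires_grad_(True)
    loss2 = F.cross_entropy(model(x2), y)
    g_x2, = torch.autograd.grad(loss2, x2, create_graph=True)
    (loss2 + lam * g_x2.pow(2).sum()).backward()
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.sub_(rho * g / norm)
    opt.step()
```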
Unmanned Aerial Vehicles (UAVs) possess high mobility and flexible deployment capabilities, prompting the development of UAVs for various application scenarios within the Internet of Things (IoT). The unique capabilit...
ISBN (digital): 9798350376968
ISBN (print): 9798350376975
The widespread use of deep neural networks (DNNs) in image classification software underlines the importance of robustness. Researchers have proposed sparse adversarial attack methods for generating test cases, which add pixel-level perturbations to construct test cases that mislead the target model. However, existing methods have certain limitations, such as high time cost, poor flexibility, and poor quality of the generated test cases. To address these issues, we propose a gradient-guided test case generation method (GGTM) to evaluate the robustness of image classification software. The method first identifies the key region in the image based on gradient-weighted class activation mapping (Grad-CAM) and the prediction confidence of the target model on the input image. Within the key region, it selects a set of pixels as candidate perturbation pixels according to their gradient values and the change of the loss function. Perturbations are then added to the candidate perturbation pixels after a random dropout strategy removes some of the candidates, which helps avoid local optima. For an initially constructed test case that misleads the target model, redundant and unimportant perturbations are removed and perturbations are re-added to optimize the test case. Experiments show the effectiveness of GGTM, which achieves a 100% attack success rate, and the test cases it generates exhibit the best perturbation sparsity. Furthermore, compared with SparseAG, the baseline that achieves optimal perturbation sparsity among the baselines, GGTM significantly improves efficiency.
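The following is a minimal sketch of the gradient-guided selection step, assuming a precomputed Grad-CAM heatmap. The function name, the 0.5 region threshold, and the top-k cutoff are illustrative assumptions, not the authors' exact GGTM implementation.

```python
import torch
import torch.nn.functional as F

def candidate_pixels(model, x, y, cam, k=100):
    # cam: a precomputed Grad-CAM heatmap of shape (H, W), values in [0, 1].
    x = x.clone().requires_grad_(True)      # x: (1, C, H, W)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    grad_mag = x.grad.abs().sum(dim=1)[0]   # (H, W) per-pixel gradient saliency
    score = grad_mag * (cam > 0.5).float()  # restrict to the Grad-CAM key region
    topk = torch.topk(score.flatten(), k).indices  # top-k candidate pixels
    w = score.shape[1]
    return [(int(i) // w, int(i) % w) for i in topk]  # (row, col) coordinates
```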