Federated learning (FL) and split learning (SL) have become prevailing distributed paradigms in recent years. Both enable training a shared global model while keeping data localized on users' devices. The former excels in parallel execution, while the latter enjoys low dependence on edge computing resources and strong privacy protection. Split federated learning (SFL) combines the strengths of both FL and SL, making it one of the most popular distributed architectures. Furthermore, a recent study has claimed that SFL exhibits robustness against poisoning attacks, with a fivefold improvement over FL. In this paper, we present a novel poisoning attack named MISA. It poisons both the top and bottom models, causing a misalignment in the global model and ultimately a drastic accuracy collapse. This attack unveils the vulnerabilities of SFL, challenging the conventional belief that SFL is robust against poisoning attacks. Extensive experiments demonstrate that MISA poses a significant threat to the availability of SFL, underscoring the need for due attention from academia and industry.
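The setting this abstract describes can be illustrated with a toy numerical sketch: a global model split into a client-side "bottom" and server-side "top" half, where one poisoned client submits updates that pull the two halves in conflicting directions. All names and numbers below are hypothetical; this is not the MISA attack itself, only a minimal illustration of the misalignment it exploits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Global model split into a client-side "bottom" and a server-side "top" part.
# Honest clients push both halves in roughly the same direction.
honest_bottom_updates = [rng.normal(0.1, 0.01, 4) for _ in range(4)]
honest_top_updates = [rng.normal(0.1, 0.01, 4) for _ in range(4)]

# A poisoned client submits scaled, sign-flipped updates that pull the two
# halves apart, misaligning the assembled global model.
malicious_bottom = -5.0 * np.ones(4)
malicious_top = 5.0 * np.ones(4)

# FedAvg-style aggregation over all five clients.
agg_bottom = np.mean(honest_bottom_updates + [malicious_bottom], axis=0)
agg_top = np.mean(honest_top_updates + [malicious_top], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Without the attacker the two halves move coherently; with it they diverge.
clean_alignment = cosine(np.mean(honest_bottom_updates, axis=0),
                         np.mean(honest_top_updates, axis=0))
poisoned_alignment = cosine(agg_bottom, agg_top)
```

Under this toy model, the clean update directions are nearly parallel (cosine close to 1), while a single scaled malicious client drives the aggregated bottom and top updates to point in opposite directions.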
ISBN (digital): 9798350331301
ISBN (print): 9798350331318
Adversarial examples for deep neural networks (DNNs) are transferable: examples that successfully fool one white-box surrogate model can also deceive other black-box models with different architectures. Although a number of empirical studies have provided guidance on generating highly transferable adversarial examples, many of these findings are not well explained and even lead to confusing or inconsistent advice in practice. In this paper, we take a further step towards understanding adversarial transferability, with a particular focus on surrogate aspects. Starting from the intriguing "little robustness" phenomenon, where models adversarially trained with mildly perturbed adversarial samples can serve as better surrogates for transfer attacks, we attribute it to a trade-off between two dominant factors: model smoothness and gradient similarity. Our research focuses on their joint effects on transferability, rather than their separate relationships alone. Through a combination of theoretical and empirical analyses, we hypothesize that the data distribution shift induced by off-manifold samples in adversarial training is what impairs gradient similarity. Based on these insights, we further explore the impacts of prevalent data augmentation and gradient regularization on transferability and analyze how the trade-off manifests in various training methods, thus building a comprehensive blueprint for the regulation mechanisms behind transferability. Finally, we provide a general route for constructing superior surrogates to boost transferability, which optimizes both model smoothness and gradient similarity simultaneously, e.g., combining input gradient regularization with sharpness-aware minimization (SAM), validated by extensive experiments. In summary, we call for attention to the united impacts of these two factors when launching effective transfer attacks, rather than optimizing one while ignoring the other, and emphasize the importance of jointly regulating them when constructing surrogates.
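Since the abstract highlights sharpness-aware minimization (SAM) as one ingredient for smoother surrogates, here is a minimal sketch of a single SAM step on a toy quadratic loss. The loss, `rho`, and `lr` values are illustrative choices, not settings from the paper.

```python
import numpy as np

# Toy quadratic loss L(w) = 0.5 * w^T A w with an ill-conditioned ("sharp") axis.
A = np.diag([10.0, 1.0])

def loss(w):
    return 0.5 * w @ A @ w

def grad(w):
    return A @ w

def sam_step(w, lr=0.05, rho=0.1):
    g = grad(w)
    # Ascend to a nearby worst-case point within a ball of radius rho ...
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # ... then descend using the gradient evaluated at that perturbed point.
    g_sharp = grad(w + eps)
    return w - lr * g_sharp

w = np.array([1.0, 1.0])
for _ in range(100):
    w = sam_step(w)
```

The key design choice is that the descent direction is taken at the perturbed point `w + eps`, which biases optimization toward flat minima; on this toy problem the iterate converges close to the (flat) minimum at the origin.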
Traditional unlearnable strategies have been proposed to prevent unauthorized users from training on 2D image data. With more 3D point cloud data containing sensitive information, unauthorized usage of this new ...
The fractional Schrödinger equation (FSE) on the real line arises in a broad range of physical settings, and its numerical simulation is challenging due to the nonlocal nature and the power-law decay of the solu...
Automatic speech verification (ASV) authenticates individuals based on distinct vocal patterns, playing a pivotal role in many applications such as voice-based unlocking systems for devices. The ASV system comprises three stages: training, registration, and validation. The model is refined on voice data during training, extracts vocal features during registration, and compares these with speech patterns during validation. Modern ASV models, primarily built on DNN architectures, require extensive data for training. Federated learning (FL) enables model sharing across multiple clients while preserving data privacy. Due to its open architecture, however, FL is vulnerable to backdoor attacks. Training a stealthy backdoor attack in FL presents challenges, including diminished attack generalization owing to data heterogeneity and conspicuous triggers that render attacks easily detectable. In this paper, we propose a Federated Stealthy Backdoor Attack method (FedSBA). FedSBA aims to improve the attack model's generalization, enhance its persistence, and elude anomaly detection under heterogeneous data distributions. FedSBA constructs an attack model based on a personalized transformer and incorporates a stealthy trigger. Moreover, we also propose a defensive strategy that utilizes an adaptive weight aggregation scheme. The stealthiness and effectiveness of FedSBA are demonstrated by its superior performance compared to previous works.
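The defensive idea mentioned above, adaptive weight aggregation, can be sketched in a few lines: down-weight clients whose updates sit far from the coordinate-wise median before averaging. The weighting rule below (inverse distance to the median) is an illustrative stand-in, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Nine benign client updates plus one large outlier (e.g., a backdoor update).
updates = [rng.normal(0.0, 0.1, 8) for _ in range(9)]
updates.append(np.full(8, 10.0))

U = np.stack(updates)
median = np.median(U, axis=0)
dist = np.linalg.norm(U - median, axis=1)

# Inverse-distance weights, normalised to sum to 1: far-from-median clients
# contribute almost nothing to the aggregate.
w = 1.0 / (dist + 1e-6)
w /= w.sum()

robust_agg = w @ U          # adaptive weighted average
naive_agg = U.mean(axis=0)  # plain FedAvg for comparison
```

With plain averaging, the single outlier shifts every coordinate of the aggregate by roughly one unit, while the adaptive weighting keeps the aggregate close to the benign consensus.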
ISBN (digital): 9798331522360
ISBN (print): 9798331522377
Voice Liveness Detection (VLD) aims to protect speaker authentication from speech spoofing by determining whether speech comes from live speakers or loudspeakers. Previous methods mainly focus on their differences at the signal level. In this paper, we propose the first VLD method that uses the human auditory feedback mechanism (i.e., the Lombard effect), called Lombard-VLD. The key idea is that live speakers physiologically and involuntarily adjust their speaking patterns in a noisy background, whereas loudspeakers cannot. Moreover, we design a reference-based dual-input mode and a differential SE-ResBlock to model the acoustic differences caused by the Lombard effect. Experimental results show that Lombard-VLD achieves 0% and 0.24% EER on two datasets, outperforming the state-of-the-art methods. It is robust to various environmental factors, including different distances, speaker postures, and environmental noise, with an average accuracy above 98.51%. It also generalizes well to unseen speakers, genders, and datasets, with EERs lower than 2.68%, 3.44%, and 7.32%, respectively. This work shows the advantages of the Lombard effect in VLD, offering fewer user limitations and better detection performance.
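For background on the SE-ResBlock family mentioned above, here is the standard squeeze-and-excitation (SE) channel gating operation on a (channels, time) feature map. Shapes and the reduction ratio are illustrative; the differential variant used in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_gate(x, w1, w2):
    """x: (channels, time) feature map -> channel-rescaled feature map."""
    s = x.mean(axis=1)                            # squeeze: global average pool per channel
    e = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))     # excitation: FC -> ReLU -> FC -> sigmoid
    return x * e[:, None]                         # rescale each channel by its gate in (0, 1)

C, T, r = 8, 16, 2                                # channels, frames, reduction ratio
x = rng.normal(size=(C, T))
w1 = rng.normal(scale=0.5, size=(C // r, C))      # squeeze projection
w2 = rng.normal(scale=0.5, size=(C, C // r))      # excitation projection
y = se_gate(x, w1, w2)
```

Because every gate lies in (0, 1), SE can only attenuate channels, letting the network emphasize the channels most informative for the task, here presumably Lombard-induced spectral changes.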
ISBN (digital): 9798350341058
ISBN (print): 9798350341065
Over the past decade, various methods for detecting side-channel leakage have been proposed and proven effective against CPU side-channel attacks. These methods are valuable in helping developers identify and patch side-channel vulnerabilities. Nevertheless, recent research has revealed the feasibility of exploiting side-channel vulnerabilities to steal sensitive information from GPU applications, which are beyond the reach of previous side-channel detection methods. In this paper, we therefore conduct an in-depth examination of various GPU features and present Owl, a novel side-channel detection tool targeting CUDA applications on NVIDIA GPUs. Owl is designed to detect and locate side-channel leakage in various types of CUDA applications. To track the execution of CUDA applications, we design a hierarchical tracing scheme and extend the A-DCFG (Attributed Dynamic Control Flow Graph) to handle the massively parallel execution in CUDA, ensuring Owl's detection scalability. After an initial assessment and filtering, we conduct statistical tests on the differences between program traces to determine whether they are indeed caused by input variations, which then facilitates locating the side-channel leaks. We evaluate Owl's capability to detect side-channel leaks on Libgpucrypto, PyTorch, and nvJPEG, and verify that our solution effectively handles large numbers of threads. Owl has successfully identified hundreds of leaks within these applications. To the best of our knowledge, we are the first to implement side-channel leakage detection for general CUDA applications.
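The statistical step described above, testing whether trace differences are input-dependent, can be sketched with Welch's t-test on synthetic per-input trace statistics (the TVLA-style threshold of 4.5 is a common convention in leakage assessment; the traces and counts below are made up for illustration).

```python
import numpy as np

rng = np.random.default_rng(3)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

# Hypothetical per-run counts of a trace feature (e.g. taken branches):
# input class B triggers a secret-dependent extra loop, class C does not.
trace_a = rng.normal(100.0, 2.0, 50)   # input class A
trace_b = rng.normal(110.0, 2.0, 50)   # input class B: secret-dependent extra work
trace_c = rng.normal(100.0, 2.0, 50)   # input class C: secret-independent control

t_leaky = abs(welch_t(trace_a, trace_b))   # large -> traces depend on the input
t_clean = abs(welch_t(trace_a, trace_c))   # small -> difference is just noise
```

A statistic above the conventional 4.5 threshold flags the trace location as a candidate leak worth localizing; the control pair stays well below it.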
In this work, we consider a general consistent and conservative phase-field model for incompressible two-phase flows. In this model, either the Cahn-Hilliard or the Allen-Cahn equation can be adopted, and the mass and momentum fluxes in the Navier-Stokes equations are reformulated such that the consistency of reduction, the consistency of mass and momentum transport, and the consistency of mass conservation are satisfied. We further develop a lattice Boltzmann (LB) method and show, through a direct Taylor expansion, that the present LB method correctly recovers the consistent and conservative phase-field model. Additionally, if the divergence of the extra momentum flux is treated as a force term, the extra force in the present LB method includes another term that has not been considered in previous LB methods. To quantitatively evaluate the incompressibility and the consistency of mass conservation, two statistical variables are introduced in the study of the deformation of a square droplet, and the results show that the present LB method is more accurate. The layered Poiseuille flow and a droplet spreading on an ideal wall are further investigated, and the numerical results are in good agreement with the analytical solutions. Finally, the Rayleigh-Taylor instability, a single rising bubble, and dam break problems with high Reynolds numbers and/or large density ratios are studied, and the present consistent and conservative LB method is found to be robust for such complex two-phase flows.
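As background for the conservation properties discussed above, the standard D2Q9 lattice-Boltzmann equilibrium reproduces the density and momentum exactly through its zeroth and first moments; conservation holds by construction. The sketch below checks this numerically for the textbook equilibrium (the paper's reformulated fluxes are not reproduced here).

```python
import numpy as np

# D2Q9 lattice weights and discrete velocities.
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])

def feq(rho, u):
    """Second-order D2Q9 equilibrium distribution."""
    eu = e @ u                                    # e_i . u for each direction
    return w * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*(u @ u))

rho, u = 1.2, np.array([0.05, -0.02])
f = feq(rho, u)

rho_moment = f.sum()    # zeroth moment: sum_i f_i^eq = rho
mom_moment = f @ e      # first moment:  sum_i f_i^eq e_i = rho * u
```

These identities follow from the lattice isotropy conditions (the weighted sums of odd powers of e_i vanish, and the second moment of the weights is isotropic), which is why mass and momentum conservation in LB methods reduces to choosing consistent fluxes rather than enforcing conservation separately.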
As deep neural networks (DNNs) are widely applied in the physical world, much research has focused on physical-world adversarial examples (PAEs), which introduce perturbations to inputs and cause the model's in...
In this paper, we develop a coupled diffuse-interface lattice Boltzmann method (DI-LBM) to study the transport of a charged particle in Poiseuille flow, which is governed by the Navier-Stokes equations for the fluid field and the Poisson-Boltzmann equation for the electric potential field. We first validate the present DI-LBM on some classical benchmark problems, and then investigate the effect of an electric field on the lateral migration of the particle in Poiseuille flow. The numerical results show that the electric field has a significant influence on the particle migration. When an electric field in the vertical direction is applied to a charged particle initially located above the centerline of the channel, the equilibrium position of the particle drops suddenly once the electric field exceeds a critical value. This results from the interplay of the wall repulsion due to lubrication, the inertial lift related to shear slip, the lift owing to particle rotation, the lift due to the curvature of the undisturbed velocity profile, and the electric force. On the other hand, when an electric field in the horizontal direction is applied, the equilibrium position of the particle moves toward the centerline of the channel as the electric field increases.
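The qualitative mechanism, a lateral equilibrium set by competing lift contributions that an electric force can shift, can be illustrated with a deliberately crude 1-D force balance. The `lift` profile below is a hypothetical polynomial with a stable equilibrium at y = 0.6 (half-channel coordinates, walls at ±1), not the paper's hydrodynamics; the electric force is modeled as a constant.

```python
def lift(y):
    # Heuristic lateral lift: zero at y = 0 (unstable) and y = 0.6 (stable),
    # strongly repulsive near the wall at y = 1. Purely illustrative.
    return y * (0.36 - y**2) * (1 - y**2)

def equilibrium(F_E, lo=0.05, hi=0.95, steps=60):
    """Bisect lift(y) + F_E = 0 on (lo, hi) for the upper equilibrium position."""
    f = lambda y: lift(y) + F_E
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

y_no_field = equilibrium(0.0)     # Segre-Silberberg-like position at y = 0.6
y_field = equilibrium(-0.01)      # a downward electric force lowers the equilibrium
```

Even in this toy balance, adding a constant downward force moves the zero crossing of the net lateral force below the field-free equilibrium, mirroring the drop in equilibrium position reported above.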