User-generated content is produced daily on social media; as such, user interest summarization is critical for distilling salient information from this massive stream. While the messages of interest (e.g., tags or posts) fr...
RGB-Thermal Salient Object Detection (RGBT-SOD) plays a critical role in complex scene recognition applications, such as autonomous driving. However, security research in this domain is still in its infancy. This pape...
Knowledge hypergraphs generalize knowledge graphs using hyperedges to connect multiple entities and depict complicated relations. Existing methods either transform hyperedges into an easier-to-handle set of binary rel...
ISBN (digital): 9798350359312
ISBN (print): 9798350359329
Integrating personalization into federated learning is crucial for addressing data heterogeneity and surpassing the limitations of a single aggregated model. Personalized federated learning excels at capturing inter-client similarities and meeting diverse client needs through custom-made models. However, even with personalized approaches, it’s essential to aggregate knowledge among clients to ensure universal benefits. This paper proposes Federated Dual Objectives and Dual Models (FedDodm), a novel approach that employs two independent models to separately address explicit personalization and implicit generalization objectives in personalized federated learning. By treating these objectives as distinct loss functions and training models accordingly, we achieve a balance between the two through a fusion method. Extensive experiments across various models and learning tasks demonstrate that FedDodm outperforms state-of-the-art federated learning approaches, marking a significant advancement in effectively integrating personalized and generalized knowledge.
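The dual-model idea can be sketched minimally: keep one model trained against the personalization loss and one against the generalization loss, then fuse their parameters. The convex-combination rule and the weight `alpha` below are illustrative assumptions, not necessarily FedDodm's actual fusion method.

```python
def fuse_parameters(personal, general, alpha=0.5):
    """Fuse two parameter dictionaries by convex combination:
    fused[k] = alpha * personal[k] + (1 - alpha) * general[k].

    `alpha` is a hypothetical fusion weight balancing the explicit
    personalization objective against the implicit generalization one.
    """
    if personal.keys() != general.keys():
        raise ValueError("models must share the same parameter names")
    return {k: alpha * personal[k] + (1.0 - alpha) * general[k]
            for k in personal}

# Toy example: two scalar-parameter "models" trained on different objectives.
personal_model = {"w": 2.0, "b": -1.0}   # fits this client's local data
general_model = {"w": 0.5, "b": 0.5}     # aggregated across all clients
fused = fuse_parameters(personal_model, general_model, alpha=0.7)
```

In a full training loop, each client would update both models locally and apply a fusion rule like this before (or after) communicating with the server.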
Node Importance Estimation (NIE) is a task that quantifies the importance of nodes in a graph. Recent research has investigated how to exploit various information from Knowledge Graphs (KGs) to estimate node importance sco...
The nonreciprocity of energy transfer is constructed in a nonlinear asymmetric oscillator system that comprises two nonlinear oscillators with different parameters placed between two identical linear oscillators. The slow-flow equation of the system is derived by the complexification-averaging method. The semi-analytical solutions to this equation are obtained by the least squares method, which are compared with the numerical solutions obtained by the Runge-Kutta method. The distribution of the average energy in the system is studied under periodic and chaotic vibration states, and the energy transfer along two opposite directions is compared. The effect of the excitation amplitude on the nonreciprocity of the system producing the periodic responses is analyzed, where a three-stage energy transfer phenomenon is observed. In the first stage, the energy transfer along the two opposite directions is approximately equal, whereas in the second stage, the asymmetric energy transfer is observed. The energy transfer is also asymmetric in the third stage, but the direction is reversed compared with the second stage. Moreover, the excitation amplitude for exciting the bifurcation also shows an asymmetric characteristic. Chaotic vibrations are generated around the resonant frequency, irrespective of which linear oscillator is excited. The excitation threshold of these chaotic vibrations is dependent on the linear oscillator that is being excited. In addition, the difference between the energy transfer in the two opposite directions is used to further analyze the nonreciprocity in the system. The results show that the nonreciprocity significantly depends on the excitation frequency and the excitation amplitude.
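A toy numerical experiment conveys how the Runge-Kutta method can probe nonreciprocity: integrate a system whose on-site nonlinearity is asymmetric, drive each end in turn, and compare the average energy reaching the far mass. This is a deliberately reduced sketch (two masses instead of the paper's four-oscillator chain, illustrative parameter values), not the model studied above.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def receiver_energy(drive_left, amp=0.5, omega=1.0, t_end=200.0, h=0.02):
    """Average energy of the undriven mass over the second half of the run.

    Two unit masses, linear on-site stiffness 1, damping 0.1, linear
    coupling 0.3; mass 2 additionally carries a cubic on-site spring,
    which breaks the left/right symmetry. All values are illustrative.
    """
    c, kc, knl = 0.1, 0.3, 1.0

    def f(t, s):
        x1, v1, x2, v2 = s
        f1 = amp * math.cos(omega * t) if drive_left else 0.0
        f2 = 0.0 if drive_left else amp * math.cos(omega * t)
        a1 = -x1 - c * v1 - kc * (x1 - x2) + f1
        a2 = -x2 - c * v2 - kc * (x2 - x1) - knl * x2 ** 3 + f2
        return [v1, a1, v2, a2]

    s, t, n = [0.0, 0.0, 0.0, 0.0], 0.0, int(t_end / h)
    energies = []
    for i in range(n):
        s = rk4_step(f, t, s, h)
        t += h
        if i >= n // 2:  # discard the initial transient
            x, v = (s[2], s[3]) if drive_left else (s[0], s[1])
            energies.append(0.5 * v * v + 0.5 * x * x)
    return sum(energies) / len(energies)
```

Driving the left end and the right end then yields different receiver energies, a minimal signature of the directional asymmetry discussed above.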
ISBN (digital): 9798350331301
ISBN (print): 9798350331318
With the evolution of self-supervised learning, the pre-training paradigm has emerged as a predominant solution within the deep learning landscape. Model providers furnish pre-trained encoders designed to function as versatile feature extractors, enabling downstream users to harness the benefits of expansive models with minimal effort through fine-tuning. Nevertheless, recent works have exposed a vulnerability in pre-trained encoders, highlighting their susceptibility to downstream-agnostic adversarial examples (DAEs) meticulously crafted by attackers. The lingering question pertains to the feasibility of fortifying the robustness of downstream models against DAEs, particularly in scenarios where the pre-trained encoders are publicly accessible to the attackers. In this paper, we initially delve into existing defensive mechanisms against adversarial examples within the pre-training paradigm. Our findings reveal that the failure of current defenses stems from the domain shift between pre-training data and downstream tasks, as well as the sensitivity of encoder parameters. In response to these challenges, we propose Genetic Evolution-Nurtured Adversarial Fine-tuning (Gen-AF), a two-stage adversarial fine-tuning approach aimed at enhancing the robustness of downstream models. Gen-AF employs a genetic-directed dual-track adversarial fine-tuning strategy in its first stage to effectively inherit the pre-trained encoder. This involves optimizing the pre-trained encoder and classifier separately while incorporating genetic regularization to preserve the model’s topology. In the second stage, Gen-AF assesses the robust sensitivity of each layer and creates a dictionary, based on which the top-k robust redundant layers are selected with the remaining layers held fixed. Upon this foundation, we conduct evolutionary adaptability fine-tuning to further enhance the model’s generalizability. Our extensive experiments, conducted across ten self-supervised training methods and six d...
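The second-stage layer selection can be illustrated with a small helper: given a per-layer score, pick the top-k layers to fine-tune and freeze the rest. The layer names and scores below are hypothetical; Gen-AF's actual robust-sensitivity measure is defined in the paper.

```python
def select_layers_to_finetune(sensitivity, k):
    """Return (trainable, frozen) layer-name lists.

    `sensitivity` maps layer name -> score (higher = more robustly
    redundant, in this sketch). The k highest-scored layers are
    fine-tuned; all remaining layers stay fixed.
    """
    ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)
    return ranked[:k], ranked[k:]

# Hypothetical per-layer scores for a small encoder.
scores = {"conv1": 0.12, "conv2": 0.45, "conv3": 0.31, "fc": 0.08}
trainable, frozen = select_layers_to_finetune(scores, k=2)
```

In a real pipeline, the frozen list would have gradients disabled before the evolutionary fine-tuning phase runs on the trainable layers.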
ISBN (digital): 9798350331301
ISBN (print): 9798350331318
Adversarial examples for deep neural networks (DNNs) are transferable: examples that successfully fool one white-box surrogate model can also deceive other black-box models with different architectures. Although a bunch of empirical studies have provided guidance on generating highly transferable adversarial examples, many of these findings fail to be well explained and even lead to confusing or inconsistent advice for practical applications. In this paper, we take a further step towards understanding adversarial transferability, with a particular focus on surrogate aspects. Starting from the intriguing "little robustness" phenomenon, where models adversarially trained with mildly perturbed adversarial samples can serve as better surrogates for transfer attacks, we attribute it to a trade-off between two dominant factors: model smoothness and gradient similarity. Our research focuses on their joint effects on transferability, rather than demonstrating the separate relationships alone. Through a combination of theoretical and empirical analyses, we hypothesize that the data distribution shift induced by off-manifold samples in adversarial training is the reason that impairs gradient similarity. Based on these insights, we further explore the impacts of prevalent data augmentation and gradient regularization on transferability and analyze how the trade-off manifests in various training methods, thus building a comprehensive blueprint for the regulation mechanisms behind transferability. Finally, we provide a general route for constructing superior surrogates to boost transferability, which optimizes both model smoothness and gradient similarity simultaneously, e.g., the combination of input gradient regularization and sharpness-aware minimization (SAM), validated by extensive experiments. In summary, we call for attention to the united impacts of these two factors for launching effective transfer attacks, rather than optimizing one while ignoring the other, and emphasize the...
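As a concrete instance of one half of the suggested recipe, here is a minimal sharpness-aware minimization (SAM) step in pure Python: perturb the parameters along the normalized gradient by a radius `rho`, then descend using the gradient taken at the perturbed point. Pairing it with input gradient regularization, as the text suggests, would add a penalty on the input-gradient norm to the training loss; the toy quadratic objective here is an assumption for illustration only.

```python
import math

def sam_step(params, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization step.

    1) epsilon = rho * g / ||g||   (ascend towards the locally sharpest point)
    2) descend with the gradient evaluated at params + epsilon
    """
    g = grad_fn(params)
    norm = math.sqrt(sum(gi * gi for gi in g)) + 1e-12
    eps = [rho * gi / norm for gi in g]
    g_adv = grad_fn([p + e for p, e in zip(params, eps)])
    return [p - lr * ga for p, ga in zip(params, g_adv)]

# Toy smooth objective f(p) = sum(p_i^2), with gradient 2p.
grad = lambda p: [2.0 * pi for pi in p]
p = [1.0, -2.0]
for _ in range(60):
    p = sam_step(p, grad)
```

The minimizer it converges towards is flat by construction, which is exactly the smoothness property the text links to better surrogates.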
Federated learning (FL) and split learning (SL) are prevailing distributed paradigms in recent years. They both enable shared global model training while keeping data localized on users’ devices. The former excels in parallel execution capabilities, while the latter enjoys low dependence on edge computing resources and strong privacy protection. Split federated learning (SFL) combines the strengths of both FL and SL, making it one of the most popular distributed architectures. Furthermore, a recent study has claimed that SFL exhibits robustness against poisoning attacks, with a fivefold improvement compared to FL. In this paper, we present a novel poisoning attack known as MISA. It poisons both the top and bottom models, causing a misalignment in the global model, ultimately leading to a drastic accuracy collapse. This attack unveils the vulnerabilities in SFL, challenging the conventional belief that SFL is robust against poisoning attacks. Extensive experiments demonstrate that our proposed MISA poses a significant threat to the availability of SFL, underscoring the imperative for academia and industry to accord this matter due attention.
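For readers unfamiliar with the split federated setting, a minimal sketch of the architecture under attack: each client runs a bottom model and sends its intermediate activation ("smashed data") to the server, which completes the forward pass with a top model. MISA's misalignment targets both halves at once; only the plain SFL data flow, with hypothetical linear layers, is sketched here.

```python
def bottom_forward(x, w_bottom):
    """Client side: map the raw input to an intermediate activation."""
    return [w * x for w in w_bottom]

def top_forward(h, w_top):
    """Server side: finish the forward pass on the smashed data."""
    return sum(w * hi for w, hi in zip(w_top, h))

def sfl_predict(x, w_bottom, w_top):
    """Composed prediction: the global model only behaves correctly when
    the two halves stay aligned, which is what MISA disrupts."""
    return top_forward(bottom_forward(x, w_bottom), w_top)

# Toy instance: the composed model computes (1*0.5 + 2*0.25) * x = x.
y = sfl_predict(3.0, [1.0, 2.0], [0.5, 0.25])
```

Because the prediction multiplies contributions from both halves, a coordinated perturbation of bottom and top parameters can corrupt the composition even when each half looks individually plausible.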
With the marriage of federated machine learning and recommender systems for privacy-aware preference modeling and personalization, a new research branch called federated recommender systems has emerged, aiming to build...