Federated learning (FL) and split learning (SL) have become prevailing distributed learning paradigms in recent years. Both enable training a shared global model while keeping data localized on users' devices. The former excels in parallel execution, while the latter depends less on edge computing resources and offers stronger privacy protection. Split federated learning (SFL) combines the strengths of both FL and SL, making it one of the most popular distributed architectures. Furthermore, a recent study has claimed that SFL is robust against poisoning attacks, with a fivefold improvement in robustness compared to FL. In this paper, we present a novel poisoning attack known as MISA. It poisons both the top and bottom models, causing a misalignment in the global model and ultimately leading to a drastic accuracy collapse. This attack unveils the vulnerabilities in SFL, challenging the conventional belief that SFL is robust against poisoning attacks. Extensive experiments demonstrate that our proposed MISA poses a significant threat to the availability of SFL, underscoring the need for academia and industry to give this matter due attention.
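The setup the abstract relies on can be pictured with a minimal PyTorch sketch: the client holds the bottom model, the server holds the top model, and the two exchange "smashed" activations and their gradients. The `misaligned_poison` helper is a purely hypothetical illustration of a sign-flipped, amplified update that would pull the two halves apart; it is not the actual MISA procedure, and all layer sizes are placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical split of a small network into a client-side "bottom" model
# and a server-side "top" model, as in split federated learning (SFL).
bottom_model = nn.Sequential(nn.Linear(784, 256), nn.ReLU())   # runs on the client
top_model = nn.Sequential(nn.Linear(256, 10))                  # runs on the server

def benign_client_step(x, y, loss_fn=nn.CrossEntropyLoss()):
    """One benign SFL step: forward through the bottom model, send the
    'smashed' activations to the server, finish the forward/backward there,
    then backpropagate the returned gradient through the bottom model."""
    smashed = bottom_model(x)                  # client-side forward
    smashed_srv = smashed.detach().requires_grad_(True)
    loss = loss_fn(top_model(smashed_srv), y)  # server-side forward + loss
    loss.backward()                            # server-side backward (top-model grads)
    smashed.backward(smashed_srv.grad)         # gradient returned to the client
    return loss.item()

def misaligned_poison(update, scale=-5.0):
    """Hypothetical misalignment-style poisoning of a model update:
    flip and amplify the update so the bottom and top models drift apart.
    Illustration only, not the MISA attack itself."""
    return {name: scale * delta for name, delta in update.items()}
```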
Segment Anything Model (SAM) has recently gained much attention for its outstanding generalization to unseen data and tasks. Despite its promising prospect, the vulnerabilities of SAM, especially to universal adversar...
ISBN (digital): 9798350331301
ISBN (print): 9798350331318
Adversarial examples for deep neural networks (DNNs) are transferable: examples that successfully fool one white-box surrogate model can also deceive other black-box models with different architectures. Although a number of empirical studies have provided guidance on generating highly transferable adversarial examples, many of these findings are not well explained and even lead to confusing or inconsistent advice for practical attacks. In this paper, we take a further step towards understanding adversarial transferability, with a particular focus on surrogate aspects. Starting from the intriguing "little robustness" phenomenon, where models adversarially trained with mildly perturbed adversarial samples can serve as better surrogates for transfer attacks, we attribute it to a trade-off between two dominant factors: model smoothness and gradient similarity. Our research focuses on their joint effects on transferability, rather than demonstrating the separate relationships alone. Through a combination of theoretical and empirical analyses, we hypothesize that the data distribution shift induced by off-manifold samples in adversarial training is what impairs gradient similarity. Based on these insights, we further explore the impacts of prevalent data augmentation and gradient regularization on transferability and analyze how the trade-off manifests in various training methods, thus building a comprehensive blueprint for the regulation mechanisms behind transferability. Finally, we provide a general route for constructing superior surrogates to boost transferability, which optimizes both model smoothness and gradient similarity simultaneously, e.g., by combining input gradient regularization and sharpness-aware minimization (SAM), validated by extensive experiments. In summary, we call for attention to the united impacts of these two factors for launching effective transfer attacks, rather than optimizing one while ignoring the other, and emphasize the...
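The closing suggestion, combining input gradient regularization with sharpness-aware minimization (SAM) when training a surrogate, can be sketched in a few lines of PyTorch. The step below is only an illustration under assumed hyperparameters (`rho`, `lam`) and a generic classifier; it is not the paper's exact training recipe.

```python
import torch
import torch.nn as nn

def sam_igr_step(model, optimizer, x, y, rho=0.05, lam=0.1,
                 loss_fn=nn.CrossEntropyLoss()):
    """One training step coupling sharpness-aware minimization (SAM)
    with an input-gradient regularizer (IGR). Placeholder hyperparameters."""
    def regularized_loss():
        x_req = x.clone().requires_grad_(True)
        loss = loss_fn(model(x_req), y)
        # Input-gradient penalty: encourages smoothness of the loss w.r.t. inputs.
        g_x = torch.autograd.grad(loss, x_req, create_graph=True)[0]
        return loss + lam * g_x.pow(2).sum(dim=tuple(range(1, g_x.dim()))).mean()

    # SAM step 1: gradient at the current weights, then ascend to the nearby
    # "sharpest" point within an L2 ball of radius rho.
    loss = regularized_loss()
    loss.backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / grad_norm
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()

    # SAM step 2: gradient at the perturbed weights, restore, then update.
    regularized_loss().backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```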
Token dropping is a recently-proposed strategy to speed up the pretraining of masked language models, such as BERT, by skipping the computation of a subset of the input tokens at several middle layers. It can effectiv...
With the rapid development of industrial Internet services, the era of the Internet of Everything has arrived, which places higher requirements on the network security of the industrial Internet. To guarantee the safe and trustworthy flow of industrial Internet data, this paper proposes a chain network connector architecture that integrates blockchain technology with industrial Internet technology, exploiting blockchain's traceability, tamper resistance, collective maintenance, openness, and transparency to make identification data in the industrial Internet more secure and trustworthy. In addition, the architecture adopts quantum identity authentication in the identity authentication stage to reduce illegal user access caused by identity forgery. To address the low efficiency of processing the large-volume, high-concurrency data that enterprises generate during production, this paper further proposes to co-optimize the chain network connector using the analytic hierarchy process and the Hungarian algorithm, so that resource allocation is more reasonable and transmission efficiency is higher. Experimental results show that using different algorithms for task allocation in different scenarios maximizes data transmission efficiency while also greatly improving the security of industrial Internet identification data.
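A minimal sketch of the proposed co-optimization, under simplifying assumptions: AHP criteria weights are taken from the principal eigenvector of a toy pairwise-comparison matrix, and the Hungarian algorithm (SciPy's `linear_sum_assignment`) computes the task-to-node assignment that minimizes the AHP-weighted cost. All matrices, criteria, and dimensions are illustrative placeholders, not the paper's configuration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

# Toy AHP pairwise-comparison matrix over three criteria
# (e.g., latency, bandwidth, security) -- illustrative values only.
pairwise = np.array([[1.0, 3.0, 0.5],
                     [1/3, 1.0, 0.25],
                     [2.0, 4.0, 1.0]])

# Principal-eigenvector method: the normalized eigenvector of the largest
# eigenvalue yields the AHP criteria weights.
eigvals, eigvecs = np.linalg.eig(pairwise)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = w / w.sum()

# Per-criterion cost of assigning each task (rows) to each node (columns).
costs = np.random.rand(3, 4, 5)                        # (criteria, tasks, nodes), toy data
combined_cost = np.tensordot(weights, costs, axes=1)   # AHP-weighted cost matrix

# Hungarian algorithm: minimum-cost one-to-one assignment of tasks to nodes.
task_idx, node_idx = linear_sum_assignment(combined_cost)
print(list(zip(task_idx, node_idx)))
```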
Indoor positioning is a thriving research area, which is slowly gaining market momentum. Its applications are mostly customized, ad hoc installations; ubiquitous applications analogous to Global Navigation Satellite Sy...
Embodied AI represents systems where AI is integrated into physical entities. Large Language Model (LLM), which exhibits powerful language understanding abilities, has been extensively employed in embodied AI by facil...
End-to-end text spotting aims to integrate scene text detection and recognition into a unified framework. Handling the relationship between the two sub-tasks plays a pivotal role in designing effective spotters. Although Transformer-based methods eliminate heuristic post-processing, they still suffer from a synergy issue between the sub-tasks and low training efficiency. Besides, they overlook exploration of multilingual text spotting, which requires an extra script identification task. In this paper, we present DeepSolo++, a simple DETR-like baseline that lets a single Decoder with Explicit Points Solo for text detection, recognition, and script identification simultaneously. Technically, for each text instance, we represent the character sequence as ordered points and model them with learnable explicit point queries. After passing through a single decoder, the point queries have encoded the requisite text semantics and locations, and can thus be decoded into the center line, boundary, script, and confidence of the text via very simple prediction heads in parallel. Furthermore, we show the surprisingly good extensibility of our method in terms of character class, language type, and task. On the one hand, our method not only performs well in English scenes but also handles transcription of scripts with complex font structures and character sets of thousands of classes, such as Chinese. On the other hand, DeepSolo++ achieves better performance on the additionally introduced script identification task with a simpler training pipeline than previous methods. Extensive experiments on public benchmarks demonstrate that our simple approach achieves better training efficiency than Transformer-based models and outperforms the previous state of the art. For example, on ICDAR 2019 ReCTS for Chinese text, our method boosts the 1-NED metric to a new record of 78.3%. On ICDAR 2019 MLT, DeepSolo++ achieves absolute 5.5% H-mean and 8.0% AP improvements on joint detection and...
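How the "very simple prediction heads in parallel" over decoded point queries might look can be sketched as follows; the hidden size, number of points per instance, character and script vocabularies, and head designs are assumptions for illustration, not the DeepSolo++ implementation.

```python
import torch
import torch.nn as nn

class PointQueryHeads(nn.Module):
    """Sketch of parallel prediction heads over decoded point queries.
    Dimensions and head designs are illustrative assumptions only."""
    def __init__(self, d_model=256, num_chars=5000, num_scripts=10):
        super().__init__()
        self.coord_head = nn.Linear(d_model, 2)             # center-line point (x, y)
        self.boundary_head = nn.Linear(d_model, 4)           # offsets to top/bottom boundary
        self.char_head = nn.Linear(d_model, num_chars)       # character class per point
        self.script_head = nn.Linear(d_model, num_scripts)   # script id per instance
        self.conf_head = nn.Linear(d_model, 1)               # instance confidence

    def forward(self, point_queries):
        # point_queries: (batch, instances, points_per_instance, d_model)
        coords = self.coord_head(point_queries).sigmoid()
        boundary = self.boundary_head(point_queries)
        chars = self.char_head(point_queries)
        pooled = point_queries.mean(dim=2)                   # pool points per instance
        script = self.script_head(pooled)
        conf = self.conf_head(pooled).sigmoid()
        return coords, boundary, chars, script, conf

# Example: 2 images, 8 text instances, 25 points each, 256-d queries from the decoder.
outs = PointQueryHeads()(torch.randn(2, 8, 25, 256))
```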
To ensure the stability and reliability of service quality, large Internet companies need to closely monitor various KPIs (Key Performance Indicators, such as network throughput and CPU usage) and trigger timely troubleshooting or mitigation when any anomaly occurs. However, the diversity and complexity of anomalies bring great challenges to this work, especially when there are no manual labels and low delay is required. In this paper, we propose HS-VAE (Highly Sensitive Variational Auto-Encoders), an unsupervised, robust KPI anomaly detection algorithm based on CVAE (Conditional Variational Auto-Encoders) with high sensitivity to anomalies. It consists of three main parts: a simple but important data filter applied before training, an improved conditional VAE with two dropout layers, and an adjusted anomaly detection method based on reconstruction probability. Our experiments on real-world data show that HS-VAE's best F1-score ranges from 0.91 to 0.98. In addition, HS-VAE is highly sensitive to anomalies and works well even under low-latency requirements.
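A hedged sketch of the reconstruction-probability idea behind such detectors: a tiny VAE over sliding KPI windows with dropout layers, scored by a Monte-Carlo average of reconstruction error (proportional to a negative Gaussian log-likelihood). Window length, latent size, and dropout rate are placeholder assumptions, not HS-VAE's actual architecture.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal VAE over sliding KPI windows; an illustration of the
    reconstruction-probability idea, not the HS-VAE architecture."""
    def __init__(self, win=120, latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(win, 64), nn.ReLU(), nn.Dropout(0.1))
        self.mu = nn.Linear(64, latent)
        self.logvar = nn.Linear(64, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Dropout(0.1), nn.Linear(64, win))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar

def anomaly_score(model, x, n_samples=16):
    """Monte-Carlo average of squared reconstruction error per window
    (proportional to negative Gaussian log-likelihood); higher = more anomalous."""
    with torch.no_grad():
        errs = torch.stack([(model(x)[0] - x).pow(2).mean(dim=-1)
                            for _ in range(n_samples)])
    return errs.mean(dim=0)
```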
The Segment Anything Model (SAM), a profound vision foundation model pretrained on a large-scale dataset, breaks the boundaries of general segmentation and sparks various downstream applications. This paper introduces...