ISBN (print): 9798331314385
Segment Anything Model (SAM) has recently gained much attention for its outstanding generalization to unseen data and tasks. Despite its promising prospects, the vulnerabilities of SAM, especially to universal adversarial perturbations (UAPs), have not been thoroughly investigated. In this paper, we propose DarkSAM, the first prompt-free universal attack framework against SAM, comprising a semantic decoupling-based spatial attack and a texture distortion-based frequency attack. We first divide the output of SAM into foreground and background. Then, we design a shadow target strategy to obtain the semantic blueprint of the image as the attack target. DarkSAM fools SAM by extracting and destroying crucial object features from images in both the spatial and frequency domains. In the spatial domain, we disrupt the semantics of both the foreground and background of the image to confuse SAM. In the frequency domain, we further enhance the attack effectiveness by distorting the high-frequency components (i.e., texture information) of the image. Consequently, with a single UAP, DarkSAM renders SAM incapable of segmenting objects across diverse images with varying prompts. Experimental results on four datasets for SAM and two of its variant models demonstrate the powerful attack capability and transferability of DarkSAM. Our code is available at: https://***/CGCL-codes/DarkSAM.
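To make the frequency-attack idea concrete, here is a minimal sketch of distorting an image's high-frequency (texture) band with a universal perturbation. The DCT-based masking, the `cutoff` fraction, and the function name are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: frequency-domain texture distortion in the spirit of
# DarkSAM's frequency attack (illustrative, not the paper's code).
import numpy as np
from scipy.fft import dctn, idctn

def distort_high_frequency(image: np.ndarray, uap: np.ndarray,
                           cutoff: float = 0.25) -> np.ndarray:
    """Add a universal perturbation only to the high-frequency DCT band.

    image, uap: (H, W) grayscale arrays in [0, 1]; `cutoff` is the
    fraction of low-frequency coefficients left untouched (assumed).
    """
    h, w = image.shape
    coeffs = dctn(image, norm="ortho")
    pert = dctn(uap, norm="ortho")
    # Zero out the low-frequency block of the perturbation so only
    # texture (high-frequency) content is distorted.
    kh, kw = int(h * cutoff), int(w * cutoff)
    pert[:kh, :kw] = 0.0
    return np.clip(idctn(coeffs + pert, norm="ortho"), 0.0, 1.0)
```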
With the evolution of self-supervised learning, the pre-training paradigm has emerged as a predominant solution within the deep learning landscape. Model providers furnish pre-trained encoders designed to function as versatile feature extractors, enabling downstream users to harness the benefits of expansive models with minimal effort through fine-tuning. Nevertheless, recent works have exposed a vulnerability in pre-trained encoders, highlighting their susceptibility to downstream-agnostic adversarial examples (DAEs) meticulously crafted by attackers. The lingering question is whether the robustness of downstream models against DAEs can be fortified, particularly in scenarios where the pre-trained encoders are publicly accessible to attackers. In this paper, we first examine existing defensive mechanisms against adversarial examples within the pre-training paradigm. Our findings reveal that the failure of current defenses stems from the domain shift between pre-training data and downstream tasks, as well as the sensitivity of encoder parameters. In response to these challenges, we propose Genetic Evolution-Nurtured Adversarial Fine-tuning (Gen-AF), a two-stage adversarial fine-tuning approach aimed at enhancing the robustness of downstream models. In its first stage, Gen-AF employs a genetic-directed dual-track adversarial fine-tuning strategy to effectively inherit the pre-trained encoder: the pre-trained encoder and the classifier are optimized separately, with genetic regularization incorporated to preserve the model's topology. In the second stage, Gen-AF assesses the robust sensitivity of each layer and creates a dictionary, based on which the top-k robust redundant layers are selected and the remaining layers held fixed. On this foundation, we conduct evolutionary adaptability fine-tuning to further enhance the model's generalizability. Our extensive experiments, conducted across ten self-supervised training methods and six datasets, demonstrate the effectiveness of Gen-AF.
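The second-stage layer selection can be sketched as follows. The sensitivity proxy, the assumption that low scores mark "robust redundant" layers, and the function name are all illustrative; the paper's exact metric may differ.

```python
# Hedged sketch of Gen-AF's second stage as described in the abstract:
# rank layers by robust sensitivity, keep the top-k robust redundant
# layers trainable, and freeze the rest (illustrative assumptions).
import torch.nn as nn

def select_topk_layers(model: nn.Module, sensitivity: dict, k: int):
    """sensitivity maps layer name -> robust-sensitivity score."""
    ranked = sorted(sensitivity, key=sensitivity.get)  # assumed: low score = redundant
    trainable = set(ranked[:k])                        # top-k robust redundant layers
    for name, module in model.named_modules():
        if not list(module.children()):                # leaf layers only
            requires_grad = name in trainable
            for p in module.parameters(recurse=False):
                p.requires_grad = requires_grad        # freeze all other layers
    return trainable
```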
Container-based microservices have been widely applied to promote cloud elasticity. The mainstream Docker containers are structured in layers, which are organized in a stack with bottom-up dependencies. To start a microservice, the required layers are pulled from a remote registry and stored on its host server, following the layer dependency order. This incurs long microservice startup times and hinders performance efficiency. In this paper, we discover, for the first time, that the layer pulling order can be adjusted to accelerate microservice startup. Specifically, we address the problem of microservice layer pulling orchestration for startup time minimization and prove it NP-hard. We propose a Longest-chain based Out-of-order layer Pulling Orchestration (LOPO) strategy with low computational complexity and a guaranteed approximation ratio. Through extensive real-world trace-driven experiments, we verify the efficiency of LOPO and demonstrate that it reduces microservice startup time by 22.71% on average compared with state-of-the-art solutions.
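The longest-chain intuition can be illustrated with a simple greedy: among layers still to pull, prefer the one that heads the longest remaining dependency chain, so the most constrained microservice makes progress first. This is a sketch of the idea only, not the paper's orchestration algorithm or its approximation analysis.

```python
# Hedged sketch of longest-chain-first layer pulling (illustrative).
def lopo_pull_order(services: dict[str, list[str]]) -> list[str]:
    """services maps a microservice to its bottom-up layer chain."""
    remaining = {s: list(chain) for s, chain in services.items()}
    pulled, order = set(), []
    while any(remaining.values()):
        # The longest remaining chain determines the next layer to pull.
        longest = max(remaining, key=lambda s: len(remaining[s]))
        layer = remaining[longest][0]  # bottom-most unpulled layer
        pulled.add(layer)
        order.append(layer)
        for chain in remaining.values():
            while chain and chain[0] in pulled:
                chain.pop(0)           # dependency satisfied, advance the chain
    return order

# Example: two services sharing a base layer.
print(lopo_pull_order({"svc-a": ["base", "runtime", "app-a"],
                       "svc-b": ["base", "app-b"]}))
```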
Deep neural networks (DNNs) have been widely adopted for various mobile inference tasks, yet their ever-increasing computational demands are hindering their deployment on resource-constrained mobile devices. Hybrid de...
ISBN (print): 9798331314385
Traditional unlearnable strategies have been proposed to prevent unauthorized users from training on 2D image data. With more 3D point cloud data containing sensitive information, unauthorized usage of this new type of data has also become a serious concern. To address this, we propose the first integral unlearnable framework for 3D point clouds, comprising two processes: (i) an unlearnable data protection scheme, involving a class-wise setting established by a category-adaptive allocation strategy and multi-transformations assigned to samples; and (ii) a data restoration scheme that utilizes class-wise inverse matrix transformation, thus enabling authorized-only training on unlearnable data. This restoration process addresses a practical issue overlooked in most existing unlearnable literature, i.e., that even authorized users struggle to gain knowledge from 3D unlearnable data. Both theoretical and empirical results (covering 6 datasets, 16 models, and 2 tasks) demonstrate the effectiveness of our proposed unlearnable framework. Our code is available at https://***/CGCL-codes/UnlearnablePC.
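A minimal sketch of the class-wise transform/restore pair: each class is assigned an invertible matrix; protection multiplies every point by it, and authorized restoration applies the inverse. Using random orthogonal matrices here is an assumption for illustration; the paper's multi-transformation design is richer.

```python
# Hedged sketch of class-wise invertible point-cloud transforms
# (illustrative, not the paper's exact scheme).
import numpy as np

def make_class_transforms(num_classes: int, seed: int = 0) -> dict:
    rng = np.random.default_rng(seed)
    transforms = {}
    for c in range(num_classes):
        # QR decomposition yields an orthogonal (hence invertible) matrix.
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        transforms[c] = q
    return transforms

def protect(points: np.ndarray, label: int, transforms: dict) -> np.ndarray:
    return points @ transforms[label].T                  # (N, 3) unlearnable cloud

def restore(points: np.ndarray, label: int, transforms: dict) -> np.ndarray:
    return points @ np.linalg.inv(transforms[label]).T   # authorized recovery
```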
The Go language (Go/Golang) has been attracting increasing attention from industry over recent years due to its strong concurrency support and ease of deployment. The language encourages developers to use channel-based concurrency, which simplifies the development of concurrent programs. Unfortunately, it also introduces new concurrency problems that differ from those caused by shared-memory concurrency. However, only a few works aim to detect such Go-specific concurrency issues, and even state-of-the-art testing tools miss critical concurrency bugs that require fine-grained and effective interleaving exploration. This paper presents GoPie, a novel testing approach for detecting Go concurrency bugs through primitive-constrained interleaving exploration. GoPie utilizes execution histories to identify new interleavings instead of relying on exhaustive exploration or random scheduling. To evaluate its performance, we applied GoPie to existing benchmarks and large-scale open-source projects. Results show that GoPie can effectively explore concurrent interleavings and detect significantly more bugs in the benchmark. Furthermore, it uncovered 11 unique, previously unknown concurrency bugs, 9 of which have been confirmed.
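To convey the exploration idea, here is a minimal sketch of primitive-constrained mutation over an observed execution history, recorded as (goroutine, primitive) events. The event format and the mutation rule are illustrative assumptions, not GoPie's actual implementation.

```python
# Hedged sketch: derive new candidate interleavings from an execution
# history by swapping adjacent events that contend on the same
# concurrency primitive, instead of enumerating all schedules.
def mutate_interleavings(history: list[tuple[str, str]]) -> list[list[tuple[str, str]]]:
    """history is a list of (goroutine, primitive) events."""
    candidates = []
    for i in range(len(history) - 1):
        (g1, p1), (g2, p2) = history[i], history[i + 1]
        # Only reorder events from different goroutines touching the
        # same primitive; other swaps cannot change the outcome.
        if g1 != g2 and p1 == p2:
            mutated = history.copy()
            mutated[i], mutated[i + 1] = mutated[i + 1], mutated[i]
            candidates.append(mutated)
    return candidates

# Example: two goroutines racing on channel "ch".
print(mutate_interleavings([("g1", "ch"), ("g2", "ch"), ("g2", "mu")]))
```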
The emerging Graph Convolutional Network (GCN) has been widely used in many domains, where it is important to improve the efficiencies of applications by accelerating GCN trainings. Due to the sparsity nature and expl...
Federated learning (FL) enables massive clients to collaboratively train a global model by aggregating their local updates without disclosing raw data. Communication has become one of the main bottlenecks that prolong the training process, especially under the large model variance caused by skewed data distributions. Existing efforts mainly focus on either single momentum-based gradient descent or random client selection for potential variance reduction, yet both often lead to poor model accuracy and system efficiency. In this paper, we propose FedMoS, a communication-efficient FL framework with coupled double momentum-based updates and adaptive client selection, to jointly mitigate the intrinsic variance. Specifically, FedMoS maintains customized momentum buffers on both the server and client sides, which track the global and local update directions to alleviate model discrepancy. Taking the momentum results as input, we design an adaptive selection scheme to provide a proper client representation during FL aggregation. By optimally calibrating clients' selection probabilities, we can effectively reduce the sampling variance while ensuring unbiased aggregation. Through a rigorous analysis, we show that FedMoS attains the theoretically optimal $\mathcal{O}(T^{-2/3})$ convergence rate. Extensive experiments on real-world datasets further validate the superiority of FedMoS, which achieves 58%-87% communication reduction for reaching the same target performance compared with state-of-the-art techniques.
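The double-momentum structure can be sketched as follows: clients carry a local momentum buffer across rounds, and the server keeps a global one over aggregated deltas. The coefficients, update forms, and coupling shown here are assumptions for illustration, not the paper's exact update rules.

```python
# Hedged sketch of coupled client/server momentum in the spirit of
# FedMoS (illustrative coefficients and update forms).
import numpy as np

def client_update(w, grad_fn, local_m, lr=0.1, beta=0.9, steps=5):
    """Local SGD with a persistent client-side momentum buffer."""
    for _ in range(steps):
        local_m = beta * local_m + (1 - beta) * grad_fn(w)  # client momentum
        w = w - lr * local_m
    return w, local_m

def server_update(w_global, client_deltas, global_m, gamma=0.9):
    """Aggregate client deltas (w_new - w_old) through server momentum."""
    avg_delta = np.mean(client_deltas, axis=0)   # unbiased aggregation
    global_m = gamma * global_m + avg_delta      # server momentum
    return w_global + global_m, global_m
```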
ISBN (print): 9781939133458
Approximate nearest neighbor search (ANNS) has emerged as a crucial component of database and AI infrastructure. Ever-increasing vector datasets pose significant challenges in terms of performance, cost, and accuracy for ANNS services, and no modern ANNS system addresses these issues simultaneously. In this paper, we present FusionANNS, a high-throughput, low-latency, cost-efficient, and high-accuracy ANNS system for billion-scale datasets using SSDs and only one entry-level GPU. The key idea of FusionANNS lies in CPU/GPU collaborative filtering and re-ranking mechanisms, which significantly reduce I/O operations across CPUs, GPU, and SSDs to break through the I/O performance bottleneck. Specifically, we propose three novel designs: (1) multi-tiered indexing to avoid data swapping between CPUs and GPU, (2) heuristic re-ranking to eliminate unnecessary I/Os and computations while guaranteeing high accuracy, and (3) redundancy-aware I/O deduplication to further improve I/O efficiency. We implement FusionANNS and compare it with the state-of-the-art SSD-based ANNS system SPANN and the GPU-accelerated in-memory ANNS system RUMMY. Experimental results show that FusionANNS achieves (1) 9.4-13.1× higher queries per second (QPS) and 5.7-8.8× higher cost efficiency than SPANN, and (2) 2-4.9× higher QPS and 2.3-6.8× higher cost efficiency than RUMMY, while guaranteeing low latency and high accuracy.
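The heuristic re-ranking idea can be sketched as follows: candidates arrive ranked by cheap approximate distances, exact distances are computed batch by batch, and re-ranking stops early once an extra batch no longer changes the running top-k. The batch size and stop rule here are illustrative assumptions, not the paper's exact heuristic.

```python
# Hedged sketch of early-stopping re-ranking (illustrative).
import numpy as np

def heuristic_rerank(query, vectors, candidates, k=10, batch=32):
    """candidates: ids sorted by approximate distance (best first)."""
    topk, exact = [], {}
    for start in range(0, len(candidates), batch):
        for cid in candidates[start:start + batch]:
            # Exact distance: the expensive step (SSD I/O + compute).
            exact[cid] = np.linalg.norm(vectors[cid] - query)
        new_topk = sorted(exact, key=exact.get)[:k]
        if new_topk == topk:       # ranking stabilized: skip remaining I/O
            break
        topk = new_topk
    return topk
```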
Vulnerability detection is crucial for ensuring the security and reliability of software systems. Recently, Graph Neural Networks (GNNs) have emerged as a prominent code embedding approach for vulnerability detection,...