Multiple Birth Support Vector Machine (MBSVM) is widely used in various engineering fields due to its high learning efficiency. However, MBSVM does not consider the prior structural information of samples when construc...
Authors:
Jingyuan Xu, Weiwei Liu
School of Computer Science, Wuhan University; National Engineering Research Center for Multimedia Software, Wuhan University; Institute of Artificial Intelligence, Wuhan University; Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University
This paper considers the following question: Given the number of classes m, the number of robust accuracy queries k, and the number of test examples in the dataset n, how much can adaptive algorithms robustly overfit the test dataset? We solve this problem by equivalently giving near-matching upper and lower bounds of the robust overfitting bias in multiclass classification problems.
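The abstract leaves the central quantity informal. As a rough sketch (not necessarily the paper's exact definitions), the robust overfitting bias of k adaptive queries against a test set of n examples might be formalized as below, where B(x_i, ε) is an assumed perturbation ball and f_1, ..., f_k are the adaptively chosen query classifiers:

```latex
% One plausible formalization (an assumption, not the paper's stated one):
% S = {(x_i, y_i)}_{i=1}^n is the test set drawn from distribution D,
% and each query j returns the empirical robust accuracy of classifier f_j.
\[
\widehat{\mathrm{acc}}_{\mathrm{rob}}(f) \;=\; \frac{1}{n}\sum_{i=1}^{n}
  \mathbf{1}\!\left[\, f(x') = y_i \;\; \forall\, x' \in B(x_i, \epsilon) \,\right],
\qquad
\mathrm{bias}(k, n, m) \;=\; \mathbb{E}\Big[\max_{1 \le j \le k}
  \big( \widehat{\mathrm{acc}}_{\mathrm{rob}}(f_j) - \mathrm{acc}_{\mathrm{rob}}^{D}(f_j) \big)\Big].
\]
```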
Authors:
Yanbo Chen, Weiwei Liu
School of Computer Science, Wuhan University; National Engineering Research Center for Multimedia Software, Wuhan University; Institute of Artificial Intelligence, Wuhan University; Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University
Transfer-based attacks [1] are a practical class of black-box adversarial attacks in which the attacker crafts adversarial examples on a source model that transfer to the target model. Many empirical works [2-6] have tried to explain the transferability of adversarial examples from different angles. However, these works provide only ad hoc explanations without quantitative analyses; the theory behind transfer-based attacks remains a mystery. This paper studies transfer-based attacks under a unified theoretical framework. We propose an explanatory model, called the manifold attack model, that formalizes popular beliefs and explains the existing empirical results. Our model explains why adversarial examples are transferable even when the source model is inaccurate, as observed by Papernot et al. [7]. Moreover, our model implies that the existence of transferable adversarial examples depends on the "curvature" of the data manifold, which further explains why the success rates of transfer-based attacks are hard to improve. We also discuss our model's expressive power and applicability.
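As a concrete illustration of the attack setting being analyzed (not the paper's manifold attack model itself), a minimal transfer-based attack crafts an FGSM example on a surrogate model and measures how often it fools a separate target; `source_model`, `target_model`, and the `(x, y)` batch below are assumed PyTorch objects:

```python
# Minimal sketch of a transfer-based attack via FGSM on a surrogate model.
import torch
import torch.nn.functional as F

def fgsm_transfer(source_model, target_model, x, y, eps=8 / 255):
    """Craft adversarial examples on the source model, test them on the target."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(source_model(x_adv), y)
    loss.backward()
    # One signed-gradient step (stays in the L-inf eps-ball), then clamp
    # back to the valid image range.
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
    with torch.no_grad():
        fooled = (target_model(x_adv).argmax(dim=1) != y).float().mean()
    return x_adv, fooled.item()  # fraction of examples that fool the target
```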
In this paper, a mathematical model is established for the three-dimensional packing problem. This model is a multi-objective optimization problem which considers two factors: volume utilization ratio and load utiliza...
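For illustration, the two objectives named in this abstract can be computed as simple ratios; the `Box` fields and the weighted-sum scalarization below are hypothetical, not the paper's formulation:

```python
# Hypothetical sketch of the two packing objectives: volume utilization
# and load utilization. Fields and units are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Box:
    volume: float   # m^3
    weight: float   # kg

def utilization(boxes, container_volume, max_load):
    volume_ratio = sum(b.volume for b in boxes) / container_volume
    load_ratio = sum(b.weight for b in boxes) / max_load
    return volume_ratio, load_ratio

# A multi-objective solver trades the two off; one simple scalarization:
def scalarized_objective(boxes, container_volume, max_load, alpha=0.5):
    v, l = utilization(boxes, container_volume, max_load)
    return alpha * v + (1 - alpha) * l
```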
Liveness detection is a part of living-body biometric identification. As face recognition systems become widely deployed, they are also vulnerable to deception and attack by fake faces. Face liveness detection in traditional me...
ISBN (digital): 9798350354690
ISBN (print): 9798350354706
Image fusion combines the complementary traits of source images into a single output, enhancing both human visual observation and machine vision perception. Existing fusion algorithms typically prioritize visual enhancement and often overlook the real-time needs of critical surveillance applications. To address these deployment needs, we present a compact fusion network for combining infrared and visible image representations, named Light-weight Fusion (LightFusion). The network employs incremental semantic integration and scene-recognition accuracy constraints, fusing three different bands of images (IR, RGB, and grayscale). Our approach includes a sparse semantic perception branch that captures critical semantic features, which are then integrated into the fusion network through a semantic injection module, ensuring that high-level vision tasks are adequately addressed. A scene fidelity path ensures that the fused features preserve all details required to reconstruct the original images. The network further takes an extra grayscale input, obtained by converting the RGB image for improved contrast, along with prominent target masks that improve the visual quality of the fusion results. Our extensive analysis shows that the lightweight LightFusion network outperforms existing methods in both visual quality and semantic integrity, even under challenging conditions. The source code will be released at https://***/MI-HussainiLightFusion.
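As a small sketch of the input preparation described above (the grayscale band derived from RGB, stacked alongside the IR band), assuming float images in [0, 1]; LightFusion's actual preprocessing may differ:

```python
# Illustrative preparation of the three input bands the abstract mentions
# (IR, RGB, grayscale); this is an assumption, not LightFusion's code.
import numpy as np

def prepare_bands(ir: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """ir: (H, W) float in [0, 1]; rgb: (H, W, 3) float in [0, 1]."""
    # Standard ITU-R BT.601 luminance weights for the grayscale band.
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    # Stack IR, RGB, and grayscale into one (H, W, 5) multi-band input.
    return np.concatenate([ir[..., None], rgb, gray[..., None]], axis=-1)
```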
ISBN (digital): 9781665451321
ISBN (print): 9781665451338
Deep learning libraries are the cornerstone of deep learning systems, and millions of deep learning applications are built on top of them. Because of long-running execution, many numerical operations, and heavy dependence on resources, deep learning libraries are prone to software aging. Aging in deep learning libraries can threaten the reliability of deep learning systems and make training and deployment more time-consuming and expensive, eroding users' confidence. In this work, we manually screened 138 bug reports containing aging-related bugs from a total of 13,694 bug reports in four popular deep learning libraries (i.e., TensorFlow, MXNet, PaddlePaddle, and MindSpore). We analyzed these 138 reports to answer three questions: What categories of aging-related bugs exist in deep learning libraries? How are the different categories of aging-related bugs distributed? Which deep learning phases are most susceptible to software aging? Finally, we constructed a fine-grained taxonomy of aging-related bugs, comprising four levels and seventeen categories, and derived eight important findings with corresponding practical implications.
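The paper's screening was manual; purely as an illustration of how a candidate list might be narrowed before manual review, a hypothetical keyword pre-filter could look like this (the keyword list and report schema are assumptions):

```python
# Hypothetical keyword pre-filter for aging-related bug reports; the study's
# actual screening of 13,694 reports was done by hand.
AGING_KEYWORDS = (
    "memory leak", "oom", "gradually slow", "performance degrad",
    "resource exhaust", "handle leak", "fragmentation",
)

def candidate_aging_reports(reports):
    """reports: iterable of dicts with 'id' and 'text' keys (assumed schema)."""
    for report in reports:
        text = report["text"].lower()
        if any(kw in text for kw in AGING_KEYWORDS):
            yield report["id"]  # flag for manual inspection
```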
Authors:
Lianghe Shi, Weiwei Liu
School of Computer Science, Wuhan University; National Engineering Research Center for Multimedia Software, Wuhan University; Institute of Artificial Intelligence, Wuhan University; Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University
Gradual Domain Adaptation (GDA), in which the learner is provided with additional intermediate domains, has been theoretically and empirically studied in many contexts. Despite its vital role in security-critical scenarios, the adversarial robustness of the GDA model remains unexplored. In this paper, we adopt the effective gradual self-training method and replace vanilla self-training with adversarial self-training (AST). AST first predicts labels on the unlabeled data and then adversarially trains the model on the pseudo-labeled distribution. Intriguingly, we find that gradual AST improves not only adversarial accuracy but also clean accuracy on the target domain. We reveal that this is because adversarial training (AT) performs better than standard training when the pseudo-labels contain a portion of incorrect labels. Accordingly, we first present the generalization error bounds for gradual AST in a multiclass classification setting. We then use the optimal value of the Subset Sum Problem to bridge the standard error on a real distribution and the adversarial error on a pseudo-labeled distribution. The result indicates that AT may obtain a tighter bound than standard training on data with incorrect pseudo-labels. We further present an example of a conditional Gaussian distribution to provide more insights into why gradual AST can improve the clean accuracy for GDA.
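A minimal sketch of the gradual AST loop as described, assuming `model` is a PyTorch classifier and `adv_perturb` / `train_step` are user-supplied helpers (e.g., a PGD attack and an SGD update); this illustrates the procedure, not the paper's exact training recipe:

```python
# Gradual adversarial self-training (AST) sketch: pseudo-label each
# intermediate domain, then adversarially train on the pseudo-labels.
import torch

def gradual_ast(model, domains, adv_perturb, train_step, epochs=5):
    for unlabeled_x in domains:  # source -> intermediate domains -> target
        with torch.no_grad():
            # Self-training step: predict pseudo-labels on unlabeled data.
            pseudo_y = model(unlabeled_x).argmax(dim=1)
        for _ in range(epochs):
            # Adversarial training on the pseudo-labeled distribution.
            x_adv = adv_perturb(model, unlabeled_x, pseudo_y)
            train_step(model, x_adv, pseudo_y)
    return model
```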
Scale-invariant feature transform (SIFT) is a local point-feature extraction method. It finds feature vectors across different scale spaces that are invariant to scale changes and rotations, and robust to illum...
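For reference, the extractor this abstract describes is available in OpenCV (>= 4.4); the snippet below only demonstrates standard SIFT usage, not the paper's pipeline, and "scene.jpg" is a placeholder path:

```python
# Standard OpenCV SIFT usage: detect scale/rotation-invariant keypoints
# and compute their 128-dimensional descriptors.
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image
sift = cv2.SIFT_create()
# Keypoints carry location, scale, and orientation; descriptors are 128-D.
keypoints, descriptors = sift.detectAndCompute(img, None)
print(f"{len(keypoints)} keypoints, descriptor shape {descriptors.shape}")
```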
Pseudo-labelling is a popular technique in unsupervised domain adaptation for semantic segmentation. However, pseudo-labels are noisy and inevitably suffer from confirmation bias due to the discrepancy between source and tar...
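A common heuristic for coping with the noisy pseudo-labels this abstract mentions is confidence thresholding; the sketch below illustrates the problem being raised, not the paper's proposed remedy (the `tau` and `ignore_index` values are conventional choices, not the paper's):

```python
# Confidence-thresholded pseudo-labels for segmentation: low-confidence
# pixels are marked with ignore_index so the loss skips them.
import torch

def thresholded_pseudo_labels(logits, tau=0.9, ignore_index=255):
    """logits: (B, C, H, W) raw network outputs for unlabeled target images."""
    probs = logits.softmax(dim=1)
    conf, labels = probs.max(dim=1)          # per-pixel confidence and class
    labels[conf < tau] = ignore_index        # mask out low-confidence pixels
    return labels
```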