Olfaction is one of the oldest chemosensory systems in chordates, playing crucial roles in their foraging, predator evasion, social communication, mating, and parental care (Guo et al., 2023; Li and Liberles, 2015; Liberles, 2014). The initial step of olfaction is the binding and activation of olfactory receptors (ORs) by odorants in a combinatorial way (Malnic et al., 1999).
Existing methods estimate treatment effects from observational data and assume that covariates are all confounders. However, observed covariates may not directly represent confounding variables that influence both tre...
This paper presents our error-tolerant system for coreference resolution in the CoNLL-2011 (Pradhan et al., 2011) shared task (closed track). Different from most previously reported work, we detect mention candidates based ...
ISBN (print): 9798331314385
The Area Under the ROC Curve (AUC) is a well-known metric for evaluating instance-level long-tail learning problems. In the past two decades, many AUC optimization methods have been proposed to improve model performance under long-tail distributions. In this paper, we explore AUC optimization methods in the context of pixel-level long-tail semantic segmentation, a much more complicated scenario. This task introduces two major challenges for AUC optimization techniques. On one hand, AUC optimization in a pixel-level task involves complex coupling across loss terms, with structured inner-image and pairwise inter-image dependencies, complicating theoretical analysis. On the other hand, we find that mini-batch estimation of AUC loss in this case requires a larger batch size, resulting in an unaffordable space complexity. To address these issues, we develop a pixel-level AUC loss function and conduct a dependency-graph-based theoretical analysis of the algorithm's generalization ability. Additionally, we design a Tail-Classes Memory Bank (T-Memory Bank) to manage the significant memory demand. Finally, comprehensive experiments across various benchmarks confirm the effectiveness of our proposed AUCSeg method. The code is available at https://***/boyuh/AUCSeg.
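The core idea of a pixel-level AUC loss can be illustrated with a minimal sketch: treat every (tail-class pixel, non-tail pixel) pair as a ranking constraint and penalize margin violations with a squared hinge. This is a simplified, hypothetical stand-in for the paper's full loss (the function name `pixel_auc_loss` and the single-tail-class setup are assumptions, not the paper's actual implementation, which also handles inter-image pairs and the T-Memory Bank):

```python
import numpy as np

def pixel_auc_loss(scores, labels, tail_class=1):
    """Squared-hinge pairwise AUC surrogate over the pixels of one image.

    scores: (H, W) predicted scores for the tail class.
    labels: (H, W) integer ground-truth class map.
    Every (positive, negative) pixel pair should be ranked with margin 1;
    violations are penalized quadratically.
    """
    pos = scores[labels == tail_class].ravel()  # tail-class pixel scores
    neg = scores[labels != tail_class].ravel()  # all other pixel scores
    if pos.size == 0 or neg.size == 0:
        return 0.0
    # margin violations for all pos/neg pixel pairs via broadcasting
    diff = 1.0 - (pos[:, None] - neg[None, :])
    return float(np.mean(np.maximum(diff, 0.0) ** 2))
```

Note the O(|pos| x |neg|) pairwise term: this quadratic blow-up per batch is exactly the memory pressure that motivates a tail-class memory bank in the paper.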
Diffusion models are initially designed for image generation. Recent research shows that the internal signals within their backbones, named activations, can also serve as dense features for various discriminative tasks such as semantic segmentation. Given numerous activations, selecting a small yet effective subset poses a fundamental problem. To this end, the early study of this field performs a large-scale quantitative comparison of the discriminative ability of the activations. However, we find that many potential activations have not been evaluated, such as the queries and keys used to compute attention scores. Moreover, recent advancements in diffusion architectures bring many new activations, such as those within embedded ViT modules. Both combined, activation selection remains unresolved but overlooked. To tackle this issue, this paper takes a further step with a much broader range of activations evaluated. Considering the significant increase in activations, a full-scale quantitative comparison is no longer operational. Instead, we seek to understand the properties of these activations, such that the activations that are clearly inferior can be filtered out in advance via simple qualitative evaluation. After careful analysis, we discover three properties universal among diffusion models, enabling this study to go beyond specific models. On top of this, we present effective feature selection solutions for several popular diffusion models. Finally, the experiments across multiple discriminative tasks validate the superiority of our method over the SOTA competitors. Our code is available at https://***/Darkbblue/generic-diffusion-feature.
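One way to make the activation-selection idea concrete is to score each candidate activation by how well it separates classes, then keep only the top-ranked ones. The sketch below uses a Fisher-style between/within variance ratio as the discriminability proxy; this proxy and the function names (`discriminability`, `select_activations`) are illustrative assumptions, not the paper's actual criteria:

```python
import numpy as np

def discriminability(features, labels):
    """Between-class / within-class variance ratio (Fisher-style proxy).

    features: (N, D) pooled activations from one candidate layer.
    labels:   (N,) class ids.
    Higher means this activation separates the classes better.
    """
    classes = np.unique(labels)
    overall = features.mean(axis=0)
    between, within = 0.0, 0.0
    for c in classes:
        fc = features[labels == c]
        mu = fc.mean(axis=0)
        between += fc.shape[0] * np.sum((mu - overall) ** 2)
        within += np.sum((fc - mu) ** 2)
    return between / (within + 1e-12)

def select_activations(candidates, labels, k=2):
    """Rank named candidate activations and keep the top-k.

    candidates: dict mapping activation name -> (N, D) feature array.
    """
    ranked = sorted(candidates,
                    key=lambda name: discriminability(candidates[name], labels),
                    reverse=True)
    return ranked[:k]
```

In practice the point of the paper is to avoid scoring every candidate exhaustively; a cheap proxy like this would only be applied after clearly inferior activations have been filtered out qualitatively.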
Many active learning methods assume that a learner can simply ask for the full annotations of some training data from *** methods mainly try to cut the annotation costs by minimizing the number of annotation ***, annotating instances exactly in many real-world classification tasks is still *** reduce the cost of a single annotation action, we try to tackle a novel active learning setting, named active learning with complementary labels (ALCL). ALCL learners ask only yes/no questions in some *** receiving answers from annotators, ALCL learners obtain a few supervised instances and more training instances with complementary labels, which specify only one of the classes to which the pattern does not *** are two challenging issues in ALCL: one is how to sample instances to be queried, and the other is how to learn from these complementary labels and ordinary accurate *** the first issue, we propose an uncertainty-based sampling strategy under this novel setup.
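The uncertainty-based sampling step can be sketched as follows: pick the pool instance with the highest predictive entropy, then ask the annotator a yes/no question about the model's current top prediction for it. A "no" answer yields a complementary label. This is a minimal illustrative sketch, assuming entropy as the uncertainty measure and top-prediction querying; the names `entropy` and `query_next` are hypothetical:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of one predicted class distribution."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def query_next(probs):
    """Choose the next yes/no question to ask the annotator.

    probs: (N, C) predicted class probabilities for the unlabeled pool.
    Returns (index, class_to_ask): the learner asks "does instance i
    belong to class c?" for its most uncertain instance, where c is the
    model's current top prediction. A 'yes' gives an ordinary label;
    a 'no' gives a complementary label.
    """
    uncertainties = np.array([entropy(p) for p in probs])
    i = int(np.argmax(uncertainties))
    return i, int(np.argmax(probs[i]))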
Diffusion models are powerful generative models, and this capability can also be applied to discrimination. The inner activations of a pre-trained diffusion model can serve as features for discriminative tasks, namely, diffusion feature. We discover that diffusion feature has been hindered by a hidden yet universal phenomenon that we call content shift. To be specific, there are content differences between features and the input image, such as the exact shape of a certain object. We locate the cause of content shift as one inherent characteristic of diffusion models, which suggests the broad existence of this phenomenon in diffusion feature. Further empirical study also indicates that its negative impact is not negligible even when content shift is not visually perceivable. Hence, we propose to suppress content shift to enhance the overall quality of diffusion features. Specifically, content shift is related to the information drift during the process of recovering an image from the noisy input, pointing out the possibility of turning off-the-shelf generation techniques into tools for content shift suppression. We further propose a practical guideline named GATE to efficiently evaluate the potential benefit of a technique and provide an implementation of our methodology. Despite the simplicity, the proposed approach has achieved superior results on various tasks and datasets, validating its potential as a generic booster for diffusion features. Our code is available at https://***/Darkbblue/diffusion-content-shift.
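The "information drift" being suppressed can be made concrete with the standard DDPM relation x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps: given the model's noise prediction, one can recover an implied clean image and compare it to the true input. The sketch below measures that discrepancy as a toy proxy for content shift; the function names and the mean-absolute-difference metric are assumptions for illustration, and GATE itself is not reproduced here:

```python
import numpy as np

def predict_x0(x_t, eps_hat, alpha_bar_t):
    """One-step clean-image estimate under the DDPM forward process
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    return (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_bar_t)

def content_drift(x0, x_t, eps_hat, alpha_bar_t):
    """Mean absolute difference between the input image and the model's
    implied clean image: a toy proxy for content shift in features."""
    return float(np.mean(np.abs(predict_x0(x_t, eps_hat, alpha_bar_t) - x0)))
```

If the noise prediction were exact, the drift would be zero; imperfect predictions shift the implied image away from the input, which is the kind of discrepancy that also shows up in the extracted features.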