Training only one deep model for large-scale cross-scene video foreground segmentation is challenging because off-the-shelf deep-learning-based segmentors rely on scene-specific structural information. This result...
ISBN: (print) 9781713845393
Precisely annotating objects with multiple labels is costly and has become a critical bottleneck in real-world multi-label classification tasks. Instead, deciding the relative order of label pairs is obviously less laborious than collecting exact labels. However, the supervised information of pairwise relevance ordering is less informative than exact labels. It is thus an important challenge to effectively learn with such weak supervision. In this paper, we formalize this problem as a novel learning framework, called multi-label learning with pairwise relevance ordering (PRO). We show that the unbiased estimator of classification risk can be derived with a cost-sensitive loss only from PRO examples. Theoretically, we provide the estimation error bound for the proposed estimator and further prove that it is consistent with respect to the commonly used ranking loss. Empirical studies on multiple datasets and metrics validate the effectiveness of the proposed method.
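The weak supervision in PRO is a comparison between two labels rather than an exact label assignment. As a minimal sketch of learning from such orderings — an ordinary logistic pairwise ranking loss, not the paper's unbiased cost-sensitive risk estimator — one could write:

```python
import numpy as np

def pairwise_ranking_loss(scores, orderings):
    """Average logistic loss over pairwise relevance orderings.

    scores    : (L,) model scores, one per label, for a single instance
    orderings : list of (i, j) pairs meaning "label i is more
                relevant than label j" for this instance
    """
    # Penalize each ordered pair by how much label j outscores label i.
    losses = [np.log1p(np.exp(-(scores[i] - scores[j])))
              for i, j in orderings]
    return float(np.mean(losses))
```

With scores `[2.0, 0.0]` and the ordering `(0, 1)`, the loss is small; reversing the ordering makes it large, which is the signal a PRO-style learner exploits.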
Learning universal time series representations applicable to various types of downstream tasks is challenging but valuable in real applications. Recently, researchers have attempted to leverage the success of self-sup...
Learning binary classifiers from positive and unlabeled data (PUL) is vital in many real-world applications, especially when verifying negative examples is difficult. Despite the impressive empirical performance of recent PUL methods, challenges like accumulated errors and increased estimation bias persist due to the absence of negative labels. In this paper, we unveil an intriguing yet long-overlooked observation in PUL: resampling the positive data in each training iteration to ensure a balanced distribution between positive and unlabeled examples results in strong early-stage performance. Furthermore, predictive trends for positive and negative classes display distinctly different patterns. Specifically, the scores (output probability) of unlabeled negative examples consistently decrease, while those of unlabeled positive examples show largely chaotic trends. Instead of focusing on classification within individual time frames, we innovatively adopt a holistic approach, interpreting the scores of each example as a temporal point process (TPP). This reformulates the core problem of PUL as recognizing trends in these scores. We then propose a novel TPP-inspired measure for trend detection and prove its asymptotic unbiasedness in predicting changes. Notably, our method accomplishes PUL without requiring additional parameter tuning or prior assumptions, offering an alternative perspective for tackling this problem. Extensive experiments verify the superiority of our method, particularly in a highly imbalanced real-world setting, where it achieves improvements of up to 11.3% in key metrics. The code is available at https://***/wxr99/HolisticPU.
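The key observation — scores of unlabeled negatives steadily decrease across iterations, while unlabeled positives drift chaotically — can be illustrated with a toy trend statistic. The sketch below uses a Mann-Kendall-style sign count over a score trajectory; it is only an illustration of trend detection, not the TPP-inspired measure the paper actually proposes:

```python
def trend_score(scores):
    """Normalized sign statistic over a score trajectory.

    Returns a value in [-1, 1]: values near -1 suggest a consistently
    decreasing trend (unlabeled negative in the PUL setting), values
    near 0 suggest a chaotic trend (unlabeled positive).
    """
    n = len(scores)
    # Count, over all ordered index pairs, whether the later score
    # is higher (+1) or lower (-1) than the earlier one.
    s = sum((scores[j] > scores[i]) - (scores[j] < scores[i])
            for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)
```

A monotonically falling trajectory like `[0.9, 0.7, 0.5, 0.3]` scores -1, while an oscillating one like `[0.5, 0.8, 0.2, 0.6]` scores near 0.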
In multi-label learning, it is rather expensive to label instances since they are simultaneously associated with multiple labels. Therefore, active learning, which reduces the labeling cost by actively querying the labels of the most valuable data, becomes particularly important for multi-label learning. A good multi-label active learning algorithm usually consists of two crucial elements: a reasonable criterion to evaluate the gain of querying the label for an instance, and an effective classification model, based on whose prediction the criterion can be accurately estimated. In this paper, we first introduce an effective multi-label classification model by combining label ranking with threshold learning, which is incrementally trained to avoid retraining from scratch after every query. Based on this model, we then propose to exploit both uncertainty and diversity in the instance space as well as the label space, and actively query the instance-label pairs which can improve the classification model most. Experiments on 20 datasets demonstrate the superiority of the proposed approach to state-of-the-art methods.
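Querying at the granularity of instance-label pairs means the selection criterion ranks every (instance, label) cell rather than whole instances. A minimal uncertainty-only sketch — the paper's criterion additionally incorporates diversity in both the instance and label spaces — might look like:

```python
import numpy as np

def select_query(scores, labeled_mask, threshold=0.5):
    """Pick the most uncertain unlabeled instance-label pair.

    scores       : (n_instances, n_labels) predicted relevance scores
    labeled_mask : same shape, True where the pair is already labeled

    Uncertainty here is simply closeness to the decision threshold;
    the selected pair is the one the current model is least sure about.
    """
    uncertainty = -np.abs(scores - threshold)  # higher = closer to boundary
    uncertainty = np.where(labeled_mask, -np.inf, uncertainty)  # skip known pairs
    return np.unravel_index(np.argmax(uncertainty), scores.shape)
```

After each query, an incrementally trained model (as in the paper) updates its predictions without retraining from scratch, and the criterion is re-evaluated.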
The domain adversarial neural network (DANN) methods have been successfully proposed and have attracted much attention recently. In DANNs, a discriminator is trained to discriminate the domain labels of features generated by a generator, whereas the generator attempts to confuse the discriminator so that the distributions between domains are aligned. As a result, DANNs actually encourage whole-distribution alignment or transfer between domains, while the inter-class discriminative information across domains is not exploited. In this paper, we present a Discrimination-Aware Domain Adversarial Neural Network (DA2NN) method to introduce the discriminative information, i.e., the discrepancy of inter-class instances across domains, into deep domain adaptation. DA2NN considers both the alignment within the same class and the separation among different classes across domains in knowledge transfer via multiple discriminators. Experimental results show that DA2NN can achieve better classification performance compared with the DANN methods.
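The minimax structure described above — the discriminator minimizes a domain-classification loss while the generator minimizes its task loss minus the domain loss (the effect of gradient reversal) — can be sketched for the binary case as follows. This is a generic DANN objective, not DA2NN's discrimination-aware variant:

```python
import numpy as np

def dann_losses(class_logit, y, dom_logit, d, lam=1.0):
    """Per-example losses in the DANN minimax game (binary case).

    class_logit, y : task logit and task label (source examples)
    dom_logit, d   : domain logit and domain label (0 = source, 1 = target)
    lam            : trade-off weight on the adversarial term

    The discriminator descends `disc`; the generator descends `gen`,
    i.e. task loss minus lam * domain loss, which is what a gradient
    reversal layer implements in one backward pass.
    """
    def bce(logit, target):
        p = 1.0 / (1.0 + np.exp(-logit))
        return -(target * np.log(p) + (1 - target) * np.log(1 - p))

    task = bce(class_logit, y)
    disc = bce(dom_logit, d)
    gen = task - lam * disc
    return gen, disc
```

DA2NN's contribution, per the abstract, is to augment this game with class-aware terms so that alignment happens within classes and separation is kept across classes.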
Learning from imbalanced datasets has recently received increasing research attention. Despite the remarkable results of AdaBoost in balanced situations, the class-imbalance problem remains unsolved. To address this, this paper proposes a novel method that improves AdaBoost with a new error factor and a weighted voting parameter for the minority class and weak classifiers. Our method weakens the dominant role of the majority class in iterative training by improving the classification ability on the minority class. The superiority of the proposed algorithm is demonstrated both theoretically and empirically: on several real-world imbalanced datasets, our method outperforms previous algorithms, especially on the F1 metric.
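The general mechanism — boosting the weights of misclassified minority-class examples harder than majority-class ones so the next weak learner focuses on them — can be sketched as a single cost-sensitive AdaBoost-style reweighting step. The `cost` multiplier below is illustrative and is not the paper's exact error factor or voting parameter:

```python
import numpy as np

def reweight(w, y, pred, minority=1, cost=2.0):
    """One cost-sensitive AdaBoost-style weight update.

    w, y, pred : sample weights, true labels, weak-learner predictions
    minority   : label of the minority class
    cost       : extra exponential boost for misclassified minority examples
    """
    miss = (pred != y)
    err = np.clip(np.sum(w * miss) / np.sum(w), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)        # weak-learner vote weight
    boost = np.where(y == minority, cost, 1.0)   # minority misses boosted harder
    w = w * np.exp(alpha * boost * miss)
    return w / w.sum(), alpha
```

With `cost=1.0` this reduces to the standard (discrete) AdaBoost update; larger values tilt the training distribution toward the minority class at each round.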
In online advertising, auto-bidding has become an essential tool for advertisers to optimize their preferred ad performance metrics by simply expressing high-level campaign objectives and constraints. Previous works d...
Recent major milestones have successfully decoded non-invasive brain signals (e.g. functional Magnetic Resonance Imaging (fMRI) and electroencephalogram (EEG)) into natural language. Despite the progress in model desi...
Most of the policy evaluation algorithms are based on the theories of Bellman Expectation and Optimality Equation, which derive two popular approaches - Policy Iteration (PI) and Value Iteration (VI). However, multi-s...