Although attention weights have been commonly used as a means to provide explanations for deep learning models, the approach has been widely criticized due to its lack of faithfulness. In this work, we present a simpl...
Inferring causal protein signalling networks from human immune system cell data is a promising approach to unravelling the underlying tissue signalling biology and dysfunction in diseased cells, and it has attracted considerable attention within the bioinformatics community. Recently, Bayesian network (BN) techniques have gained significant popularity for inferring causal protein signalling networks from multiparameter single-cell data. However, current BN methods may exhibit high computational complexity and ignore interactions among protein signalling molecules from different single cells. A novel BN method for learning causal protein signalling networks based on a parallel discrete artificial bee colony (PDABC) is presented. PDABC is a score-based BN method that utilises the parallel artificial bee colony to search for the globally optimal causal protein signalling network with the highest discrete K2 score. Experimental results on several simulated datasets, as well as a previously published multi-parameter fluorescence-activated cell sorter dataset, indicate that PDABC surpasses existing state-of-the-art methods in terms of performance and computational efficiency.
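To make the search objective above concrete, the following is a minimal sketch of how a discrete K2 log-score could be computed for a candidate network structure over discretised single-cell measurements. It shows only the scoring step, not the parallel artificial bee colony search itself, and the array layout, function name, and parent representation are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of a discrete K2 log-score for a candidate structure.
# `data` is assumed to be a (samples x variables) integer array of discretised
# protein levels; `parents[i]` lists the parent indices of variable i.
import numpy as np
from math import lgamma
from itertools import product

def k2_log_score(data, parents):
    n_samples, n_vars = data.shape
    cardinality = [int(data[:, i].max()) + 1 for i in range(n_vars)]
    total = 0.0
    for i in range(n_vars):
        r_i = cardinality[i]
        pa = parents[i]
        # Enumerate every joint configuration of the parents of variable i.
        for config in product(*[range(cardinality[p]) for p in pa]):
            mask = np.ones(n_samples, dtype=bool)
            for p, v in zip(pa, config):
                mask &= data[:, p] == v
            counts = np.bincount(data[mask, i], minlength=r_i)
            n_ij = counts.sum()
            # log of (r_i - 1)! / (N_ij + r_i - 1)! * prod_k N_ijk!
            total += lgamma(r_i) - lgamma(n_ij + r_i)
            total += sum(lgamma(c + 1) for c in counts)
    return total
```

A structure-search method such as the one summarised above would evaluate a score of this kind for candidate parent sets and keep the highest-scoring acyclic structure; working in log space avoids the factorial overflow of the raw K2 metric.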
We present NNVISR, an open-source filter plugin for the VapourSynth video processing framework, which facilitates the application of neural networks to various kinds of video enhancement tasks, including denoising, su...
Blockchain systems can become overwhelmed by a large number of transactions, leading to increased latency. As a consequence, latency-sensitive users must bid against each other and pay higher fees to ensure that their...
This paper concerns machine learning approaches for software engineering, in particular for discovering hidden knowledge from data by learning latent vector representations of source code, and incorporating it to evol...
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and testing data by adapting a given model w.r.t. any testing sample. This task is particularly important when the test environment changes frequently. Although some recent attempts have been made to handle this task, we still face two key challenges: 1) prior methods have to perform backpropagation for each test sample, resulting in unbearable optimization costs for many applications; 2) while existing TTA solutions can significantly improve the test performance on out-of-distribution data, they often suffer from severe performance degradation on in-distribution data after TTA (known as catastrophic forgetting). To this end, we have proposed an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method, which develops an active sample selection criterion to identify reliable and non-redundant samples for test-time entropy minimization. To alleviate forgetting, EATA introduces a Fisher regularizer estimated from test samples to constrain important model parameters from drastic changes. However, in EATA, the adopted entropy loss consistently assigns higher confidence to predictions even when the samples are inherently uncertain, leading to overconfident predictions that underestimate the data uncertainty. To tackle this, we further propose EATA with Calibration (EATA-C) to separately exploit the reducible model uncertainty and the inherent data uncertainty for calibrated TTA. Specifically, we compare the divergence between predictions from the full network and its sub-networks to measure the reducible model uncertainty, on which we propose a test-time uncertainty reduction strategy with a divergence minimization loss to encourage consistent predictions instead of overconfident ones. To further re-calibrate predicting confidence on different samples, we utilize the disagreement among predicted labels as an indicator of the data uncertainty. Based on this, we devise a min-max entropy
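As a point of reference for the sample-selection idea described above, the following is a minimal sketch of entropy-based filtering of test samples for entropy minimization. It covers only the reliability filter; the redundancy check, Fisher regularizer, and the EATA-C calibration terms are omitted, and the function name, threshold parameter, and tensor shapes are assumptions rather than the authors' implementation.

```python
# Hedged sketch: keep only low-entropy ("reliable") test samples and minimize
# their prediction entropy, in the spirit of the selection criterion above.
# Names (select_reliable_samples, entropy_margin) are illustrative assumptions.
import torch
import torch.nn.functional as F

def select_reliable_samples(logits: torch.Tensor, entropy_margin: float):
    probs = F.softmax(logits, dim=1)
    # Per-sample Shannon entropy of the predictive distribution.
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)
    keep = entropy < entropy_margin  # discard high-entropy (unreliable) samples
    # Entropy-minimization loss computed only on the selected samples;
    # return a zero loss still attached to the graph if nothing is selected.
    loss = entropy[keep].mean() if keep.any() else logits.sum() * 0.0
    return loss, keep
```

One natural choice is to set the margin proportional to ln C for a C-class problem, so that only samples the model is already fairly confident about contribute to the adaptation update.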
The susceptibility of current Deep Neural Networks (DNNs) to adversarial examples has been a significant concern in deep learning. In particular, sparse adversarial examples represent a specific category of ad...
Time series data are pervasive in varied real-world applications, and accurately identifying anomalies in time series is of great importance. Many current methods are insufficient to model long-term dependence, wherea...
Time series classification has gained significant attention in the data mining field in recent years. Despite the proposal of numerous methods over the past decades, most of these approaches focus on feature extractio...
In recent years, China has become the world's largest PV producer, with the most complete PV industry chain, the largest PV market, and the largest PV power generation capacity. Under the dual-carbon target, the future developm...