Deep learning models, particularly pre-trained language models (PLMs), have become increasingly important for a variety of applications that require text/language processing. However, these models are resource-intensi...
Many active learning methods are based on the assumption that a learner simply asks for the true labels of some training data from annotators. Unfortunately, it is expensive to exactly annotate instances in real-world...
Data are at the heart of intelligent rail systems in the high-speed transportation sector (Zhou et al., 2020; Ho et al., 2021; Hu et al., 2021; Chen et al., 2022). The core of modern intelligent railroad systems typically includes rail transportation and equipment monitoring models learned from large datasets, which are often optimized for specific data and workloads (Zhu et al., 2019; Tan et al., 2020). While these intelligent railroad systems have been widely adopted and successful, their reliability and proper function change as the data they operate on changes. If the data used (on which the system operates) deviates from the fundamental constraints of the initial data (on which the system was trained), then the system performance degrades and the results inferred by the system model become unreliable, so the system model must be retrained and redeployed to restore reliable inference results (Sharma and Chandel, 2013). A mechanism for assessing the trustworthiness of intelligent rail system inferences is therefore of paramount importance, especially for rail systems performing safety-critical or high-impact operations.
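As an illustration of the kind of trustworthiness check this passage motivates, the following is a minimal sketch, not the paper's actual mechanism, that flags inference-time feature drift with a two-sample Kolmogorov-Smirnov test; the function names, the per-feature test, and the threshold are assumptions made here for illustration.

```python
# Minimal sketch of a drift check that flags when inference data deviates
# from the training distribution (assumed approach, not the paper's method).
import numpy as np
from scipy.stats import ks_2samp

def drift_alarm(train_features, live_features, alpha=0.01):
    """Return True if any feature column drifts from its training distribution.

    train_features, live_features: 2-D arrays of shape (n_samples, n_features).
    alpha: per-feature significance level (assumed value).
    """
    alarms = []
    for j in range(train_features.shape[1]):
        # Two-sample KS test compares the training and live marginal distributions.
        stat, p_value = ks_2samp(train_features[:, j], live_features[:, j])
        alarms.append(p_value < alpha)
    return any(alarms)

# Usage: if drift_alarm(X_train, X_live) is True, inferences would be treated
# as unreliable and the monitoring model scheduled for retraining.
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(5000, 4))   # e.g., sensor readings at training time
X_live = rng.normal(0.5, 1.0, size=(1000, 4))    # shifted readings at inference time
print(drift_alarm(X_train, X_live))              # expected: True (distribution has shifted)
```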
Distributed training of graph neural networks (GNNs) has become a crucial technique for processing large graphs. Prevalent GNN frameworks are model-centric, necessitating the transfer of massive graph vertex features ...
Many active learning methods assume that a learner can simply ask for the full annotations of some training data from annotators. These methods mainly try to cut annotation costs by minimizing the number of annotation actions. However, annotating instances exactly in many real-world classification tasks is still expensive. To reduce the cost of a single annotation action, we tackle a novel active learning setting, named active learning with complementary labels (ALCL). ALCL learners ask only yes/no questions in some queries. After receiving answers from annotators, ALCL learners obtain a few supervised instances and more training instances with complementary labels, which specify only one of the classes to which the pattern does not belong. There are two challenging issues in ALCL: one is how to sample instances to be queried, and the other is how to learn from these complementary labels and ordinary accurate labels. For the first issue, we propose an uncertainty-based sampling strategy under this novel setup.
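The following is a minimal sketch of how an uncertainty-based query of the ALCL form could look: the learner picks its least confident unlabeled instance, asks a yes/no question about one class, and records either an ordinary label or a complementary label. The entropy score, the class-selection rule, and the function names are illustrative assumptions, not necessarily the strategy proposed in the paper.

```python
import numpy as np

def entropy(probs):
    """Predictive entropy as an uncertainty score (assumed choice of measure)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def alcl_query(pred_probs, annotator_answers):
    """One ALCL-style query round (illustrative sketch).

    pred_probs: (n_unlabeled, n_classes) predicted class probabilities.
    annotator_answers: callable (index, cls) -> bool, the yes/no oracle.
    Returns (index, class, is_complementary).
    """
    # Sample the most uncertain unlabeled instance.
    idx = int(np.argmax(entropy(pred_probs)))
    # Ask a yes/no question about the model's current best guess.
    guessed_class = int(np.argmax(pred_probs[idx]))
    if annotator_answers(idx, guessed_class):
        # "Yes": we obtain an ordinary, exactly labeled instance.
        return idx, guessed_class, False
    # "No": we obtain a complementary label, i.e. a class the instance does NOT belong to.
    return idx, guessed_class, True
```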
The goal of multi-label learning with missing labels (MLML) is to assign each testing instance multiple labels, given training instances that have only a partial set of labels. The most challenging issue is to complete the ...
Graph convolutional networks (GCNs) are popular for a variety of graph learning tasks. ReRAM-based processing-in-memory (PIM) accelerators are promising to expedite GCN training owing to their in-situ computing capabi...
Federated learning has exhibited vulnerabilities to Byzantine attacks, where the Byzantine attackers can send arbitrary gradients to a central server to destroy the convergence and performance of the global model. A w...
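To make the attack surface concrete, here is a minimal sketch contrasting plain gradient averaging with coordinate-wise median aggregation, a standard Byzantine-robust rule; this is a generic illustration of the threat model, not the defense proposed in the cited work.

```python
import numpy as np

def aggregate_mean(client_grads):
    """Plain federated averaging of client gradients (vulnerable to Byzantine clients)."""
    return np.mean(client_grads, axis=0)

def aggregate_median(client_grads):
    """Coordinate-wise median, a standard Byzantine-robust aggregation rule."""
    return np.median(client_grads, axis=0)

# Nine honest clients report gradients near the true value 1.0;
# one Byzantine client reports an arbitrary, very large gradient.
grads = np.ones((10, 3))
grads[0] = 1e6  # Byzantine client

print(aggregate_mean(grads))    # pulled far away from 1.0 by the attacker
print(aggregate_median(grads))  # stays close to 1.0 despite the attack
```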
Unsupervised domain adaptation (UDA) is a pivotal paradigm in machine learning for extending an in-domain model to distinct target domains where the data distributions differ. Most prior works focus on capturing inter-domain transferability but largely overlook rich intra-domain structures, which empirically results in even worse discriminability. In this work, we introduce a novel graph SPectral Alignment (SPA) framework to tackle this tradeoff. The core of our method is briefly condensed as follows: (i) by casting the DA problem into graph primitives, SPA composes a coarse graph alignment mechanism with a novel spectral regularizer that aligns the domain graphs in eigenspaces; (ii) we further develop a fine-grained message propagation module, built upon a novel neighbor-aware self-training mechanism, to enhance discriminability in the target domain. On standardized benchmarks, extensive experiments demonstrate that SPA surpasses existing cutting-edge DA methods. Coupled with dense model analysis, we conclude that our approach possesses superior efficacy, robustness, discriminability, and transferability. Code and data are available at: https://***/CrownX/SPA.
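As a rough illustration of the spectral-alignment idea (aligning domain graphs in eigenspaces), the sketch below builds k-NN graphs over source and target features and penalizes the gap between the smallest eigenvalues of their normalized Laplacians. The graph construction, the values of k and num_eigs, and the loss form are assumptions made here; this is not the authors' exact regularizer.

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_graph(features, k=10):
    """Symmetric k-NN adjacency matrix over a batch of features (assumed construction)."""
    dists = cdist(features, features)
    n = features.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        neighbors = np.argsort(dists[i])[1:k + 1]  # skip self
        adj[i, neighbors] = 1.0
    return np.maximum(adj, adj.T)

def laplacian_spectrum(adj, num_eigs=16):
    """Smallest eigenvalues of the symmetric normalized graph Laplacian."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    eigvals = np.linalg.eigvalsh(lap)  # ascending order
    return eigvals[:num_eigs]

def spectral_alignment_loss(src_feats, tgt_feats):
    """Penalize the gap between source and target domain-graph spectra (illustrative)."""
    src_spec = laplacian_spectrum(knn_graph(src_feats))
    tgt_spec = laplacian_spectrum(knn_graph(tgt_feats))
    return float(np.mean((src_spec - tgt_spec) ** 2))
```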
Based on the wide application of cloud computing and wireless sensor networks in various fields, the Sensor-Cloud System (SCS) plays an indispensable role between the physical world and the network world. However, due to the close connection and interdependence between the physical resource network and the computing resource network, there are security problems such as cascading failures between systems in the SCS. In this paper, we propose a model with two interdependent networks to represent a sensor-cloud system. Based on percolation theory, we carry out a formulaic theoretical analysis of the whole process of cascading failures. When the system's subnetwork reaches a steady state in which there is no further collapse, we can obtain the largest remaining connected subgroup components and the percolation threshold. This result is the critical maximum that the coupled SCS can withstand. To verify the correctness of the theoretical results, we further carried out simulation experiments. The results show that the percolation threshold when a scale-free network is attacked first is always lower than that when the ER network is attacked first. Moreover, when the scale-free network is attacked first, increasing the power-law exponent λ is a more intuitive and more effective way to improve the network's reliability.
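For intuition about the cascading-failure analysis, here is a minimal simulation sketch in the spirit of interdependent-network percolation: nodes in network A depend one-to-one on same-index nodes in network B, an initial fraction of A is attacked, and failures propagate until a steady state. The network sizes, degree parameters, and one-to-one coupling are assumptions made for illustration, not the paper's exact model.

```python
import networkx as nx
import numpy as np

def giant_component(graph):
    """Return the node set of the largest connected component (empty set if no nodes)."""
    if graph.number_of_nodes() == 0:
        return set()
    return max(nx.connected_components(graph), key=len)

def cascade(net_a, net_b, attack_fraction, seed=0):
    """Simulate a cascading failure on two one-to-one interdependent networks.

    Node i in net_a depends on node i in net_b and vice versa (assumed coupling).
    Returns the fraction of net_a nodes surviving in its giant component.
    """
    rng = np.random.default_rng(seed)
    a, b = net_a.copy(), net_b.copy()
    n = a.number_of_nodes()
    # Initial attack: remove a random fraction of nodes from network A.
    attacked = rng.choice(n, size=int(attack_fraction * n), replace=False)
    a.remove_nodes_from(attacked.tolist())
    while True:
        # A node survives only if it sits in its own network's giant component
        # and its interdependent partner is still functional.
        ga, gb = giant_component(a), giant_component(b)
        dead_b = [v for v in b.nodes if v not in ga or v not in gb]
        dead_a = [v for v in a.nodes if v not in gb or v not in ga]
        if not dead_a and not dead_b:
            break  # steady state: no further collapse
        a.remove_nodes_from(dead_a)
        b.remove_nodes_from(dead_b)
    return len(giant_component(a)) / n

# Example: a scale-free network coupled with an ER network of the same size.
n = 2000
scale_free = nx.barabasi_albert_graph(n, 3, seed=1)
er = nx.erdos_renyi_graph(n, 6.0 / n, seed=1)
print(cascade(scale_free, er, attack_fraction=0.4))
```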