Network Function Virtualization (NFV) is a rapidly developing network architecture concept, in which virtualization technologies transform network functions in hardware devices to software middleboxes. For reasons suc...
With the rapid development of convolutional neural networks, many CNN-based methods have emerged and made promising progress in the field of crowd counting. However, dealing with extremely scale variation remains a ch...
Collecting labels from crowds is a feasible solution for large knowledge graph creation. However, errors in crowdsourced labels damage predictive model training. Although various methods have been developed to correct noisy labels, these traditional methods, on the one hand, do not take full advantage of the information in crowdsourced labels and, on the other hand, do not perform well under complicated annotation conditions, especially when some crowd workers make highly correlated mistakes. To address this issue, this paper proposes a novel learning-from-crowds method that simultaneously corrects label noise and learns predictive models. The proposed method trains two data classifiers and a crowd aggregator. Each data classifier divides the crowdsourced labeled data into a clean set and a noisy set and predicts the labels of instances with a large loss in the noisy set. These instances are removed from the noisy set and put into the clean set. Then, the loss of the classifier is updated using the new clean and noisy sets. The data classifier with the smaller loss is selected and trained with the crowd aggregator by maximizing their mutual information. Simultaneously, the smaller loss is back-propagated to the other data classifier to train it. Experimental results on both synthetic and real-world datasets consistently show that the proposed method significantly outperforms state-of-the-art methods in noise correction and predictive model learning.
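The small-loss selection step the abstract describes can be sketched as follows. This is a minimal illustration of splitting samples into clean and noisy sets by per-sample loss (a co-teaching-style heuristic), not the paper's actual implementation; the function name and the `clean_fraction` hyperparameter are hypothetical.

```python
import numpy as np

def split_by_small_loss(losses, clean_fraction=0.6):
    """Treat the samples with the smallest losses as 'clean' and the
    rest as 'noisy', as in small-loss sample selection. Returns two
    arrays of sample indices."""
    order = np.argsort(losses)            # indices sorted by ascending loss
    k = int(len(losses) * clean_fraction) # how many samples to keep as clean
    return order[:k], order[k:]

losses = np.array([0.1, 2.3, 0.05, 1.7, 0.2])
clean, noisy = split_by_small_loss(losses)
# the three smallest-loss samples (indices 2, 0, 4) form the clean set
```

In the described method this split would be recomputed each round, with large-loss noisy instances relabeled by the classifier and moved into the clean set before the losses are updated.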
Mobile edge computing (MEC) has been proposed as a promising paradigm to provide mobile devices with both satisfactory computing capacity and task latency. One key issue in MEC is computation offloading (CompOff), which has attracted considerable research interest. Most existing CompOff approaches are based on iterative programming (IterProg), which calculates a CompOff action from system dynamics each time mobile tasks arrive. Because IterProg depends heavily on reliable system dynamics and imposes an online computational burden, recent years have seen a popular trend toward CompOff approaches based on deep reinforcement learning (DRL), which can generate real-time, model-free CompOff actions. However, due to the intrinsically poor generalization of DRL, DRL-based policies are hard to apply directly in new MEC environments, and long fine-tuning is often required. To address this challenge, this paper proposes a fast transferable CompOff framework (named TransOff) based on the idea of embedded reinforcement learning. Specifically, TransOff is composed of multiple primitive CompOff policies (pCOPs) and a multiplicative composition function (MCF). The pCOPs and MCF are pre-trained in a diverse variety of MEC environments. When encountering a new MEC environment, the pCOPs are kept fixed to prevent catastrophic forgetting of pre-trained CompOff skills, while only the MCF is fine-tuned to produce new compositions of pCOPs, achieving fast transfer. We conduct extensive experiments in both numerical simulation and on a real testbed, demonstrating the fast transfer ability of TransOff compared with state-of-the-art DRL-based and meta-learning-based CompOff approaches.
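The multiplicative composition of fixed primitive policies can be illustrated with Gaussian action policies, where a weighted product of Gaussians is again a Gaussian and only the composition weights would be fine-tuned in a new environment. This is a generic sketch of multiplicative policy composition, not TransOff's actual MCF; all names and parameters are illustrative.

```python
import numpy as np

def compose_gaussian_policies(means, stds, weights):
    """Multiplicatively compose Gaussian primitive policies N(m_i, s_i^2)
    with weights w_i. The (renormalized) product is Gaussian with
    precision sum(w_i / s_i^2) and a precision-weighted mean."""
    precisions = weights / stds**2
    combined_precision = precisions.sum()
    combined_mean = (precisions * means).sum() / combined_precision
    return combined_mean, np.sqrt(1.0 / combined_precision)

m, s = compose_gaussian_policies(
    means=np.array([0.0, 2.0]),   # two fixed primitive policies
    stds=np.array([1.0, 1.0]),
    weights=np.array([1.0, 1.0]), # only these would be fine-tuned
)
# equal weights and stds -> composed mean halfway between the primitives
```

Shifting the weights moves the composed action toward the more relevant primitive, which is the sense in which tuning only the composition adapts behavior without touching the primitives.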
Authors: Xin Zou, Weiwei Liu
School of Computer Science, Wuhan University; National Engineering Research Center for Multimedia Software, Wuhan University; Institute of Artificial Intelligence, Wuhan University; Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University
Out-of-distribution (OOD) generalization has attracted increasing research attention in recent years, due to its promising experimental results in real-world applications. Interestingly, we find that existing OOD generalization methods are vulnerable to adversarial attacks. This motivates us to study OOD adversarial robustness. We first present theoretical analyses of OOD adversarial robustness in two different complementary settings. Motivated by the theoretical results, we design two algorithms to improve the OOD adversarial robustness. Finally, we conduct experiments to validate the effectiveness of our proposed algorithms. Our code is available at https://***/ZouXinn/OOD-Adv.
DNA is not only the genetic material of life, but also a favorable material for a new computing model. Various research works based on DNA computing have been carried out in recent years. DNA sequence design is the fo...
Identity tracing is a technology that uses the selection and collection of identity attributes of the object under test to discover its true identity, and it is one of the most important foundational issues in the field of social security prevention. However, traditional identity recognition technologies based on a single attribute have difficulty achieving the required recognition accuracy, while deep learning-based models often lack interpretability. Multivariate attribute collaborative identification is a possible key way to overcome these recognition errors and low data quality problems. In this paper, we propose the "Trustworthy Identity Tracing" (TIT) task and a multi-attribute synergistic identification based TIT framework. We first establish a novel identity model based on identity entropy theory. The individual conditional identity entropy and the core identification set are defined to reveal the intrinsic mechanism of multivariate attribute collaborative identification. Based on the proposed identity model, we propose a trustworthy identity tracing framework (TITF) with multi-attribute synergistic identification to determine the identity of unknown objects, which can optimize the core identification set and provide an interpretable identity tracing process. In essence, identity tracing is revealed to be the process by which the identity entropy value converges to zero. To cope with the lack of test data, we construct a dataset of 1,000 objects to simulate real-world scenarios, in which 20 identity attributes are labeled to trace unknown object identities. Experimental results on this dataset show that the proposed TITF algorithm achieves satisfactory identification performance. Furthermore, the proposed TIT task explores the interpretable quantitative mathematical relationship between attributes and identity, which can help expand identity representation from a single-attribute feature domain to a multi-attribute collaborative one.
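The "identity entropy converges to zero" view can be made concrete with a toy example: under a uniform prior over the remaining candidates, each observed attribute filters the candidate set, and the entropy log2(|candidates|) shrinks until one identity remains. The database, attribute names, and functions below are hypothetical illustrations, not the paper's identity model.

```python
import math

# toy candidate database: object id -> labeled attribute values (hypothetical)
db = {
    1: {"gait": "a", "voice": "x"},
    2: {"gait": "a", "voice": "y"},
    3: {"gait": "b", "voice": "x"},
}

def identity_entropy(candidates):
    """Identity entropy under a uniform prior: H = log2(|candidates|).
    Tracing succeeds when H reaches 0 (a single candidate left)."""
    return math.log2(len(candidates))

def filter_by(candidates, attr, value):
    """Keep only candidates consistent with one observed attribute."""
    return {i for i in candidates if db[i][attr] == value}

cands = set(db)                           # H = log2(3) ~ 1.58 bits
cands = filter_by(cands, "gait", "a")     # {1, 2}: H = 1 bit
cands = filter_by(cands, "voice", "x")    # {1}:    H = 0, identity traced
```

A core identification set, in this toy reading, would be a smallest set of attributes whose observations drive the entropy to zero.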
Out-of-distribution detection is critical to the reliable application of deep neural networks. To reduce model uncertainty, we propose a simple yet effective approach of ensemble knowledge distillation. We blend ensemble modeling and knowledge distillation to improve model generalization on in-domain recognition, thereby yielding accurate and robust out-of-distribution detection. The former effectively expands the data recognition feature space, while the latter further regularizes the model through knowledge distillation, enhancing in-domain feature recognition. This integration yields lower model uncertainty on in-domain feature recognition and improves anomaly detection in a more scalable manner. We justify our approach through extensive experiments on various benchmarks, demonstrating its significant improvement in out-of-distribution detection. We validate our approach with a variety of up-to-date DNNs, such as the Vision Transformer.
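The blend of ensembling and distillation can be sketched by forming the student's soft target as the average of the teachers' temperature-softened distributions. This is a generic ensemble-distillation target, assuming nothing about the paper's architecture; the temperature value and function names are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax (numerically stabilized)."""
    z = z / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def ensemble_distillation_target(teacher_logits, T=2.0):
    """Average the teachers' softened distributions into one soft target;
    a student trained toward it inherits the ensemble's smoother
    in-domain predictions (lower uncertainty)."""
    probs = np.stack([softmax(l, T) for l in teacher_logits])
    return probs.mean(axis=0)

target = ensemble_distillation_target(
    [np.array([2.0, 0.0]), np.array([0.0, 2.0])], T=2.0
)
# two symmetric, disagreeing teachers -> uniform soft target [0.5, 0.5]
```

The student would then minimize a KL divergence between its own softened output and this target, alongside the usual cross-entropy on ground-truth labels.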
ISBN (digital): 9798350368536
ISBN (print): 9798350368543
Mobile Edge Computing (MEC) is an emerging computing paradigm that offloads cloud center functions to edge servers. In a MEC environment, the limited storage and processing capacity of edge servers require selective service caching, in which only part of the required content can be placed directly on the destination edge server, with the remainder kept at the remote cloud. A primary challenge in this context is designing an effective and responsive service caching algorithm that improves the Quality of Service (QoS) perceived by users while reducing operational costs. This study adopts an M/G/1 queuing model as the foundational framework and casts the service caching problem as an adversarial semi-bandit problem. We propose a delay-aware Genetic-Follow-the-Regularized-Leader (GFRL) algorithm capable of guiding decentralized caching decisions. Experimental results indicate that GFRL outperforms traditional methods across various performance metrics.
ISBN (digital): 9798331534318
ISBN (print): 9798331534325
Gateway placement optimization (GPO) is a critical issue in satellite networks: it seeks an optimal scheme for deploying gateways across a network to achieve high-performance satellite-ground communication. Existing studies usually treat GPO as a single-objective optimization problem, which deviates significantly from real-world scenarios and limits practical applicability. In fact, many objectives, such as gateway traffic load balancing, the distance between gateways, the traffic load of satellites, and the number of gateways within each satellite's management range, should be considered in the GPO problem, indicating that it is inherently a many-objective optimization problem (MaOP). Therefore, in this paper, a new mathematical model is designed that formulates GPO as an MaOP. To address it, the paper further proposes a bi-velocity coevolutionary multiswarm particle swarm optimization (BCMPSO) algorithm. BCMPSO follows the multiple-populations-for-multiple-objectives framework, running different populations in parallel, each optimizing a specific objective so as to search different parts of the Pareto front sufficiently. Meanwhile, a binary PSO with a bi-velocity update mechanism and a discrete position update mechanism is proposed as the optimizer in each population. Comparison experiments with several state-of-the-art methods confirm the effectiveness and competitiveness of BCMPSO for many-objective GPO.
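The flavor of a bi-velocity binary position update can be sketched as follows: each bit carries one "pull toward 1" and one "pull toward 0", driven by whether the personal and global bests hold that value, and the bit moves to the value with the larger pull. This is a simplified, deterministic illustration of the idea, not BCMPSO's exact operators; the coefficients and tie-keeping rule are assumptions.

```python
def bivelocity_update(x, pbest, gbest, c1=0.6, c2=0.6):
    """One bi-velocity-style update of a binary position vector x,
    guided by the personal best (pbest) and global best (gbest)."""
    new_x = []
    for xi, pi, gi in zip(x, pbest, gbest):
        v1 = c1 * (pi == 1) + c2 * (gi == 1)   # pull toward bit = 1
        v0 = c1 * (pi == 0) + c2 * (gi == 0)   # pull toward bit = 0
        if v1 > v0:
            new_x.append(1)
        elif v0 > v1:
            new_x.append(0)
        else:
            new_x.append(xi)                   # equal pull: keep current bit
    return new_x

x = bivelocity_update([0, 0, 1], pbest=[1, 0, 1], gbest=[1, 1, 0])
# bit 0: both bests say 1 -> 1; bits 1 and 2: pulls tie -> bits kept
```

In a GPO encoding, each bit could mark whether a candidate site hosts a gateway, so this update moves a placement toward the better-performing placements found so far.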