Effective management of multi-intersection traffic signal control (MTSC) is vital for intelligent transportation systems. Multi-agent reinforcement learning (MARL) has shown promise in achieving MTSC. However, existin...
We show that reals z which compute complete extensions of arithmetic have the random join property: for each random x ≤T z there exists random y ≤T z such that z ≡T x ⊕ y. The same is true for the truth-table and the wea...
We show that degrees containing complete extensions of arithmetic have the random join property: they are the supremum of any random real they compute, with another random real. The same is true for the truth-table ...
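The random join property asserted above can be written out as a formula. This is our reading of the abstract's statement; ⊕ denotes the effective join of two reals and ≤T Turing reducibility:

```latex
% Random join property for a real z (sketch, as read from the abstract):
% every random real computed by z can be joined with another
% random real computed by z to recover the degree of z.
\forall x\,\bigl(x \text{ random} \;\wedge\; x \le_T z\bigr)\;
\exists y\,\bigl(y \text{ random} \;\wedge\; y \le_T z\bigr):\quad
z \equiv_T x \oplus y
```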
We study effective randomness-preserving transformations of path-incompressible trees. There exists a path-incompressible tree with infinitely many paths, which does not compute any perfect pathwise-random tree. Spars...
Network Function Virtualization (NFV) has become an essential technology for improving the scalability and flexibility of modern computer networks. The performance gap has become the main issue impeding the development of NFV. GPUs, with their massive parallel processors, are advocated to accelerate Virtualized Network Functions (VNFs). However, the special architecture and workflow of GPUs introduce new challenges, especially in batched processing and resource allocation. In this paper, we propose the GPU-based NFV Acceleration framework (GNFA) with an efficient packet batching and resource allocation solution. Considering the increased latency caused by the accumulation of GPU kernel invocation overhead, we first introduce a latency reduction mechanism called SM Performance Compensation (SPC). We also propose a Partition and Adjustment based Batching and Resource Allocation (PABARA) algorithm that jointly considers batch size tuning and GPU thread allocation. We have implemented GNFA and extensively evaluated its performance on several well-known VNFs. The experimental results show that GNFA can effectively improve GPU resource utilization and NFV performance in terms of per-packet latency.
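The batch-size trade-off that PABARA navigates can be sketched numerically: larger batches amortize the fixed kernel-launch overhead, but packets wait longer to accumulate. All constants below are illustrative placeholders, not GNFA's actual model or measurements:

```python
# Hypothetical latency model for GPU packet batching. Larger batches
# amortize the fixed kernel-launch overhead across more packets, but
# each packet waits longer for the batch to fill. Constants are invented
# for illustration only.

LAUNCH_OVERHEAD_US = 10.0   # fixed cost to invoke a GPU kernel (assumed)
PER_PACKET_US = 0.5         # GPU processing time per packet (assumed)
ARRIVAL_GAP_US = 1.0        # mean inter-arrival gap of packets (assumed)

def per_packet_latency(batch_size: int) -> float:
    """Average per-packet latency for a given batch size."""
    wait = (batch_size - 1) * ARRIVAL_GAP_US / 2      # mean batching delay
    compute = LAUNCH_OVERHEAD_US / batch_size + PER_PACKET_US
    return wait + compute

def best_batch(max_batch: int = 256) -> int:
    """Exhaustively pick the batch size minimizing average latency."""
    return min(range(1, max_batch + 1), key=per_packet_latency)

if __name__ == "__main__":
    b = best_batch()
    print(b, round(per_packet_latency(b), 2))
```

Under these toy numbers the optimum sits at a small batch size; a real system like GNFA must additionally adapt the model online as arrival rates and GPU thread allocations change.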
Diffusion models have recently received a surge of interest due to their impressive performance for image restoration, especially in terms of noise robustness. However, existing diffusion-based methods are trained on a large amount of training data and perform very well in-distribution, but can be quite susceptible to distribution shift. This is especially problematic for data-starved hyperspectral image (HSI) restoration. To tackle this problem, this work puts forth a self-supervised diffusion model for HSI restoration, namely the Denoising Diffusion Spatio-Spectral Model (DDS2M), which works by inferring the parameters of the proposed Variational Spatio-Spectral Module (VS2M) during the reverse diffusion process, solely using the degraded HSI without any extra training data. In VS2M, a variational inference-based loss function is customized to enable the untrained spatial and spectral networks to learn the posterior distribution, which serves as the transitions of the sampling chain to help reverse the diffusion process. Benefiting from its self-supervised nature and the diffusion process, DDS2M enjoys stronger generalization ability to various HSIs compared to existing diffusion-based methods and superior robustness to noise compared to existing HSI restoration methods. Extensive experiments on HSI denoising, noisy HSI completion and super-resolution on a variety of HSIs demonstrate DDS2M’s superiority over existing task-specific state-of-the-art methods. Code is available at: https://***/miaoyuchun/DDS2M.
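The sampling chain that VS2M plugs into is the standard reverse-diffusion recursion. The sketch below shows a generic DDPM-style reverse step on a toy HSI cube; the constant noise schedule and the trivial noise predictor are placeholders, not the paper's learned spatio-spectral networks:

```python
import numpy as np

# Generic DDPM reverse-diffusion step (illustrative). In DDS2M the
# predict_noise role is played by the untrained spatial/spectral
# networks of VS2M, fitted to the single degraded HSI; here it is a
# placeholder returning zeros.

T = 50
betas = np.linspace(1e-4, 0.02, T)   # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x_t, t):
    """Placeholder for the learned spatio-spectral noise estimate."""
    return np.zeros_like(x_t)

def reverse_step(x_t, t, rng):
    """One step x_t -> x_{t-1} of the reverse sampling chain."""
    eps = predict_noise(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))   # toy HSI cube: height x width x bands
for t in reversed(range(T)):
    x = reverse_step(x, t, rng)
print(x.shape)
```

The self-supervised twist in DDS2M is that the noise predictor's parameters are inferred from the degraded observation itself during this loop, rather than pre-trained on external data.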
In the current era of information overload, service recommendations have emerged as a valuable tool for enhancing the user experience. Among them, social recommendation models have shown promising results by incorporating social relationships to improve representation learning. However, most of these models lack fine-grained modeling of social user behavior, leading to a unified representation of users and a loss of expressiveness in user representations. To address this issue, we propose DisenHGCF, a new social recommendation approach based on disentangled hypergraph collaborative filtering, which disentangles the representations of users and items at the granularity of social users’ intents. This approach aims to provide a more nuanced understanding of the user, leading to more accurate and personalized recommendations. Specifically, DisenHGCF leverages hypergraphs to represent the complex relationships among users, friends, and items. Using an attention-based hypergraph disentangling module, it disentangles each user’s intents and generates user representations that incorporate those intents for recommendation tasks. Additionally, a contrastive learning task based on intent-weight perturbation is designed to enhance representation learning. The experimental results obtained from the BeiBei and Beidian datasets demonstrate the superiority of our proposed approach over previous baseline methods, as evidenced by higher Recall and NDCG scores.
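The core idea of attention-based intent disentanglement can be sketched in a few lines: a user embedding is softly assigned to K latent "intent" prototypes, and the intent-specific components are recombined into the final representation. The prototype matrix, dimensions, and scaling are illustrative assumptions, not DisenHGCF's actual parameters:

```python
import numpy as np

# Sketch of intent disentanglement via attention over K intent
# prototypes. All tensors are randomly initialized stand-ins for the
# learned embeddings of an actual model.

rng = np.random.default_rng(0)
D, K = 16, 4                               # embedding dim, number of intents
user = rng.standard_normal(D)              # a single user embedding
prototypes = rng.standard_normal((K, D))   # one learnable vector per intent

def softmax(z):
    z = z - z.max()                        # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Attention weights: how strongly the user expresses each intent.
weights = softmax(prototypes @ user / np.sqrt(D))

# Disentangled parts: one intent-specific component per intent,
# recombined into the final intent-aware user representation.
parts = weights[:, None] * prototypes      # shape (K, D)
disentangled_user = parts.sum(axis=0)      # shape (D,)

print(weights.round(3), disentangled_user.shape)
```

The intent-weight perturbation used for the contrastive task would correspond to jittering `weights` and contrasting the resulting representations, though the exact perturbation scheme is the paper's.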
Active automata learning in the framework of Angluin's L∗ algorithm has been applied to learning many kinds of automata models. In applications to timed models such as timed automata, the main challenge is to dete...
Barrier certificates, serving as differential invariants that witness system safety, play a crucial role in the verification of cyber-physical systems (CPS). Prevailing computational methods for synthesizing barrier c...
Hadoop Yarn is an open-source cluster manager responsible for resource management and job scheduling. However, data-driven applications are typically organized into workflows that consist of a series of jobs with dependencies. Yarn does not manage users' workflows and only considers the current job rather than the entire workflow when scheduling. In practice, multiple workflows share the same Yarn cluster and are pre-assigned separate Yarn resource queues to avoid mutual interference. However, this coarse-grained resource division can sometimes result in low resource utilization and increased pending time of jobs in the Yarn queue. For instance, one resource queue may have exhausted its quota while still having pending jobs, while other queues may have available resources but cannot begin executing any jobs due to unfulfilled data dependencies. To address this problem, we propose a deep reinforcement learning (DRL)-based workflow scheduling scheme that takes into account job dependencies, job priorities, and dynamic resource usage. The proposed approach can intelligently identify and utilize free windows of different resource queues. Our simulation results demonstrate that the proposed DRL-based workflow scheduling scheme can significantly reduce the average job latency compared to existing approaches.
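The scheduling decision the DRL agent learns can be illustrated with a greedy baseline: among jobs whose dependencies are satisfied, place the highest-priority one into a queue that still has free capacity (a "free window"). Job names, priorities, and queue sizes below are invented for illustration and do not come from the paper:

```python
# Toy model of dependency-aware queue scheduling. A job is runnable
# only when all of its dependencies have finished; among runnable jobs,
# the highest-priority one is placed into the first queue with a free slot.

queues = {"etl": 2, "analytics": 1}        # free slots per Yarn queue (assumed)
finished = {"ingest"}                      # jobs already completed

# (job name, priority, set of dependencies) -- all hypothetical
pending = [
    ("train", 5, {"ingest", "clean"}),     # blocked: "clean" not done yet
    ("clean", 3, {"ingest"}),              # runnable
    ("report", 1, set()),                  # runnable
]

def schedule_one(queues, pending, finished):
    """Pick the highest-priority runnable job and a queue with capacity."""
    runnable = [j for j in pending if j[2] <= finished]
    if not runnable:
        return None
    job = max(runnable, key=lambda j: j[1])
    for q, free in queues.items():
        if free > 0:
            return (job[0], q)
    return None

print(schedule_one(queues, pending, finished))
```

A DRL agent replaces the fixed greedy rule with a learned policy over a richer state (queue loads, workflow structure, dynamic resource usage), which is what lets it exploit free windows that a static queue assignment would waste.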