In the era of 6G networks, the demand for robust and trustworthy communication systems has become more critical than ever, particularly in MIMO-enabled flying ad hoc networks (FANETs), also known as low-altitude netwo...
Semantic Communication (SemCom) plays a pivotal role in 6G networks, offering a viable solution for future efficient communication. Through nonlinear mapping of semantic representations, Deep Learning (DL)-based seman...
In next-generation Internet services, such as the Metaverse, the mixed reality (MR) technique plays a vital role. Yet the limited computing capacity of the user-side MR headset-mounted device (HMD) prevents its further application, especially in computation-intensive scenarios. One way out of this dilemma is to design an efficient information sharing scheme among users to replace the heavy and repetitive computation. In this paper, we propose a free-space information sharing mechanism based on full-duplex device-to-device (D2D) semantic communications. Specifically, the view images of MR users in the same real-world scenario may be analogous. Therefore, when one user (i.e., a device) completes some computation tasks, that user can send its own computation results, together with the semantic features extracted from its own view image, to nearby users (i.e., other devices). On this basis, other users can use the received semantic features to spatially match the computation results to their own view images without repeating the computation. Using generalized small-scale fading models, we analyze the key performance indicators of full-duplex D2D communications, including channel capacity and bit error probability, which directly affect the transmission of semantic information. Finally, numerical analysis demonstrates the effectiveness of the proposed methods.
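To make the performance analysis concrete, the following minimal sketch estimates the two indicators named above, ergodic channel capacity and average bit error probability, under a Nakagami-m small-scale fading model (a common generalized fading family). The fading parameter m, the BPSK modulation, and the Monte Carlo approach are illustrative assumptions; the residual self-interference of the full-duplex link and the semantic encoder itself are not modeled here.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)

def nakagami_power_gain(m, n_samples):
    # Squared Nakagami-m envelope |h|^2 follows Gamma(m, 1/m), i.e. unit mean power.
    return rng.gamma(shape=m, scale=1.0 / m, size=n_samples)

def ergodic_capacity(snr_db, m=2.0, n_samples=200_000):
    # Monte Carlo estimate of E[log2(1 + SNR * |h|^2)] in bit/s/Hz.
    snr = 10.0 ** (snr_db / 10.0)
    g = nakagami_power_gain(m, n_samples)
    return float(np.mean(np.log2(1.0 + snr * g)))

def avg_bit_error_prob(snr_db, m=2.0, n_samples=200_000):
    # Average BEP of coherent BPSK: E[Q(sqrt(2*SNR*|h|^2))] = E[0.5 * erfc(sqrt(SNR*|h|^2))].
    snr = 10.0 ** (snr_db / 10.0)
    g = nakagami_power_gain(m, n_samples)
    return float(np.mean(0.5 * erfc(np.sqrt(snr * g))))

for snr_db in (0, 10, 20):
    print(f"SNR {snr_db:2d} dB: C ~ {ergodic_capacity(snr_db):.2f} bit/s/Hz, "
          f"BEP ~ {avg_bit_error_prob(snr_db):.2e}")
```

Setting m = 1 reduces the model to Rayleigh fading, which is one way such a sketch can be checked against textbook closed-form results.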
The metaverse is defined as a three-dimensional virtual-real fusion network focused on social connection. Edge computing can empower the metaverse by providing computing resources to realize real-time motion tracking, visual or auditory rendering, and dynamic virtual scene changes at the edge of networks. Users enjoy the immersive experience provided by the Metaverse Service Provider (MSP) and interact through digital avatars in the virtual world by requesting metaverse services. In this paper, we formulate the edge computing resource pricing, metaverse service rendering bitrate pricing, and bitrate request problem in the metaverse. In particular, we model the interactions among edge servers, the MSP, and Metaverse Service Users (MSUs) as a hierarchical three-stage Stackelberg game. The problem of joining edge-enabled metaverse services is also studied with consideration of network externality, including the network effect and the congestion effect. Based on the backward induction method, we derive the optimal computing resource prices of the edge servers, the rendering bitrate price of the MSP, and the rendering bitrate requests of the MSUs. We also prove the existence and uniqueness of the Stackelberg game equilibrium. Simulation results show that the network effect brings higher utilities to the entities when the social tie coefficient among MSUs is higher. Besides, all entities in the edge-enabled metaverse are affected negatively by the congestion effect.
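As a sanity check on the backward-induction logic, the sketch below solves a deliberately reduced version of the pricing game: a single MSP choosing a rendering bitrate price and a single MSU choosing a bitrate, with a logarithmic utility. The utility form, the parameters a and c, and the omission of the edge-server stage, network effect, and congestion effect are simplifying assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical single-MSP, single-MSU two-stage game (the paper studies a
# three-stage game with edge servers, network effects, and congestion).
a = 4.0   # MSU's valuation of rendering bitrate (assumed)
c = 0.5   # MSP's unit cost per bitrate (assumed)

def msu_best_response(p):
    # Stage 2: MSU maximizes a*ln(1 + b) - p*b  =>  b* = max(a/p - 1, 0).
    return max(a / p - 1.0, 0.0)

def msp_profit(p):
    # Stage 1: MSP anticipates the MSU's best response (backward induction).
    return (p - c) * msu_best_response(p)

prices = np.linspace(0.1, a, 2000)
p_star = prices[np.argmax([msp_profit(p) for p in prices])]
print("numerical p* ~", round(p_star, 3), "; closed form sqrt(a*c) =", round(np.sqrt(a * c), 3))
print("equilibrium bitrate b* =", round(msu_best_response(p_star), 3))
```

The closed-form optimum p* = sqrt(a*c) follows from setting the derivative of (p - c)(a/p - 1) to zero, mirroring how the full game is solved backward from the MSUs' best responses.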
In response to the needs of 6G global communications, satellite communication networks have emerged as a key solution. However, the large-scale development of satellite communication networks is constrained by the com...
Web 3.0 is regarded as a revolutionary paradigm that enables users to securely manage data without a centralized authority. Blockchains, which enable data to be managed in a decentralized and transparent manner, are k...
This article presents a 17.7-20.2 GHz eight-element four-beam RF-beamforming transmitter in 65-nm CMOS for satellite communication (SATCOM). The transmitter utilizes an analog scheme in the variable-gain amplifier (VG...
Deoxyribonucleic acid (DNA) has become an ideal medium for long-term storage and retrieval due to its extremely high storage density and long-term stability. But access efficiency is an existing bottleneck in DNA stor...
ISBN (digital): 9798350361261
ISBN (print): 9798350361278
Optimizing various wireless user tasks poses a significant challenge for networking systems because of the expanding range of user requirements. Despite advancements in Deep Reinforcement Learning (DRL), the need for customized optimization tasks for individual users complicates the development and deployment of numerous DRL models, which incurs substantial computation resource and energy consumption and can lead to inconsistent outcomes. To address this issue, we propose a novel approach utilizing a Mixture of Experts (MoE) framework, augmented with Large Language Models (LLMs), to analyze user objectives and constraints effectively, select specialized DRL experts, and weigh each decision from the participating experts. Specifically, we develop a gate network to oversee the expert models, allowing a collective of experts to tackle a wide array of new tasks. Furthermore, we innovatively substitute the traditional gate network with an LLM, leveraging its advanced reasoning capabilities to manage expert model selection for joint decisions. Our proposed method reduces the need to train new DRL models for each unique optimization problem, decreasing energy consumption and AI model implementation costs. The LLM-enabled MoE approach is validated through a general maze navigation task and a specific network service provider utility maximization task, demonstrating its effectiveness and practical applicability in optimizing complex networking systems.
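The gating idea can be pictured with the toy sketch below, in which a few stand-in "experts" (random linear policies in place of trained DRL models) score the available actions and a gate mixes their scores. The keyword-based gate, the expert names, and the linear policies are placeholders invented for illustration only; in the proposed approach the gate is an LLM reasoning over the user's objectives and constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in experts: random linear policies mapping a state to per-action scores.
N_ACTIONS, STATE_DIM = 4, 8
experts = {
    "latency": rng.normal(size=(STATE_DIM, N_ACTIONS)),
    "energy": rng.normal(size=(STATE_DIM, N_ACTIONS)),
    "throughput": rng.normal(size=(STATE_DIM, N_ACTIONS)),
}

def gate_weights(task_description):
    # Stand-in for the LLM gate: weight experts by simple keyword relevance.
    scores = np.array([1.0 if name in task_description else 0.1 for name in experts])
    return scores / scores.sum()

def moe_decision(state, task_description):
    # Weighted combination of per-expert action scores, then greedy action choice.
    w = gate_weights(task_description)
    combined = sum(wi * (state @ W) for wi, W in zip(w, experts.values()))
    return int(np.argmax(combined))

state = rng.normal(size=STATE_DIM)
print(moe_decision(state, "minimize energy consumption under a latency budget"))
```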
Noisy partial label learning (noisy PLL) is an important branch of weakly supervised learning. Unlike PLL, where the ground-truth label must be concealed in the candidate label set, noisy PLL relaxes this constraint and allows the ground-truth label to fall outside the candidate label set. To address this challenging problem, most existing works attempt to detect noisy samples and estimate the ground-truth label for each noisy sample. However, detection errors are unavoidable. These errors can accumulate during training and continuously affect model optimization. To this end, we propose a novel framework for noisy PLL with theoretical interpretations, called "Adjusting Label Importance Mechanism (ALIM)". It aims to reduce the negative impact of detection errors by trading off the initial candidate set and model outputs. ALIM is a plug-in strategy that can be integrated with existing PLL approaches. Experimental results on multiple benchmark datasets demonstrate that our method achieves state-of-the-art performance on noisy PLL. Our code is available at: https://***/zeroQiaoba/ALIM.
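The trade-off described above can be pictured with the short sketch below, which mixes the initial candidate-label mask with the model's current prediction and renormalizes, so a possibly wrong candidate set does not fully dominate the supervision. The mixing rule and the coefficient lam are illustrative assumptions rather than the authors' exact ALIM update; see the linked repository for the actual implementation.

```python
import numpy as np

def adjusted_target(candidate_mask, model_prob, lam=0.4):
    # candidate_mask: 0/1 vector over classes; model_prob: softmax output.
    # Mix the candidate set with the prediction, then renormalize to a distribution.
    mixed = candidate_mask + lam * model_prob
    return mixed / mixed.sum()

candidate_mask = np.array([1.0, 1.0, 0.0, 0.0])   # candidate set {0, 1}
model_prob = np.array([0.05, 0.10, 0.80, 0.05])   # model suspects class 2 (a noisy sample)
print(adjusted_target(candidate_mask, model_prob).round(3))
```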