This work studies the application of nonorthogonal transmission in beamforming (BF) based forward links for next-generation satellite communication (SATCOM) with multiple gateways. With the aim of enhancing the throughput of BF SATCOM systems, the state-of-the-art nonorthogonal multiple access (NOMA) technique is exploited by serving multiple users per beam in the same time slot. In this regard, the feeder link limitations and multibeam satellite payload constraints must be considered for BF design and power allocation (PA) optimization in nonorthogonal SATCOM. To address these challenges, distributed resource optimization strategies are investigated for BF and flexible payload power resource allocation in multigateway (multi-GW) nonorthogonal SATCOM systems. Specifically, a per-feed available-power-constrained BF strategy based on maximizing the worst-user signal-to-leakage-and-noise ratio (SLNR) is explored with local channel state information (CSI) for distributed operation of the GWs. As an upper performance bound, a centralized multilayer BF strategy is processed in a central unit with full global CSI and data sharing. After the BF direction optimization, a weighted sum-rate maximization (WSRM) based power resource optimization strategy is applied locally at each GW to use the power resources efficiently for a higher performance gain. The nonconvex WSRM problem, under the constraints of the practical satellite payload power budget, successful successive interference cancellation (SIC) decoding, and a minimum data rate, is recast into an equivalent weighted sum-MSE minimization (WMMSE) counterpart for a tractable solution. Finally, an efficient user scheduling scheme is designed to enable the operator to capture a substantial system-throughput gain. Simulations are conducted with realistic coverage areas (footprints), random user distributions, and interference derived from the users' geographical locations. The results over a realistic s...
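The abstract describes the SLNR-based BF design only at a high level. As a rough, non-authoritative illustration of the building block it names, the following NumPy sketch computes per-user SLNR-maximizing beam directions with a crude uniform per-feed power scaling; the channel layout, the closed-form inverse-covariance direction, and the joint scaling step are assumptions, not the paper's algorithm.

    import numpy as np

    def slnr_beam_directions(H, noise_var, per_feed_power):
        # H: (K, N) complex matrix; H[k] @ w is user k's received amplitude
        # for a beamforming vector w over the N feed elements.
        K, N = H.shape
        W = np.zeros((N, K), dtype=complex)
        for k in range(K):
            others = np.delete(H, k, axis=0)
            # Leakage-plus-noise covariance seen by beam k.
            C = noise_var * np.eye(N) + others.conj().T @ others
            # Closed-form SLNR-maximizing direction for a single desired user.
            w = np.linalg.solve(C, H[k].conj())
            W[:, k] = w / np.linalg.norm(w)
        # Uniform scaling so that no feed exceeds its power budget
        # (the paper optimizes the power allocation separately).
        feed_power = np.sum(np.abs(W) ** 2, axis=1)
        return W * np.sqrt(per_feed_power / feed_power.max())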
With the full development of intelligent mobile communications, wireless mixed reality (MR) provides a more visually immersive experience and stronger interaction with the environment than virtual reality (VR) and augmented reality (AR). However, the asymmetric characteristic of wireless MR traffic poses a major challenge to current mobile networks. Dynamic time division duplex (D-TDD) is considered a promising technology for improving wireless MR users' quality of experience (QoE) due to its advantages in delivering asymmetric traffic. Therefore, in this paper, we propose a QoE-driven distributed multidimensional resource allocation (MRA) scheme, supplemented by inter-cell interference (ICI) mitigation, for wireless MR in multi-cell D-TDD systems. First, to improve the QoE of MR users, we formulate the joint optimization of subframe configuration, channel assignment, and computation offloading as a mixed-integer nonlinear programming problem. A novel fully decentralized multi-agent deep Q-network (DQN) algorithm is developed to solve the problem. Then, to mitigate ICI, a water-filling based power control algorithm is investigated to minimize the total power of each small base station and its associated MR users. Simulation results demonstrate that our proposed scheme improves the QoE of MR users in a realizable way compared to existing schemes.
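The power control step is not detailed in the abstract; as a hedged illustration of the generic water-filling building block it refers to (the bisection on the water level and the per-channel gain/noise inputs are the textbook formulation, not the paper's exact minimum-power problem), a short sketch is:

    import numpy as np

    def water_filling(gains, noise, total_power, tol=1e-9):
        # Textbook water-filling over parallel channels: allocate
        # p_i = max(0, mu - noise_i / gain_i) with sum(p) = total_power.
        floor = np.asarray(noise, dtype=float) / np.asarray(gains, dtype=float)
        lo, hi = floor.min(), floor.max() + total_power
        while hi - lo > tol:                     # bisection on the water level mu
            mu = 0.5 * (lo + hi)
            if np.maximum(mu - floor, 0.0).sum() > total_power:
                hi = mu
            else:
                lo = mu
        return np.maximum(lo - floor, 0.0)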
ISBN (Print): 9798350351606; 9798350351590
This paper explores integrating blockchain technology into multi-agent systems (MAS) to enhance distributed node resource optimization. Key challenges addressed include task decision-making, task allocation, and resource scheduling, with a focus on minimizing energy consumption and latency. Blockchain ensures secure, efficient coordination among nodes, mitigating issues such as data privacy leaks and system failures. The study also leverages federated learning for secure, decentralized machine-learning model training. Simulation results demonstrate the enhanced performance, security, and scalability of MAS with blockchain, paving the way for more efficient distributed computing environments.
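The federated learning component is only named, not specified; the standard FedAvg aggregation step that such training typically builds on can be sketched as follows (the per-client sample-size weighting is an assumption, not a detail taken from the paper):

    import numpy as np

    def fed_avg(client_weights, client_sizes):
        # client_weights: list of per-client parameter lists (NumPy arrays);
        # client_sizes: number of local samples at each client.
        total = float(sum(client_sizes))
        n_layers = len(client_weights[0])
        return [
            sum(w[layer] * (n / total)
                for w, n in zip(client_weights, client_sizes))
            for layer in range(n_layers)
        ]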
Although Deep Neural Networks (DNN) have become the backbone technology of several Internet of Things (IoT) applications, their execution on resource-constrained devices remains challenging. To cater for these challenges, collaborative deep inference conducted by IoT devices was introduced. However, the prevalence of DNN computation suffers from severe privacy problems, e.g., data reversal and model leakage. In particular, malicious participants can accurately recover the received data to access sensitive information. Furthermore, the system is composed of heterogeneous data sources represented by different DNN models that wish to execute classifications without exposing their data and models. However, relaying the trained models to a centralized unit managing the collaboration leads to major risks, because some features can be revealed through these models, in addition to dependency and scalability problems. In this paper, we present an approach that targets the privacy of collaborative inference by controlling the amount of data assigned to different participants, to prevent them from attempting data reversal. Moreover, each independent data source requesting inference is responsible for managing the distribution of its DNN locally. In this context, different sources are required to compete over the pervasive resources while cooperating to maintain privacy welfare. We formulate this methodology as an integer programming problem, where we establish a tradeoff between the latency of co-inference and the privacy required by heterogeneous entities. A distributed solution scheme is also developed based on the Lagrangian dual problem. Next, to relax the optimization, we shape our approach as a cooperative and competitive Multi-Agent Reinforcement Learning (MARL) framework that supports heterogeneous/independent agents. Our comprehensive simulations demonstrate that our method yields results on par with those of a single RL agent in terms of action performance, while maintaining t...
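Neither this abstract nor the mesh networking one below spells out its Lagrangian-dual-based distributed scheme. As a generic, non-authoritative skeleton of dual decomposition via projected subgradient ascent, the following sketch may help; the two function handles are placeholders standing in for whatever local subproblems and coupling constraints the papers actually define.

    import numpy as np

    def dual_subgradient(solve_lagrangian, constraint_gap, lam0, step=0.1, iters=200):
        # solve_lagrangian(lam) -> primal decision x minimizing L(x, lam),
        #                          solvable locally/per agent once lam is fixed.
        # constraint_gap(x)     -> vector g(x) of coupling-constraint violations.
        lam = np.array(lam0, dtype=float)
        for t in range(1, iters + 1):
            x = solve_lagrangian(lam)
            g = constraint_gap(x)
            # Projected subgradient ascent on the dual with a diminishing step.
            lam = np.maximum(lam + (step / np.sqrt(t)) * g, 0.0)
        return lam, x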
The increase of Internet of Things devices and the rise of more computationally intense applications present challenges for future Internet of Things architectures. We envision a future in which edge, fog, and cloud devices work together to execute future applications. Because the entire application cannot run on smaller edge or fog devices, we will need to split the application into smaller application components. These application components will send event messages to each other to create a single application from multiple application components. The execution location of the application components can be optimized to minimize resource consumption. In this paper, we describe the distributed Uniform Stream (DUST) framework, which creates an abstraction between the application components and the middleware that is required to make the execution location transparent to the application component. We describe a real-world application that uses the DUST framework for platform transparency. In addition to the DUST framework, we also describe the distributed DUST Coordinator, which will optimize resource consumption by moving the application components to a different execution location. The coordinators will use an adapted version of the Contract Net Protocol to find local minima in resource consumption.
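The adapted Contract Net Protocol itself is not specified in the abstract; a minimal sketch of a plain announce/bid/award round, with hypothetical bid() and accept() methods on the coordinator objects, might look like this:

    def contract_net_round(component, coordinators, current_cost):
        # Announce the application component, collect resource-cost bids,
        # and award it to the cheapest bidder if that improves on the
        # current execution location; otherwise keep it where it is.
        bids = [(c.bid(component), c) for c in coordinators]
        best_cost, best_coord = min(bids, key=lambda b: b[0])
        if best_cost < current_cost:
            best_coord.accept(component)
            return best_coord
        return None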
This paper studies the joint design of routing and resource allocation algorithms in cognitive-radio-based wireless mesh networks. The mesh nodes utilize the cognitive overlay mode to share the spectrum with primary users. Prior to each transmission, mesh nodes sense the wireless medium to identify available spectrum resources. Depending on the primary user activities and traffic characteristics, the available spectrum resources will vary between mesh transmission attempts, posing a challenge that the routing and resource allocation algorithms have to deal with to guarantee timely delivery of the network traffic. To capture the channel availability dynamics, the system is analyzed from a queuing theory perspective, and the joint routing and resource allocation problem is formulated as a non-linear integer programming problem. The objective is to minimize the aggregate end-to-end delay of all the network flows. A distributed solution scheme is developed based on the Lagrangian dual problem. Numerical results demonstrate the convergence of the distributed solution procedure to the optimal solution, as well as the performance gains compared to other design methods. It is shown that the joint design scheme can accommodate double the traffic load, or achieve half the delay, compared to the disjoint methods.
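The abstract names a queueing-theoretic delay objective without giving the model; as a hedged illustration only, the aggregate end-to-end delay under the common Kleinrock independence approximation (each link treated as an independent M/M/1 queue, which may differ from the paper's exact model) can be computed as:

    def aggregate_delay(paths, link_capacity, link_load):
        # paths: dict flow -> list of link ids on its route;
        # link_capacity / link_load: service and arrival rates per link.
        total = 0.0
        for flow, links in paths.items():
            for l in links:
                if link_load[l] >= link_capacity[l]:
                    return float("inf")       # unstable queue, unbounded delay
                total += 1.0 / (link_capacity[l] - link_load[l])
        return total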
The increasing demand, coupled with the expanding installation of distributed resources, calls for the development of smart technologies to control and optimize distribution system operations. In this paper, a distributed generation and storage optimization algorithm is proposed that uses distribution locational marginal pricing (DLMP) as the pricing signal. This signal is used to optimize the day-ahead operation planning of distributed generation and energy storage. A distribution-level state estimation algorithm is also designed. The main conclusion is that the proposed optimal control and state estimation improve the energy efficiency and economic benefits of a digitally controlled distribution power system.
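The abstract gives no model for the day-ahead planning; as a deliberately simplified, non-authoritative sketch of the storage piece alone, a price-taking arbitrage linear program against hourly DLMP values (the state-of-charge model, efficiency, and limits are assumptions) could be written as:

    import numpy as np
    from scipy.optimize import linprog

    def day_ahead_storage_lp(dlmp, energy_cap, power_cap, eta=0.95, soc0=0.0):
        # Maximize sum_t dlmp[t] * (discharge[t] - charge[t]) subject to
        # power limits and a cumulative state-of-charge model.
        T = len(dlmp)
        prices = np.asarray(dlmp, dtype=float)
        cost = np.concatenate([prices, -prices])      # x = [charge; discharge]
        L = np.tril(np.ones((T, T)))                  # cumulative-sum operator
        A_soc = np.hstack([eta * L, -L / eta])        # soc_t - soc0 per hour
        A_ub = np.vstack([A_soc, -A_soc])             # keep 0 <= soc_t <= energy_cap
        b_ub = np.concatenate([np.full(T, energy_cap - soc0), np.full(T, soc0)])
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0.0, power_cap))
        charge, discharge = res.x[:T], res.x[T:]
        return discharge - charge                     # positive = inject into the grid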