As a new communication paradigm, neural network-driven semantic communication (SemCom) has demonstrated considerable promise in enhancing resource efficiency by transmitting the semantics rather than all the bits of the source information. Using a large semantic coding model can accurately distil semantics and significantly reduce the required bandwidth, but it consumes a large amount of computing resources, which are also scarce in the network. In this paper, we investigate the joint allocation of computing resources and bandwidth for SemCom networks. We first introduce a computing latency model for SemCom and formulate the joint computing-resource and bandwidth allocation problem with the objective of maximizing semantic accuracy. We then cast this problem in a deep reinforcement learning framework and solve it with multi-agent proximal policy optimization (MAPPO). Numerical results show that, compared with two baselines, the proposed method significantly improves average semantic accuracy in resource-constrained cases.
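For concreteness, below is a minimal sketch of the clipped-surrogate policy update at the core of MAPPO, as one agent in such a resource-allocation setting might apply it; the function name, toy dimensions, and clipping value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def mappo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped PPO surrogate loss, shared by each agent in MAPPO.

    logp_new / logp_old: log-probabilities of the taken allocation
    actions under the new and behavior policies; advantages would come
    from a centralized critic that observes the global network state.
    """
    ratio = np.exp(logp_new - logp_old)
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    # Maximize the clipped surrogate -> minimize its negative mean.
    return -np.mean(np.minimum(ratio * advantages, clipped * advantages))

# Toy usage: a batch of 8 (bandwidth share, compute share) decisions.
rng = np.random.default_rng(0)
logp_old = rng.normal(size=8)
logp_new = logp_old + 0.05 * rng.normal(size=8)
adv = rng.normal(size=8)
print(mappo_clip_loss(logp_new, logp_old, adv))
```

In a MAPPO setup like the one the abstract describes, each base station or user agent would optimize this same loss over its own allocation actions while sharing a centralized value function during training.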
Wind power is one of the world's fastest-growing renewable energy resources. To achieve high levels of wind power penetration in the power supply, it will be necessary for wind power producers (WPPs) to contribute...
Digital twins have major potential to form a significant part of urban management in emergency planning, as they allow more efficient design of escape routes, better orientation in exceptional situations, and...
Organisms have evolved innate and acquired immune systems to defend against pathogens like coronaviruses. Similarly, power networks, threatened by cyber-attacks, desire cyber-immunity. Inspired by the immunology resea...
ISBN (digital): 9798350356199
ISBN (print): 9798350356205
This article proposes a Neuro Fuzzy Controller (NFC)-Adaptive Backstepping Controller (ABC)-Space Vector Modulation (SVM) approach for a five-level NPC inverter-fed Double Stator Interior Permanent Magnet Synchronous Motor (DSIPMSM). The paper's primary objective is to enhance the DSIPMSM's performance through robust control based on the proposed approach. Given its benefits, including high torque density, high efficiency, high power density, and minimal maintenance, the DSIPMSM has recently been the focus of much research. Using the input-output linearization approach and Lyapunov stability theory, asymptotically stable trajectory-following dynamics are demonstrated for the nonlinear adaptive backstepping control. Fuzzy logic and other intelligence-based controllers are increasingly used to enhance the performance of traditional control; however, the fuzzy logic controller has several shortcomings that the highly effective NFC addresses. Simulation results demonstrate the merit of the proposed method in terms of robustness, effective tracking dynamics, precision, and good disturbance rejection with little ripple.
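As a rough illustration of the adaptive backstepping idea for a motor speed loop (a generic one-mass PMSM-like model, not the paper's DSIPMSM design), the sketch below derives the q-axis current reference from a Lyapunov argument and adapts an estimate of the unknown load torque; all parameter values and gains are hypothetical, and the current loop is assumed ideal.

```python
import numpy as np

# Illustrative adaptive-backstepping speed loop for a PMSM-like model.
J, B, Kt = 0.01, 0.002, 0.8          # inertia, friction, torque constant
k, gamma = 50.0, 5.0                 # tracking gain, adaptation gain
dt, T_load = 1e-4, 1.5               # step size, unknown load torque

omega, TL_hat, omega_ref = 0.0, 0.0, 100.0
for _ in range(20000):               # simulate 2 s
    e = omega_ref - omega                       # speed tracking error
    # Virtual control: q-axis current that cancels the known dynamics
    # and the current load-torque estimate; with Lyapunov function
    # V = e^2/2 + (TL_hat - T_load)^2 / (2*gamma*J), V_dot = -k*e^2.
    iq_ref = (J * k * e + B * omega + TL_hat) / Kt
    TL_hat += gamma * e * dt                    # adaptive law for load
    # Plant update, assuming the inner current loop tracks iq_ref ideally.
    omega += (Kt * iq_ref - B * omega - T_load) / J * dt
print(f"speed error: {omega_ref - omega:.4f}, TL_hat: {TL_hat:.3f}")
```

In the paper's scheme, the NFC would sit on top of such a backstepping law to compensate residual model mismatch; this sketch only shows the backstepping and adaptation steps.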
ISBN (digital): 9798350316339
ISBN (print): 9798350316346
Minimax problems have attracted much attention due to their wide application in constrained optimization and zero-sum games. Identifying saddle points in these problems is crucial, and saddle flow dynamics offer a straightforward yet useful approach. This study focuses on a class of bilinearly coupled minimax problems with strongly convex-linear objective functions. We design an accelerated algorithm based on saddle flow dynamics, achieving a convergence rate beyond the stereotypical limit (the strong convexity constant). The algorithm is derived from a sequential two-step transformation of a given objective function. First, a change of variables is applied to render the objective function better conditioned, introducing strong concavity (from linearity) while preserving strong convexity. Second, proximal regularization, when staggered with the first step, further enhances the strong convexity of the objective function by shifting some of the obtained strong concavity. After these transformations, saddle flow dynamics on the new objective function can be tuned for accelerated exponential convergence. Moreover, the approach extends to weakly convex-weakly concave functions and still guarantees exponential convergence to a stationary point. The theory is verified by a numerical test on an affine equality-constrained convex optimization problem.
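For reference, here is the baseline (unaccelerated) saddle flow for the equality-constrained test case the abstract mentions, assuming the standard Lagrangian $L(x,y) = f(x) + y^{\top}(Ax - b)$, which is strongly convex in $x$ and linear in $y$:

```latex
% Baseline saddle flow on the Lagrangian of  min_x f(x)  s.t.  Ax = b:
\[
\dot{x} \;=\; -\nabla_x L(x,y) \;=\; -\nabla f(x) - A^{\top} y,
\qquad
\dot{y} \;=\; +\nabla_y L(x,y) \;=\; A x - b .
\]
```

Without further modification, the exponential rate of these dynamics is capped by the strong-convexity constant of $f$; the paper's change of variables and staggered proximal regularization are what lift that cap.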
Low-rank structures have been observed in recent empirical studies of many machine learning and deep learning problems, where the loss function varies significantly only in a lower-dimensional subspace. While traditional gradient-based optimization algorithms are computationally costly in high-dimensional parameter spaces, such low-rank structures provide an opportunity to mitigate this cost. In this paper, we aim to leverage low-rank structures to alleviate the computational cost of first-order methods and study Adaptive Low-Rank Gradient Descent (AdaLRGD). The main idea of this method is to begin the optimization procedure in a very small subspace and gradually and adaptively augment it by including more directions. We show that for smooth and strongly convex objectives and any target accuracy $\epsilon$, AdaLRGD's complexity is $\mathcal{O}(r\ln(r/\epsilon))$ for some rank $r$ no larger than the dimension $d$. This significantly improves upon gradient descent's complexity of $\mathcal{O}(d\ln(1/\epsilon))$ when $r\ll d$. We also propose a practical implementation of AdaLRGD and demonstrate its ability to leverage existing low-rank structures in data.
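The abstract does not spell out the augmentation rule, so the sketch below uses a simple residual-based criterion (add a direction when the gradient component outside the current subspace is large); it illustrates subspace gradient descent in the spirit of AdaLRGD, not the paper's exact algorithm.

```python
import numpy as np

def ada_lr_gd(grad, x0, step, iters, tol=1e-3):
    """Subspace gradient descent with adaptive subspace augmentation.

    Steps are taken only within span(U); a new direction is added when
    the gradient has a large component outside the current subspace.
    (Illustrative rule -- not the paper's exact AdaLRGD criterion.)
    """
    x, d = x0.copy(), x0.size
    U = np.zeros((d, 0))                      # current subspace basis
    for _ in range(iters):
        g = grad(x)
        g_in = U @ (U.T @ g)                  # component inside span(U)
        g_out = g - g_in                      # residual outside span(U)
        if np.linalg.norm(g_out) > tol and U.shape[1] < d:
            u = g_out / np.linalg.norm(g_out)
            U = np.hstack([U, u[:, None]])    # augment with one direction
            g_in = U @ (U.T @ g)
        x -= step * g_in                      # descend within the subspace
    return x, U.shape[1]

# Toy quadratic whose loss varies only in a rank-3 subspace of R^50.
rng = np.random.default_rng(1)
P = np.linalg.qr(rng.normal(size=(50, 3)))[0]
H = P @ np.diag([3.0, 2.0, 1.0]) @ P.T       # rank-3 Hessian
grad = lambda x: H @ x
x, r = ada_lr_gd(grad, rng.normal(size=50), step=0.3, iters=200)
print(f"final grad norm: {np.linalg.norm(grad(x)):.2e}, rank used: {r}")
```

On this toy problem, every gradient lies in a rank-3 subspace, so the basis stops growing at three directions and the method converges while only ever working in that small subspace, matching the intuition behind the stated $\mathcal{O}(r\ln(r/\epsilon))$ complexity.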
As distributed learning applications such as Federated Learning, the Internet of Things (IoT), and Edge Computing grow, it is critical to address the shortcomings of such technologies from a theoretical perspective. A...