Edge Computing and Network Function Virtualization (NFV) concepts can improve network processing and multi-resource allocation when intelligent optimization algorithms are deployed. Multiservice offloading and allocation approaches pose interesting challenges in current and next-generation vehicular networks. State-of-the-art optimization approaches still formulate exact algorithms and tune approximation methods to obtain sufficiently good solutions. These data-centric approaches aim to use heterogeneous data inputs to find near-optimal solutions. In the context of connected and autonomous vehicles (CAVs), such techniques exhibit exponential computational time and handle only small- and medium-scale networks. Therefore, we are motivated to use recent Deep Reinforcement Learning (DRL) techniques to learn the behavior of exact optimization algorithms while enhancing the Quality of Service (QoS) of network operators and satisfying the requirements of next-generation Autonomous Vehicles (AVs). DRL algorithms can improve AV service offloading and optimize edge resources. An Optimal Virtual Edge Autopilot Placement (OVEAP) algorithm is proposed using Integer Linear Programming (ILP), and an autopilot placement protocol is presented to support it. Optimal allocation and Virtual Network Function (VNF) placement and chaining of the autopilot are designed around several new constraints, such as computing and networking loads, network edge infrastructure, and placement cost. Further, a DRL approach is formulated to deal with dense Internet of Autonomous Vehicles (IoAV) networks. Extensive simulations and evaluations show that the proposed allocation strategies outperform state-of-the-art solutions in terms of total edge-server utilization, total edge-server allocation time, and the number of successfully allocated autopilots.
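To make the placement model concrete, below is a minimal ILP sketch in Python with PuLP. The edge servers, autopilot instances, CPU demands, capacities, and costs are illustrative assumptions, not the paper's OVEAP formulation, which additionally covers VNF chaining and networking-load constraints.

import pulp

servers = ["e1", "e2", "e3"]                        # edge servers (assumed)
autopilots = ["a1", "a2", "a3", "a4"]               # autopilot VNF instances (assumed)
cpu_demand = {"a1": 4, "a2": 2, "a3": 3, "a4": 5}   # vCPUs per autopilot (assumed)
cpu_capacity = {"e1": 8, "e2": 6, "e3": 10}         # vCPUs per server (assumed)
cost = {(a, s): 1.0 + 0.5 * int(s[1])               # per-placement cost (assumed)
        for a in autopilots for s in servers}

prob = pulp.LpProblem("autopilot_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (autopilots, servers), cat="Binary")  # x[a][s] = 1 iff a placed on s

# Objective: minimize total placement cost.
prob += pulp.lpSum(cost[a, s] * x[a][s] for a in autopilots for s in servers)

# Each autopilot is placed on exactly one edge server.
for a in autopilots:
    prob += pulp.lpSum(x[a][s] for s in servers) == 1

# Computing-load constraint: stay within each server's CPU capacity.
for s in servers:
    prob += pulp.lpSum(cpu_demand[a] * x[a][s] for a in autopilots) <= cpu_capacity[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for a in autopilots:
    for s in servers:
        if x[a][s].value() > 0.5:
            print(f"{a} -> {s}")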
An XRF overlapping-spectra analysis method based on a cascade equilibrium optimizer and a trust-region algorithm is proposed to fit overlapping spectral peaks. Experimental results show that its performance is better than that of...
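As a rough illustration of the trust-region half of such a pipeline (the cascade equilibrium optimizer stage is omitted), the sketch below fits two overlapping Gaussian peaks with SciPy's trust-region-reflective solver; the synthetic spectrum and initial guesses are assumptions.

import numpy as np
from scipy.optimize import least_squares

def two_gaussians(p, x):
    # Sum of two Gaussian peaks parameterized by (amplitude, center, width).
    a1, mu1, s1, a2, mu2, s2 = p
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

x = np.linspace(0.0, 10.0, 400)
true_p = [3.0, 4.5, 0.6, 2.0, 5.4, 0.5]   # two strongly overlapping peaks (assumed)
rng = np.random.default_rng(1)
y = two_gaussians(true_p, x) + 0.05 * rng.standard_normal(x.size)

p0 = [2.5, 4.0, 1.0, 1.5, 6.0, 1.0]       # rough initial guess (assumed)
fit = least_squares(lambda p: two_gaussians(p, x) - y, p0, method="trf")
print("fitted parameters:", np.round(fit.x, 3))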
Computational optimization and optimization algorithms form a cornerstone of computer science that has been extensively explored due to its myriad applications in practical, real-world scenarios. At a hig...
With the increasing demand for clean energy, the electro-hydrogen coupling system has gradually become an important way to optimize the energy structure due to its potential in power peak shaving, energy storage ...
In the past few years, photovoltaic production has significantly increased worldwide and has become a necessary element for achieving global agreements to minimize carbon dioxide emissions. Therefore, a precise and re...
Real-world optimization problems often have multiple conflicting objective functions to be optimized simultaneously. In some of them, there are different Pareto optimal solutions with the same objective function valu...
Decentralized learning has recently received increasing attention in machine learning due to its advantages in implementation simplicity, system robustness, and data privacy. Meanwhile, adaptive gradient methods show superior performance in many machine learning tasks such as training neural networks. Although some works study decentralized optimization algorithms with adaptive learning rates, these adaptive decentralized algorithms still suffer from high sample complexity. To fill this gap, we propose a class of faster adaptive decentralized algorithms (i.e., AdaMDOS and AdaMDOF) for distributed nonconvex stochastic and finite-sum optimization, respectively. Moreover, we provide a solid convergence-analysis framework for our methods. In particular, we prove that our AdaMDOS obtains a near-optimal sample complexity of Õ(ϵ^{-3}) for finding an ϵ-stationary solution of nonconvex stochastic optimization. Meanwhile, our AdaMDOF obtains a near-optimal sample complexity of O(√n ϵ^{-2}) for finding an ϵ-stationary solution of nonconvex finite-sum optimization, where n denotes the sample size. To the best of our knowledge, our AdaMDOF algorithm is the first adaptive decentralized algorithm for nonconvex finite-sum optimization. Experimental results demonstrate the efficiency of our algorithms. Copyright 2024 by the author(s)
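To make the setting concrete, the toy sketch below runs a generic decentralized adaptive-gradient loop: gossip averaging plus an RMSProp-style scaled step on per-node least-squares problems. It is not the paper's AdaMDOS/AdaMDOF update; the mixing matrix, step sizes, and local losses are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 5
W = np.full((n_nodes, n_nodes), 1.0 / n_nodes)   # doubly stochastic gossip matrix (assumed)
A = [rng.standard_normal((8, dim)) for _ in range(n_nodes)]   # local data (assumed)
b = [rng.standard_normal(8) for _ in range(n_nodes)]

theta = np.zeros((n_nodes, dim))   # one parameter vector per node
v = np.zeros((n_nodes, dim))       # per-node second-moment estimates
lr, beta2, eps = 0.05, 0.99, 1e-8

for t in range(200):
    # Local least-squares gradients at each node.
    grads = np.stack([A[i].T @ (A[i] @ theta[i] - b[i]) / len(b[i])
                      for i in range(n_nodes)])
    v = beta2 * v + (1 - beta2) * grads ** 2              # adaptive scaling state
    theta = W @ theta - lr * grads / (np.sqrt(v) + eps)   # gossip step + adaptive step

avg_loss = np.mean([0.5 * np.mean((A[i] @ theta[i] - b[i]) ** 2)
                    for i in range(n_nodes)])
print("average local loss:", avg_loss,
      "| consensus gap:", np.linalg.norm(theta - theta.mean(axis=0)))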
The decomposition-based multi-objective evolutionary algorithm (MOEA/D) is an efficient framework for solving multi-objective optimization problems. It decomposes a multi-objective optimization problem into a set of s...
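For context, the sketch below shows the decomposition idea MOEA/D builds on: scalarizing a toy bi-objective problem with the Tchebycheff approach under a handful of weight vectors. The problem, weights, and reference point are assumptions.

import numpy as np

def objectives(x):
    # Toy bi-objective problem (assumed): f1 = x^2, f2 = (x - 2)^2.
    return np.array([x ** 2, (x - 2.0) ** 2])

z_star = np.array([0.0, 0.0])   # ideal (reference) point (assumed)
weight_vectors = [np.array([w, 1.0 - w]) for w in np.linspace(0.05, 0.95, 5)]

xs = np.linspace(-1.0, 3.0, 401)
for lam in weight_vectors:
    # Tchebycheff scalarization: g(x | lam) = max_i lam_i * |f_i(x) - z*_i|.
    g = np.array([np.max(lam * np.abs(objectives(x) - z_star)) for x in xs])
    print(f"lambda = {lam.round(2)} -> best x ~ {xs[np.argmin(g)]:.2f}")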
Recently, there has been a surge of interest in developing optimization algorithms for overparameterized models, as achieving generalization is believed to require algorithms with suitable biases. This interest centers on minimizing the sharpness of the original loss function; the Sharpness-Aware Minimization (SAM) algorithm has proven effective. However, most of the literature considers only a few sharpness measures, such as the maximum eigenvalue or trace of the training-loss Hessian, which may not yield meaningful insights for non-convex optimization scenarios such as neural networks. Additionally, many sharpness measures are sensitive to parameter invariances in neural networks, magnifying significantly under parameter rescaling. Motivated by these challenges, we introduce a new class of sharpness measures in this paper, leading to new sharpness-aware objective functions. We prove that these measures are universally expressive, allowing any function of the training-loss Hessian matrix to be represented by appropriate hyperparameters. Furthermore, we show that the proposed objective functions explicitly bias towards minimizing their corresponding sharpness measures, and how they allow meaningful applications to models with parameter invariances (such as scale invariance). Finally, as instances of our proposed general framework, we present Frob-SAM and Det-SAM, which are specifically designed to minimize the Frobenius norm and the determinant of the Hessian of the training loss, respectively. We also demonstrate the advantages of our general framework through extensive experiments. Copyright 2024 by the author(s)
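For reference, the sketch below implements the plain SAM update that this line of work generalizes: ascend by rho along the normalized gradient, then descend with the perturbed gradient. It is not Frob-SAM or Det-SAM; the toy loss, numerical gradients, and hyperparameters are assumptions.

import numpy as np

def loss(w):
    # Toy nonconvex loss (assumed).
    return np.sin(w[0]) * np.cos(w[1]) + 0.1 * w @ w

def grad(w, h=1e-5):
    # Central-difference gradient, adequate for this 2-D sketch.
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = h
        g[i] = (loss(w + e) - loss(w - e)) / (2.0 * h)
    return g

w, rho, lr = np.array([2.0, -1.0]), 0.05, 0.1   # hyperparameters (assumed)
for _ in range(100):
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascend to a nearby "sharp" point
    w = w - lr * grad(w + eps)                   # descend using the perturbed gradient

print("final loss:", loss(w))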
In the field of continuous single-objective black-box optimization, understanding the varying performances of algorithms across different problem instances is crucial. A recent approach based on the concept of "a...