Vehicular Edge Computing (VEC) systems exploit resources on both vehicles and Roadside Units (RSUs) to serve real-time vehicular applications that cannot be completed on the vehicles alone. Two types of decisions are critical for VEC: task offloading, which migrates vehicular tasks to suitable RSUs, and resource allocation, which assigns each migrated task the optimal amount of computational resource at the RSUs under constraints on response time and energy consumption. Most published optimization-based methods determine both types of decisions jointly within one optimization problem solved at the RSUs, but solving that problem is prohibitively complex because it is nonconvex and contains discrete variables. Moreover, such centralized solutions require extra information exchange between vehicles and RSUs, which introduces additional communication delay and security issues. The contribution of this paper is to decompose the joint optimization problem into two decoupled subproblems, task offloading and resource allocation, and to reformulate both for efficient solution. The resource allocation problem is simplified by dual decomposition and can be solved at the vehicles in a decentralized way. The task offloading problem is transformed from a discrete problem into a continuous convex one by a probability-based reformulation. The new method efficiently achieves a near-optimal solution through decentralized optimization, and the error bound between this solution and the true optimum is analyzed. Simulation results demonstrate the advantage of the proposed approach.
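The probability-based reformulation can be illustrated with a standard building block: each vehicle relaxes its binary offloading decision into a probability vector over RSUs and runs projected gradient steps on it. A minimal sketch in Python (the per-RSU costs, step size, and the use of a Euclidean simplex projection are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1} (the standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

# One projected-gradient step on a vehicle's offloading probabilities:
costs = np.array([2.0, 1.0, 3.0])   # expected delay/energy per RSU (toy numbers)
p = np.ones(3) / 3                  # start from uniform offloading probabilities
step = 0.5
p = project_to_simplex(p - step * costs)
```

After one step the probability mass shifts toward the cheapest RSU while the vector stays on the simplex, which is what makes the relaxed problem continuous and convex in each vehicle's variables.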
This article considers the problem of finding the optimal solution of the sum of locally convex cost functions over an undirected network, where each local convex cost function is accessible only to its own unit. To reduce the amount of computation and reach the desired result faster, we put forward a fresh accelerated decentralized event-triggered algorithm, named A-DETA, for this optimization problem. A-DETA combines gradient tracking with two momentum acceleration terms, adopts nonuniform step sizes, and lets each unit interact with its neighboring units independently only at sampling times triggered by events. Under the assumption that the cost functions are smooth and strongly convex, it is proved that A-DETA attains the exact optimal solution at a linear rate provided the positive step sizes and momentum coefficients are sufficiently small, and an explicit linear convergence rate is given. Finally, an extensive simulation example validates the usability of A-DETA.
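The core iteration, gradient tracking plus a heavy-ball momentum term, can be sketched on a toy quadratic problem. The network, step size, and momentum coefficient below are illustrative assumptions, and the event-triggered sampling is omitted for brevity:

```python
import numpy as np

# Three agents on a line graph minimize sum_i f_i(x), f_i(x) = 0.5*(x - a_i)^2,
# whose global minimizer is mean(a) = 3.
a = np.array([1.0, 2.0, 6.0])
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])      # doubly stochastic mixing matrix
grad = lambda x: x - a               # stacked local gradients

alpha, beta = 0.1, 0.1               # step size and momentum (assumed small)
x = np.zeros(3); x_prev = x.copy()
y = grad(x); g_prev = y.copy()       # gradient-tracking variable, y0 = grad(x0)
for _ in range(1000):
    x_new = W @ x - alpha * y + beta * (x - x_prev)
    g = grad(x_new)
    y = W @ y + g - g_prev           # y tracks the network-average gradient
    x_prev, x, g_prev = x, x_new, g
```

The tracking update preserves the invariant that the entries of `y` sum to the total gradient, so at any consensus fixed point the total gradient vanishes and all agents sit at the global minimizer.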
In this article, we study discrete-time decentralized optimization problems of multiagent systems under an event-triggering interaction scheme, in which each agent privately knows its local convex cost function and all agents collectively minimize the sum of these functions. The underlying interaction graph and the corresponding weight matrix are required to be undirected connected and doubly stochastic, respectively. To solve this optimization problem collaboratively, we propose a decentralized event-triggering algorithm (DETA) based on consensus theory and an inexact gradient tracking technique. In DETA, each agent interacts with its neighboring agents only at some independent event-triggered sampling instants. Under the assumptions that the global convex cost function is coercive and has a Lipschitz continuous gradient, we prove that DETA steers all agents' states to an optimal solution even with nonuniform constant step sizes. Moreover, our analysis shows that DETA converges at a rate of O(1/√t) if the step sizes are uniform and do not exceed certain upper bounds. We illustrate the effectiveness of DETA on a canonical decentralized parameter estimation problem.
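The event-triggering idea can be isolated in a plain consensus sketch: each agent rebroadcasts its state only when it has drifted from its last broadcast value by more than a decaying threshold, and neighbors always mix the broadcast copies. The graph, weights, and threshold schedule below are illustrative assumptions, not the paper's:

```python
import numpy as np

W = np.array([[0.6, 0.4, 0.0],
              [0.4, 0.2, 0.4],
              [0.0, 0.4, 0.6]])      # doubly stochastic mixing matrix
x = np.array([0.0, 4.0, 8.0])        # initial states; the average is 4
x_hat = x.copy()                     # last broadcast values
broadcasts = 0
for t in range(100):
    threshold = 0.5 * 0.9 ** t       # decaying trigger threshold (assumed)
    for i in range(3):
        if abs(x[i] - x_hat[i]) > threshold:
            x_hat[i] = x[i]          # event: agent i broadcasts its state
            broadcasts += 1
    x = x + W @ x_hat - x_hat        # mix using broadcast copies only
```

Because the update uses only `x_hat`, communication happens solely at trigger instants, yet the doubly stochastic mixing still preserves the average and drives all states toward it.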
Decentralized stochastic gradient methods play significant roles in large-scale optimization, which finds many practical applications in signal processing, machine learning, and coordinated control. In this short communication, we study large-scale convex optimization problems over multi-agent systems, where the mutual goal of the agents in the network is to optimize a global objective function expressed as a sum of local objective functions. Each agent uses only local computation and communication in the overall process, without leaking private information. We first introduce both the local stochastic averaging gradient method and the local stochastic variance-reduced gradient method into decentralized convex optimization (DCO) over unbalanced directed networks. Then, to increase the autonomy and flexibility of the agents, uncoordinated step sizes are considered. Based on these well-known variance-reduced stochastic gradient methods and uncoordinated step sizes, we propose two fully decentralized algorithms, SAGA-UDN and SVRG-UDN, for DCO over unbalanced directed networks. Finally, both algorithms are studied numerically under various network parameters and objective functions, and some real-world data sets are used in simulations to demonstrate their improved performance. (C) 2020 Elsevier B.V. All rights reserved.
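The variance-reduction building block (here SVRG in its standard single-machine form, not the paper's directed-network variants) can be sketched on a toy least-squares problem; all problem data and step sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Least-squares toy problem: f(x) = (1/2n) * sum_i (a_i*x - b_i)^2
a = rng.normal(size=50)
b = 3.0 * a + 0.1 * rng.normal(size=50)
x_star = (a @ b) / (a @ a)            # closed-form minimizer for reference

def full_grad(x):
    return np.mean(a * (a * x - b))

x = 0.0
for epoch in range(20):
    x_snap, g_snap = x, full_grad(x)  # snapshot for variance reduction
    for _ in range(50):
        i = rng.integers(50)
        gi = a[i] * (a[i] * x - b[i])
        gi_snap = a[i] * (a[i] * x_snap - b[i])
        x -= 0.05 * (gi - gi_snap + g_snap)   # SVRG gradient estimator
```

The estimator `gi - gi_snap + g_snap` is unbiased, and its variance shrinks as the iterate approaches the snapshot, which is what allows constant step sizes and linear convergence on strongly convex problems.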
Authors: Mou, Yuting; Xing, Hao; Lin, Zhiyun; Fu, Minyue
Affiliations:
Zhejiang Univ, State Key Lab Ind Control Technol, Hangzhou 310027, Zhejiang, Peoples R China
Zhejiang Univ, Coll Elect Engn, Hangzhou 310027, Zhejiang, Peoples R China
Zhejiang Univ, State Key Lab Ind Control Technol, Hangzhou 310027, Zhejiang, Peoples R China
Univ Newcastle, Sch Elect Engn & Comp Sci, Callaghan NSW 2308, Australia
Zhejiang Univ, State Key Lab Ind Control Technol, Hangzhou 310027, Zhejiang, Peoples R China
Zhejiang Univ, Dept Control Sci & Engn, Hangzhou 310027, Zhejiang, Peoples R China
Plug-in hybrid electric vehicles (PHEVs) are expected to become widespread in the near future. However, high penetration of PHEVs can overload the distribution system. In a smart grid, the charging of PHEVs can be controlled to reduce the peak load, a practice known as demand-side management (DSM). In this paper, we focus on DSM for PHEV charging at low-voltage transformers (LVTs). The objective is to flatten the load curve of the LVTs while satisfying each consumer's requirement that their PHEV be charged to the required level by the specified time. We first formulate this problem as a convex optimization problem and then propose a decentralized water-filling-based algorithm to solve it. A moving-horizon approach handles the random arrival of PHEVs and the inaccuracy of the forecast non-PHEV load. We focus on decentralized solutions so that the computational load can be shared by individual PHEV chargers and the algorithm remains scalable. Numerical simulations demonstrate the effectiveness of our algorithm.
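The water-filling principle behind such schemes can be sketched directly: choose a common "water level" so that charging fills the valleys of the base load first. The load numbers, the single-transformer setting, and the bisection search below are illustrative assumptions:

```python
import numpy as np

def water_fill(base_load, energy):
    """Charging power c_t = max(0, level - base_load_t), with the water
    level chosen so that sum(c_t) == energy: the classic water-filling
    solution that flattens the total load curve."""
    lo, hi = base_load.min(), base_load.max() + energy
    for _ in range(100):                 # bisection on the water level
        level = 0.5 * (lo + hi)
        if np.maximum(level - base_load, 0.0).sum() > energy:
            hi = level
        else:
            lo = level
    return np.maximum(level - base_load, 0.0)

base = np.array([2.0, 1.0, 3.0, 1.0])    # forecast non-PHEV load (toy numbers)
c = water_fill(base, 4.0)                # 4 units of charging energy to place
```

The peak slot receives no charging, the off-peak slots are filled to a common level, and the total charged energy matches the requirement, which is exactly the flattening behavior the abstract describes.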
ISBN (print): 9780982657119
Many applications in multiagent learning are essentially convex optimization problems in which agents have only limited communication and partial information about the function being minimized (examples include, among others, coordinated source localization, distributed adaptive filtering, control, and coordination). Given this observation, we propose a new non-hierarchical decentralized algorithm for the asymptotic minimization of possibly time-varying convex functions. In our method each agent knows a time-varying local cost function, and the objective is to minimize asymptotically a global cost function defined as the sum of the local functions. At each iteration of our algorithm, agents improve their estimates of a minimizer of the global function by applying a particular version of the adaptive projected subgradient method to their local functions; the agents then exchange and mix their improved estimates using a probabilistic model based on recent results in weighted average consensus algorithms. The resulting algorithm is provably optimal and reproduces as particular cases many existing algorithms (such as consensus algorithms and recent methods based on the adaptive projected subgradient method). To illustrate one possible application, we show how our algorithm can be applied to coordinated acoustic source localization in sensor networks.
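A stripped-down instance of this step-then-mix pattern (with the adaptive projection machinery omitted) is the decentralized subgradient method on a nonsmooth sum; the graph, local costs, and diminishing step schedule below are illustrative assumptions:

```python
import numpy as np

# Toy decentralized minimization of sum_i |x - a_i|; the local costs are
# convex but nonsmooth, and the minimizer of the sum is the median of a.
a = np.array([0.0, 1.0, 10.0])
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])      # doubly stochastic mixing weights
x = np.zeros(3)
for t in range(1, 2001):
    g = np.sign(x - a)               # local subgradients of |x - a_i|
    x = W @ (x - g / t)              # local subgradient step, then mixing
```

Each agent only ever evaluates its own subgradient and averages with its neighbors, yet all states converge to the median of the full data set, a minimizer of the global sum rather than of any single local cost.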