This article develops a distributed sign gradient-free algorithm for simultaneous source localization and formation control of a multirobot system. A distinguishing feature of the algorithm is that it accounts for the robots' measurement noise as well as ternary communication, which significantly reduces the communication cost of the overall network. Because of the presence of noise, the algorithm is designed to be gradient-free. In addition, an inherent connection is established between the selection of control parameters and the convergence of the sign gradient-free algorithm, as well as the signal strength, noise intensity, formation radius, and number of informed robots.
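The ternary communication mentioned above can be illustrated with a minimal sketch (the threshold and coding below are illustrative assumptions, not the paper's actual scheme): each robot quantizes every transmitted coordinate to {-1, 0, +1}, so a message costs at most log2(3) bits per entry.

```python
import numpy as np

def ternary(x, threshold=0.5):
    """Quantize each coordinate to {-1, 0, +1} before transmission."""
    q = np.zeros_like(x)
    q[x > threshold] = 1.0
    q[x < -threshold] = -1.0
    return q
```

Only the sign pattern of a robot's state (relative to a dead zone) is exchanged, which is what makes the per-link communication cost independent of the magnitude of the states.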
Spiral plate heat exchangers (SPHX) play a prominent role in process industries because the rotational motion of the fluid in the channel eliminates fouling. In the present study, thermo-economi...
In this paper, we consider distributed bandit convex optimization of time-varying objective functions over a network. By introducing perturbations into the objective functions, we design a deterministic difference and a randomized difference to replace the gradient information of the objective functions, and propose two classes of gradient-free distributed algorithms. We prove that both classes of algorithms achieve regrets of O(T^{3/4}) for convex objective functions and O(T^{2/3}) for strongly convex objective functions with respect to the time horizon T, and that consensus of the estimates is established as well. Simulation examples are given to justify the theoretical results.
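A randomized difference of the kind described can be sketched as a two-point estimator (uniform-sphere sampling is assumed here; the paper's exact construction may differ):

```python
import numpy as np

def randomized_difference(f, x, delta=1e-2, rng=None):
    """Two-point randomized-difference surrogate for the gradient:
    query f at x + delta*u and x - delta*u for a random unit direction u."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    # scale by the dimension so the estimator is unbiased for linear f
    return x.size * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
```

Each agent needs only two function evaluations per round, which is the bandit-feedback setting the abstract refers to; the estimate is noisy but unbiased up to O(delta^2) smoothing error.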
This article investigates the problem of distributed optimization over multiagent networks with the global objective being the sum of a set of possibly nonconvex functions. Based on recent developments in distributed average tracking as well as distributed extremum seeking, a distributed bounded gradient-free optimization algorithm is proposed. It is shown that the proposed scheme is able to solve nonconvex optimization problems with arbitrary prescribed accuracy. The relationship between the optimization error and the control parameters is established, with the error bound's explicit dependence on the bounds of the agents' control inputs, which clearly demonstrates a tradeoff between the optimization error and the input bound. An illustrative example is included to validate the effectiveness of the proposed scheme.
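The extremum-seeking ingredient can be illustrated in its simplest single-agent, scalar form (the dither amplitude, gain, and frequency below are arbitrary illustrative choices, not the paper's design): a sinusoidal probe is added to the input, and demodulating the measured cost recovers a gradient-like signal without ever computing a gradient.

```python
import numpy as np

def extremum_seeking(f, x0, a=0.2, k=0.5, omega=5.0, dt=0.01, steps=20000):
    """Discrete-time single-agent extremum seeking: probe the cost with a
    sinusoidal dither, demodulate, and descend the averaged gradient estimate."""
    x = float(x0)
    for n in range(steps):
        t = n * dt
        y = f(x + a * np.sin(omega * t))      # perturbed measurement of the cost
        x -= dt * k * np.sin(omega * t) * y   # demodulated gradient-like step
    return x
```

On average the update behaves like x_dot = -(k*a/2) f'(x), which is why the scheme is gradient-free yet still descends the cost.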
In this paper, we consider the distributed online optimization problem on a time-varying network, where each agent has its own time-varying objective function and the goal is to minimize the accumulated overall loss. Moreover, we focus on distributed algorithms that use neither gradient information nor projection operators, so as to improve applicability and computational efficiency. By introducing deterministic differences and randomized differences to substitute for the gradient information of the objective functions, and by removing the projection operator of traditional algorithms, we design two kinds of gradient-free distributed online optimization algorithms without a projection step, which save considerable computational resources and place fewer limitations on applicability. We prove that both algorithms achieve consensus of the estimates and regrets of O(log(T)) for locally strongly convex objectives. Finally, a simulation example is provided to verify the theoretical results.
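The deterministic difference can be taken to be an ordinary coordinate-wise central difference (a common choice; the paper's construction may use one-sided or time-varying spacings):

```python
import numpy as np

def deterministic_difference(f, x, delta=1e-4):
    """Coordinate-wise central differences as a deterministic substitute
    for the gradient: one sweep costs 2n function queries in dimension n."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = delta
        g[i] = (f(x + e) - f(x - e)) / (2 * delta)
    return g
```

Compared with the randomized difference, this sweep is deterministic and has no sampling variance, at the price of 2n queries per round instead of two.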
In this work, we present a cluster-based learning and evolution optimizer (CLEO) for solving optimization problems. CLEO is a metaheuristic algorithm that uses cluster-based manipulation of the problem space during the exploration phase, followed by fine-tuning of solutions in the exploitation phase using updated knowledge of the problem space. We propose two approaches based on this new algorithm: one using only Latin hypercube sampling (LHS) and the other using LHS in combination with reservoir engineering insights. In addition to ensuring realistic simulation scenarios, we employed intuitive engineering insights to reveal how empirical knowledge enhances efficiency. We also propose simulating part of the field's life instead of its complete lifespan in the second approach. Technical results obtained at the end of this period are processed and used to find the optimized field development plan (FDP). We conducted both deterministic and probabilistic studies to assess the performance of the proposed approaches for various decision variables, both numerous and restricted. We validated the algorithm by optimizing the FDP for a simple numerical simulation model and a giant field-scale model, and compared our approaches to four well-established optimizers (particle swarm optimization (PSO), differential evolution (DE), designed exploration controlled evolution (DECE), and the iterative discrete Latin hypercube sampling method (IDLHC)) in terms of simulation time and objective function results. Overall, the comparison demonstrates the advantages of the newly proposed algorithm. The results indicate that our first approach performs as well as any well-established optimizer, notably when working with large-scale optimization problems. The second approach has slightly lower objective function results than the first one, but it is the most efficient among the compared algorithms, as the best FDP can be obtained by covering as little as 40% of the field's life. This attribute makes it an e...
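Latin hypercube sampling itself is easy to sketch (a plain NumPy version; production studies typically rely on a library implementation such as scipy.stats.qmc.LatinHypercube):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube sample in [0, 1]^d: each dimension is split into
    n_samples equal strata, and each stratum is hit exactly once."""
    rng = np.random.default_rng() if rng is None else rng
    samples = np.empty((n_samples, n_dims))
    for d in range(n_dims):
        perm = rng.permutation(n_samples)                    # one stratum index per sample
        samples[:, d] = (perm + rng.random(n_samples)) / n_samples
    return samples
```

The stratification guarantees marginal coverage of every decision variable with far fewer simulator runs than plain Monte Carlo, which is why LHS is a natural backbone for expensive FDP studies.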
Continuous control has attracted enormous attention due to its essential role in real-world applications. However, it is considerably difficult to address through explicit modeling in practice. Model-free policy gradient (PG) methods in reinforcement learning (RL) are promising approaches, but they suffer from slow convergence and complex computation owing to the high variance of gradient estimation and sophisticated backpropagation. Therefore, in this paper, a gradient-free policy gradient algorithm with PSO-based parameter exploration (PG-PSOPE) is proposed for continuous control tasks. To reduce variance and improve the convergence rate, PSO is combined with PG to provide a novel way of training the policy network in RL. Experimental results on simulated physical control tasks verify the effectiveness of the proposed algorithm. Moreover, PG-PSOPE is superior in both convergence speed and final performance to a typical on-policy PG method and an off-policy deep RL method. Furthermore, PG-PSOPE exhibits simplicity and high effectiveness in a comparison of training times across tasks, with its running time reduced by up to 58 times compared with gradient-based methods in the best case.
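The PSO-based parameter exploration can be sketched as a standard particle swarm loop over flattened policy weights (the inertia and acceleration constants below are conventional defaults, not the paper's tuned values, and the quadratic fitness stands in for an episode return):

```python
import numpy as np

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Minimal particle swarm optimizer over flat parameter vectors,
    e.g. the weights of a small policy network (maximizes fitness)."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.uniform(-1, 1, (n_particles, dim))            # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmax(pbest_f)].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmax(pbest_f)].copy()
    return g, pbest_f.max()
```

Because only episode returns are needed to rank particles, no backpropagation through the policy is required, which is the source of the training-time savings the abstract reports.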
ISBN:
(Print) 9789881563972
This paper studies an online optimization problem, where the cost function at every time stage is the sum of a group of local cost functions, each assigned to a single agent/node in a multi-agent network. We propose a distributed algorithm by combining the gradient descent method with a consensus design. We then prove that the regret of our online optimization algorithm, as well as the accumulated disagreement across the multi-agent network, is sublinear under a properly chosen stepsize.
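The combination of consensus and gradient descent can be sketched on a toy 3-agent network with scalar quadratic losses (the mixing matrix, local losses, and stepsize schedule below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Hypothetical 3-agent network with a doubly stochastic mixing matrix.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

def dgd_step(X, targets, eta):
    """Consensus mixing followed by a local gradient step
    for local losses f_i(x) = (x - t_i)^2."""
    grads = 2.0 * (X - targets)
    return W @ X - eta * grads

X = np.array([0.0, 5.0, 10.0])        # agents start at different estimates
targets = np.array([1.0, 2.0, 3.0])   # minimizer of the sum is mean(targets) = 2
for t in range(1, 201):
    X = dgd_step(X, targets, eta=0.5 / np.sqrt(t))  # diminishing stepsize
```

With a diminishing stepsize the agents' estimates cluster together (small disagreement) near the minimizer of the summed loss, which is the consensus-plus-sublinear-regret behavior the paper proves.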
Purpose: Parametric images obtained from kinetic modeling of dynamic positron emission tomography (PET) data provide a new way of visualizing quantitative parameters of the tracer kinetics. However, due to the high noise level in pixel-wise image-driven time-activity curves, parametric images often suffer from poor quality and accuracy. In this study, we propose an indirect parameter estimation framework that aims to improve the quality and quantitative accuracy of parametric images. Methods: Three different approaches related to noise reduction and advanced curve fitting are used in the proposed framework. First, dynamic PET images are denoised using a kernel-based denoising method and the highly constrained backprojection technique. Second, gradient-free curve fitting algorithms are exploited to improve the accuracy and precision of parameter estimates. Third, a kernel-based post-filtering method is applied to the parametric images to further improve their quality. Computer simulations were performed to evaluate the performance of the proposed framework. Results and conclusions: The simulation results showed that, compared to Gaussian filtering, the proposed denoising method provides better PET image quality and consequently improves the quality and quantitative accuracy of parametric images. In addition, gradient-free optimization algorithms (i.e., pattern search) can yield better parametric images than the gradient-based curve fitting algorithm (i.e., trust-region-reflective). Finally, our results showed that the proposed kernel-based post-filtering method could further improve the precision of parameter estimates while maintaining their accuracy.
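Pattern search as a gradient-free curve fitter can be sketched as a generic compass search fitting a hypothetical mono-exponential time-activity model (the study's actual kinetic model and search options will differ):

```python
import numpy as np

def pattern_search(loss, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Gradient-free compass (pattern) search: poll +/- step along each
    coordinate; if no poll point improves the loss, halve the step."""
    x = np.asarray(x0, dtype=float).copy()
    fx = loss(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for s in (step, -step):
                trial = x.copy()
                trial[i] += s
                ft = loss(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

# Fit a hypothetical time-activity model A * exp(-k * t) by least squares.
t = np.linspace(0.0, 10.0, 50)
y = 2.0 * np.exp(-0.5 * t)                                  # noiseless synthetic curve
loss = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - y) ** 2)
params, final_loss = pattern_search(loss, [1.0, 1.0])
```

Because the search only compares loss values, it tolerates the noisy, non-smooth objectives that arise from pixel-wise curves, where a trust-region-reflective fitter needs reliable gradients.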