This article regards autonomous aerial vehicle-assisted aerial edge computing as a dynamic multiobjective optimization problem. To continuously track the moving Pareto set, a new Holt-based prediction correction dynamic multiobjective evolutionary algorithm (HDMOEA) is proposed. It includes three main strategies. First, the Wilcoxon signed-rank test is employed to accurately detect environmental change, and a new environment perception operator further measures the intensity of the change. Second, a Holt-based prediction correction mechanism is constructed to predict the positions of individuals in the next time window; the predicted positions are corrected according to a reference point to enhance prediction accuracy and accelerate the search speed of the algorithm. Lastly, a new bi-mutation method is proposed to maintain population diversity according to the intensity of environmental change, thereby reducing the likelihood of the population falling into local optima. The proposed algorithm is compared with six state-of-the-art prediction-based dynamic multiobjective algorithms on multiple benchmark test sets. The experimental results show that HDMOEA tracks the moving Pareto front faster and obtains a more accurate Pareto front set than the comparison algorithms.
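The prediction step of the abstract is named after Holt's linear (double exponential) smoothing. A minimal sketch of that forecasting idea follows; the smoothing constants alpha and beta and the one-dimensional series are illustrative assumptions, not parameters taken from the paper, whose predictor operates on individual positions in decision space.

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """h-step-ahead Holt (linear-trend) forecast for a 1-D series."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)   # smooth the level
        trend = beta * (level - prev_level) + (1 - beta) * trend  # smooth the trend
    return level + horizon * trend

# A perfectly linear history is extrapolated exactly.
print(holt_forecast([1.0, 2.0, 3.0, 4.0, 5.0], horizon=1))  # → 6.0
```

In the paper's setting, one such forecast would be made per coordinate of each individual, giving the predicted position in the next time window before the reference-point correction.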
Determination of decision variables such as the inspection period, number of measurements, and sample size is crucial for planning an efficient degradation test. For widely used stochastic processes, the necessary and sufficient conditions for the explicit expression of the optimal decision variables can be derived by minimizing the approximate variance of an estimator of interest under a limited budget. A notion of the importance of a decision variable is proposed to study the rate at which the objective function improves with that variable. The necessary and sufficient conditions for determining the importance of the optimal decision variables are theoretically investigated to elucidate the effect of the experimental costs and model parameters. Furthermore, the relative rankings of the importance of the optimal decision variables are illustrated through numerical examples.
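The design question in the abstract can be caricatured as a budgeted search over sample size n and measurements per unit m. The cost model and the 1/(n·m) variance proxy below are illustrative assumptions only, not the paper's stochastic-process formulas, which trade off n and m quite differently.

```python
def best_design(budget, unit_cost, meas_cost):
    """Toy grid search: minimize a 1/(n*m) variance proxy under a budget."""
    best = None
    for n in range(1, budget // unit_cost + 1):
        for m in range(1, budget + 1):
            cost = n * (unit_cost + m * meas_cost)   # n units, m measurements each
            if cost > budget:
                break
            var = 1.0 / (n * m)                      # crude variance proxy
            if best is None or var < best[0]:
                best = (var, n, m, cost)
    return best

print(best_design(budget=100, unit_cost=5, meas_cost=1))
```

Under this proxy all budget goes to measurements on one unit; realistic degradation models penalize small n, which is exactly why the paper's importance analysis of each decision variable matters.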
Authors: Hou, Ruijie; Yu, Yang; Li, Xiuxian
Tongji Univ, Coll Elect & Informat Engn, Dept Control Sci & Engn, Shanghai 201800, Peoples R China
Tongji Univ, Shanghai Res Inst Intelligent Autonomous Syst, Frontiers Sci Ctr Intelligent Autonomous Syst, Natl Key Lab Autonomous Intelligent Unmanned Syst, Shanghai 201210, Peoples R China
Tongji Univ, Shanghai Inst Intelligent Sci & Technol, Shanghai 201210, Peoples R China
This article focuses on online composite optimization over multiagent networks. In the distributed setting, each agent has its own local loss function, which consists of a convex, strongly convex, or strongly convex and smooth function plus a time-varying nonsmooth regularizer. Two distributed online algorithms are proposed and their dynamic regrets are analyzed. Both algorithms are based on signs of relative states. The first algorithm achieves an O(sqrt(T(C_T + 1))) dynamic regret bound when each local loss is a general convex composite function, where C_T is the path variation. If C_T can be estimated in advance, then for convex and strongly convex local losses with a time-varying nonsmooth regularizer the dynamic regret bounds are, respectively, O(sqrt(T(C_T + 1))) and O(log T (1 + C_T)). The second algorithm builds on the first, specifically to handle local losses composed of a strongly convex and smooth function with a nonsmooth regularizer, and achieves an O(1 + C_T) dynamic regret bound. Finally, numerical results are given to support the theoretical findings.
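The two quantities the bounds refer to, dynamic regret and path variation C_T, can be illustrated on a toy single-agent stream. The quadratic losses f_t(x) = (x - theta_t)^2, the drifting minimizers theta_t, and the plain online-gradient learner are all assumptions made for illustration; the paper's algorithms are distributed and use only signs of relative states.

```python
thetas = [0.0, 0.1, 0.3, 0.35, 0.5]       # per-round minimizers (the comparators)

def loss(x, theta):
    return (x - theta) ** 2

# plain online gradient descent with step size 0.5
x, xs = 0.0, []
for theta in thetas:
    xs.append(x)                          # play x_t, then observe f_t
    x -= 0.5 * 2 * (x - theta)            # gradient of (x - theta)^2

# dynamic regret: played losses minus the per-round optimal losses (zero here)
dynamic_regret = sum(loss(xi, t) - loss(t, t) for xi, t in zip(xs, thetas))
# path variation C_T: total movement of the comparator sequence
path_variation = sum(abs(a - b) for a, b in zip(thetas[1:], thetas[:-1]))
print(round(dynamic_regret, 4), round(path_variation, 4))  # → 0.075 0.5
```

A slowly drifting theta_t (small C_T) makes dynamic regret small, which is the intuition behind bounds that scale with 1 + C_T.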
The development of radar sensors toward intelligence and adaptivity is a major future trend, and adaptive waveform design is an important means of improving sensor performance. This article investigates the signal waveform design problem for carrier-free ultra-wideband (UWB) radar sensors. The problem requires designing waveforms to detect and estimate extended targets in the presence of interference and clutter while imposing practical constraints on the designed signals, such as constant modulus or low peak-to-average power ratio (PAR). Since the resulting problem is high-dimensional and nonconvex (NP-hard), finding the globally optimal solution with polynomial-time algorithms is extremely challenging. We note that deep convolutional neural networks (CNNs) are fundamentally nonlinear systems, making them well suited to this problem. To this end, we propose a deep learning-based waveform design approach. The method builds on improved deep residual networks, uses the construction of the waveform optimization problem to form the loss function, and combines the Adam algorithm with this loss function to drive the training of the residual networks. By leveraging the capability of deep convolutional networks to solve nonconvex problems, we achieve fast adaptive design of UWB sensor waveforms. Numerical examples are provided to validate the effectiveness of the approach.
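The PAR constraint mentioned in the abstract has a simple discrete form: the ratio of the peak sample power to the average sample power. A sketch for a sampled waveform follows; the example sequences are illustrative and unrelated to the paper's designs.

```python
def par(waveform):
    """Peak-to-average power ratio of a real or complex sample sequence."""
    powers = [abs(s) ** 2 for s in waveform]
    return max(powers) / (sum(powers) / len(powers))

# A constant-modulus waveform attains the minimum possible PAR of 1.
print(par([1, -1, 1, 1]))    # → 1.0
# Concentrating all energy in one sample gives the maximum PAR of N.
print(par([2, 0, 0, 0]))     # → 4.0
```

Constant modulus is thus the extreme case of the low-PAR constraint, which is why the two appear together as alternative practical constraints on the designed signal.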
Over the years, a number of methods have been proposed to forecast the unknown inner-cell values of a set of related R×C contingency tables when only their margins are known. This is a classical problem that emerges in many areas, from economics to quantitative history, and it is particularly ubiquitous when dealing with electoral data in sociology and political science. However, the two current major algorithms for this problem, based on Bayesian statistics and iterative linear programming, depend on adjustable (hyper)parameters and do not yield a unique solution: their estimates tend to fluctuate (when convergence is reached) around a stationary distribution. Within the linear programming framework, this paper proposes a new algorithm (lclphom) that always converges to a unique solution and has no adjustable parameters. This characteristic makes it easy to use and robust to claims of hacking. Furthermore, after assessing lclphom with real and simulated data, we find that it yields estimates of (almost) similar accuracy to the current major solutions, and it becomes preferable to the other lphom-family algorithms the more heterogeneous the row-fraction distributions of the tables are. Interested practitioners can easily use this new algorithm, as it has been programmed in the R package lphom.
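To make the margin-constrained setting concrete, the classical iterative proportional fitting (raking) step below adjusts an R×C table to known row and column margins. It illustrates only the problem structure; it is not the paper's lclphom algorithm, which is based on linear programming and, unlike seed-dependent raking, yields a unique solution.

```python
def ipf(seed, row_margins, col_margins, iters=50):
    """Iterative proportional fitting of a table to given margins."""
    t = [row[:] for row in seed]
    for _ in range(iters):
        for i, rm in enumerate(row_margins):          # rescale rows to row margins
            s = sum(t[i])
            t[i] = [v * rm / s for v in t[i]]
        for j, cm in enumerate(col_margins):          # rescale columns to col margins
            s = sum(row[j] for row in t)
            for row in t:
                row[j] *= cm / s
    return t

table = ipf([[1, 1], [1, 1]], row_margins=[30, 70], col_margins=[40, 60])
print([[round(v, 2) for v in row] for row in table])  # → [[12.0, 18.0], [28.0, 42.0]]
```

With a uniform seed the fit is the independence table (outer product of margins over the total), exactly the kind of inner-cell forecast the abstract's methods refine.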
Mixed integer linear programming (MILP) is an NP-hard problem that can be solved by the branch-and-bound algorithm, which divides the original problem into several subproblems and forms a search tree. For each subproblem, the linear programming (LP) relaxation can be solved to obtain a bound for guiding subsequent decisions. Recently, with the increasing dimension of MILPs in different applications, accelerating the solution process has become a huge challenge. In this survey, we summarize techniques and trends for speeding up MILP solving from two perspectives. First, we present different approaches to simplex initialization, which can help accelerate the solution of the LP relaxation for each subproblem. Second, we introduce learning-based techniques in branch-and-bound algorithms to improve decision making in tree search. We also propose several potential directions and extensions to further enhance the efficiency of solving different MILP problems.
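The branch-and-bound scheme the survey describes can be sketched on a tiny instance. The 0/1 knapsack problem and its fractional relaxation below are illustrative stand-ins for a general MILP and its LP relaxation: each node's relaxation value bounds the whole subtree, so a subtree whose bound cannot beat the incumbent is pruned without being explored.

```python
def solve(values, weights, cap):
    """Branch and bound for 0/1 knapsack with a fractional-relaxation bound."""
    # items must be sorted by value density for the greedy fractional bound
    order = sorted(range(len(values)), key=lambda j: values[j] / weights[j],
                   reverse=True)
    values = [values[j] for j in order]
    weights = [weights[j] for j in order]
    best = 0

    def bound(i, val, room):
        # relaxation: fill remaining capacity greedily, allowing fractions
        b = val
        for j in range(i, len(values)):
            take = min(1.0, room / weights[j])
            b += take * values[j]
            room -= take * weights[j]
            if room <= 0:
                break
        return b

    def branch(i, val, room):
        nonlocal best
        if i == len(values):
            best = max(best, val)
            return
        if bound(i, val, room) <= best:
            return                                   # prune this subtree
        if weights[i] <= room:                       # child: take item i
            branch(i + 1, val + values[i], room - weights[i])
        branch(i + 1, val, room)                     # child: skip item i

    branch(0, 0, cap)
    return best

print(solve([60, 100, 120], [10, 20, 30], 50))       # → 220
```

The two roles highlighted by the survey appear here directly: a cheaper relaxation solve (simplex initialization in the LP case) speeds up `bound`, and smarter branching decisions (the learning-based techniques) shrink the tree explored by `branch`.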
As High Voltage Direct Current (HVDC) facilitates inter-regional power trading, the Network Flow (NF) based tie-line dispatching framework is employed for market clearing due to its effective quantification of transmission costs. However, the NF-based framework has three limitations: extensive model size, an inefficient solution approach, and occasional infeasibility, which confine its applicability to large-scale problems. To address these concerns, a novel tie-line dispatching framework is proposed in this paper, including an enhanced modeling technique, an effective decomposition strategy, and an improved coordination algorithm for the optimization problem. First, a dispatching model based on the Minimum Power Unit (MPU) is introduced to eliminate redundant information and reduce the scale of the problem. Second, a model decomposition strategy called Virtual Node Decoupling (VND) is presented to enhance computational efficiency. Lastly, a Checked Lagrangian Relaxation (CLR) algorithm is developed to rectify any infeasibility. To validate the effectiveness of the proposed framework, a case study was conducted on the IEEE-300 system. The outcomes demonstrate that the MPU-VND-CLR framework accurately acquires feasible dispatching solutions. Moreover, for large-scale problems with over 60,000 optimization variables, the model size is reduced to 1/8 of its original scale, while the solution time is shortened to 1/3. This advancement makes it possible to accommodate a twenty-fold increase in market entities within the tie-line dispatching model.
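The CLR algorithm belongs to the Lagrangian relaxation family, whose basic dual-ascent mechanics can be sketched on a toy coordination problem. The two quadratic regional costs and the coupling constraint x + y = 10 are illustrative assumptions; the "checked" step that repairs infeasible primal solutions is the paper's contribution and is not reproduced here.

```python
def dual_ascent(demand=10.0, steps=200, lr=0.1):
    """Relax the coupling constraint x + y = demand with multiplier lam."""
    lam = 0.0
    x = y = 0.0
    for _ in range(steps):
        # with regional costs x^2 and 2*y^2, the relaxed minimizers are closed-form
        x = lam / 2                       # argmin_x  x^2   - lam * x
        y = lam / 4                       # argmin_y  2*y^2 - lam * y
        lam += lr * (demand - x - y)      # subgradient step on the dual
    return x, y, lam

x, y, lam = dual_ascent()
print(round(x, 3), round(y, 3))           # → 6.667 3.333
```

Each region solves its own small problem given the price signal lam, and the coordinator only updates the multiplier, which is exactly the decomposition pattern that makes relaxation attractive for inter-regional dispatch.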
Railway alignment design is a crucial but difficult task that must trade off many objective factors. Although several Multi-objective Intelligent Alignment Optimization (M-IAO) methods have been proposed, previous methods may still be limited by low convergence performance with more than three objectives. In response, a Many-objective Intelligent Alignment Optimization (Ma-IAO) method is presented in this paper. First, a six-objective Ma-IAO model is built considering three kinds of objective factors in railway design, namely economic, geologic, and ecologic factors. Then, a Particle Swarm Optimization with Strengthened Pareto Dominance Analysis (SPDA-PSO) is developed to solve the model. Two types of external archives are designed to store and update nondominated solutions during optimization, using a specifically proposed strengthened Pareto dominance criterion (termed L-C-dominance). Afterward, two elite selection operators are devised to search for Pareto corner and knee points within the external archives as the best particle individuals for guiding the SPDA-PSO evolution. Lastly, the proposed method is applied to a complex real-world railway example. Through comparisons with a contemporary M-IAO method and the manual work of human designers, its effectiveness is confirmed via detailed data analyses.
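The archive updates above rest on the standard Pareto dominance relation, which the paper's strengthened L-C-dominance criterion extends. A minimal dominance check and nondominated filter (for minimization) follows; the point set is illustrative.

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep the points not dominated by any other point (an external archive)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(nondominated(pts))  # → [(1, 5), (2, 3), (4, 1)]
```

With six objectives, plain dominance filters out very little (most points become mutually nondominated), which is precisely the convergence problem that motivates a strengthened criterion such as L-C-dominance.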
Differentially private histograms (DP-Histograms) are integral to data publication and privacy preservation efforts. However, conventional DP-Histograms often fail to preserve valid statistical information and the essential characteristics of the original data. This paper shows that invalid variance is an inherent shortcoming of general DP-Histograms and introduces a novel algorithm, the Differentially Private Histogram with Valid Statistics (VSDPH), to overcome this problem. VSDPH, grounded in linear programming and the bounded Lipschitz distance, efficiently generates DP histograms while preserving the valid statistics of the original data. Our theoretical analysis demonstrates that histograms produced by VSDPH maintain asymptotically valid variance, and we establish an upper bound based on the 1-Wasserstein distance. Through experiments, we validate that VSDPH accurately preserves the statistical characteristics of the original data, bringing the resulting histograms closer to the originals.
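The "general DP-Histogram" baseline whose variance distortion the paper analyzes is typically the Laplace mechanism applied to bin counts, sketched below. The data, bins, and epsilon are illustrative; VSDPH's LP-based construction is the paper's contribution and is not reproduced here.

```python
import math
import random

def laplace_noise(scale):
    # inverse-CDF sampling of a Laplace variate (the stdlib has no laplace)
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_histogram(data, bins, epsilon=1.0):
    counts = [0] * len(bins)
    for x in data:
        for i, (lo, hi) in enumerate(bins):
            if lo <= x < hi:
                counts[i] += 1
                break
    # one record changes exactly one bin count by 1, so the sensitivity is 1
    return [c + laplace_noise(1.0 / epsilon) for c in counts]

data = [0.1, 0.2, 0.4, 0.7, 0.8, 0.9]
bins = [(0.0, 0.5), (0.5, 1.0)]
print(dp_histogram(data, bins, epsilon=1.0))
```

Because independent noise is added to every bin, the published counts can be negative and their spread no longer reflects the data's true variance, which is the invalidity the abstract targets.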
Error-correcting codes resilient to synchronization errors such as insertions and deletions are known as insdel codes. In this paper, we present several new combinatorial upper and lower bounds on the maximum size of q-ary insdel codes. Our main upper bound is a sphere-packing bound obtained by solving a linear programming (LP) problem. It improves upon previous results for cases when the distance d or the alphabet size q is large. Our first lower bound is derived from a connection between insdel codes and matchings in special hypergraphs. This lower bound, together with our upper bound, shows that for fixed block length n and edit distance d, when q is sufficiently large, the maximum size of insdel codes is q^(n - d/2 + 1) / C(n, d/2 - 1) · (1 ± o(1)), where C(n, k) denotes the binomial coefficient. The second lower bound refines Alon et al.'s recent logarithmic improvement on Levenshtein's GV-type bound and extends its applicability to large q and d.
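The edit distance used above counts the minimum number of insertions and deletions transforming one word into another; it equals len(a) + len(b) - 2·LCS(a, b), where LCS is the longest common subsequence. A short dynamic-programming sketch follows; the example strings are illustrative.

```python
def insdel_distance(a, b):
    """Minimum number of insertions and deletions turning a into b."""
    # dynamic program for the longest common subsequence length
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ca == cb else max(dp[i-1][j], dp[i][j-1])
    return len(a) + len(b) - 2 * dp[len(a)][len(b)]

print(insdel_distance("0110", "0101"))  # → 2
```

An insdel code with minimum edit distance d is then a set of length-n words whose pairwise `insdel_distance` is at least d, and the abstract's bounds estimate how large such a set can be.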