ISBN:
(Print) 9783031210167; 9783031210174
In the dispersion problem, a set of k co-located mobile robots must relocate themselves to distinct nodes of an unknown network. The network is modeled as an anonymous graph G = (V, E), whose nodes are not labeled. The edges incident to a node v of degree d are labeled with port numbers in the range {0, 1, ..., d − 1} at v. The robots have unique IDs in the range [0, L], where L ≥ k, and are initially placed at a source node s. Each robot knows only its own ID; it does not know the IDs of the other robots or the values of L or k. Dispersion has traditionally been achieved under the assumption of one of two types of communication ability: (a) robots at the same node can exchange messages with each other, or (b) any two robots in the network can exchange messages. This paper investigates whether this communication ability among co-located robots is absolutely necessary to achieve dispersion. We establish that, even in the absence of any communication ability, dispersion by a set of mobile robots can be achieved in a much weaker model, in which a robot at a node v has access only to the following very restricted information at the beginning of each round: (1) am I alone at v? (2) did the number of robots at v increase or decrease compared to the previous round? We propose a deterministic distributed algorithm that achieves dispersion on any given graph G = (V, E) in time O(k log L + k² log Δ), where Δ is the maximum degree of a node in G. Further, each robot uses O(log L + log Δ) additional memory. We also prove that dispersion cannot be achieved by a set of mobile robots with o(log L + log Δ) additional memory.
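As a minimal illustration of the weak sensing model described above (not the paper's algorithm), the per-round information available to a robot can be sketched as a two-part observation; the class and method names below are hypothetical:

```python
class NodeSensor:
    """Sketch of what a robot may observe at its current node each round,
    under the restricted model: (1) am I alone? (2) did the robot count
    at this node increase or decrease since the previous round?"""

    def __init__(self):
        self.prev_count = None  # no observation before the first round

    def observe(self, current_count):
        alone = (current_count == 1)
        if self.prev_count is None:
            trend = 0            # first round: no comparison possible
        elif current_count > self.prev_count:
            trend = +1           # number of robots at this node increased
        elif current_count < self.prev_count:
            trend = -1           # number of robots at this node decreased
        else:
            trend = 0
        self.prev_count = current_count
        return alone, trend
```

Note that the robot never learns the actual count, only these two signals, which is what makes the model strictly weaker than message exchange among co-located robots.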
In recent years, the collaborative operation of multiple unmanned aerial vehicles (UAVs) has become an important advance in drone technology. Research on multi-UAV collaborative flight-path planning has garnered widespread attention in the drone field, demonstrating unique advantages in complex task execution, large-scale monitoring, and disaster response. As one of the core technologies of multi-UAV collaborative operations, progress in trajectory-planning algorithms directly impacts the efficiency and safety of UAV collaborative operations. This paper first reviews the application and research progress of path-planning algorithms based on centralized and distributed control, as well as heuristic algorithms, in multi-UAV collaborative trajectory planning. It then summarizes the main technical challenges in multi-UAV path planning and proposes countermeasures for multi-UAV collaborative planning in government, business, and academia. Finally, it looks ahead to future research directions, providing ideas for subsequent studies in multi-UAV collaborative trajectory-planning technology.
We study the cooperative data exchange problem for fully connected networks. In this problem, nodes make broadcast transmissions to recover a file consisting of K independent packets, and each node initially possesses only a subset of the packets. We propose (d, K)-Basis Searching, a deterministic polynomial-time minimization approach, to calculate the minimum rate for this problem. (d, K)-Basis Searching has strictly lower complexity than the state-of-the-art algorithms, which are based on submodular function minimization. We extend our algorithm to a generalized problem: the so-called successive local omniscience problem.
In this paper, we address the problem of transforming an ideal into normal position. We present a deterministic algorithm (based on linear algebra techniques) that finds a suitable linear change of variables to transform a given zero-dimensional (not necessarily radical) ideal into normal position. The main idea of this algorithm is to achieve this position via a sequence of elementary linear changes, i.e., each change involves only two variables. Furthermore, we give a complete complexity analysis of all the algorithms described in this paper, based on a sharp upper bound for the maximal number of required elementary linear changes in the special case of a bivariate polynomial ideal. As an application of this investigation, we conclude the paper by giving a sharper complexity bound for computing a primary decomposition of a zero-dimensional ideal. (C) 2020 Elsevier B.V. All rights reserved.
We propose a new polynomial-time deterministic algorithm that produces an approximate solution for the travelling salesperson problem. The proposed algorithm ranks cities based on priorities calculated using a power function of the means and standard deviations of their distances from other cities, and then connects the cities to their neighbours in order of priority. When connecting a city, a neighbour is selected based on the neighbours' priorities, calculated as another power function that additionally includes their distance from the focal city to be connected. This repeats until all the cities are connected into a single loop. The time complexity of the proposed algorithm is O(n²), where n is the number of cities. Numerical evaluation shows that, despite its simplicity, the proposed algorithm produces shorter tours with lower time complexity than other conventional tour-construction heuristics. The proposed algorithm can be used by itself or as an initial tour generator for other, more complex heuristic optimisation algorithms.
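The city-ranking step described above can be sketched as follows. The exact power function is not given in the abstract, so the form below (mean and standard deviation each raised to hypothetical exponents `alpha` and `beta`, then multiplied) is an assumption for illustration only:

```python
import math

def city_priorities(dist, alpha=1.0, beta=1.0):
    """Hypothetical priority per city: a power function of the mean and
    standard deviation of its distances to all other cities. `dist` is a
    full symmetric distance matrix; the exponents are placeholders, since
    the paper's exact power function is not specified in the abstract."""
    n = len(dist)
    priorities = []
    for i in range(n):
        ds = [dist[i][j] for j in range(n) if j != i]
        mean = sum(ds) / len(ds)
        std = math.sqrt(sum((d - mean) ** 2 for d in ds) / len(ds))
        # small epsilon keeps the priority well-defined when std == 0
        priorities.append((mean ** alpha) * ((std + 1e-12) ** beta))
    return priorities
```

Cities would then be processed in order of these priorities, each connected to a neighbour chosen by a second, distance-aware power function, until a single tour is formed.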
ISBN:
(Digital) 9783030592677
ISBN:
(Print) 9783030592660; 9783030592677
In the paper, we propose a deterministic approximation algorithm for maximizing a generalized monotone submodular function subject to a matroid constraint. The function is generalized through a curvature parameter c ∈ [0, 1], and essentially reduces to a submodular function when c = 1. Our algorithm employs the deterministic approximation devised by Buchbinder et al. [3] for the c = 1 case of the problem as a building block, and eventually attains an approximation ratio of (1 + g_c(x) + Δ · [3 + c − (2 + c)x − (1 + c)g_c(x)]) / (2 + c + (1 + c)(1 − x)) for the curvature parameter c ∈ [0, 1] and for a calibrating parameter that may be any x ∈ [0, 1]. For c = 1, the ratio attains 0.5008 by setting x = 0.9, coinciding with the renowned performance guarantee for the problem. Moreover, when the submodular set function degenerates to a linear function, our generalized algorithm always produces optimum solutions and thus achieves an approximation ratio of 1.
We investigated nearest-neighbor density-based clustering for hyperspectral image analysis. Four existing techniques were considered that rely on a K-nearest neighbor (KNN) graph to estimate local density and to propagate labels through algorithm-specific labeling decisions. We first improved two of these techniques, a KNN variant of the density peaks clustering method dpc, and a weighted-mode variant of knnclust, so the four methods use the same input KNN graph and only differ by their labeling rules. We propose two regularization schemes for hyperspectral image analysis: (i) a graph regularization based on mutual nearest neighbors (MNN) prior to clustering to improve cluster discovery in high dimensions; (ii) a spatial regularization to account for correlation between neighboring pixels. We demonstrate the relevance of the proposed methods on synthetic data and hyperspectral images, and show they achieve superior overall performances in most cases, outperforming the state-of-the-art methods by up to 20% in kappa index on real hyperspectral images.
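The mutual-nearest-neighbor (MNN) graph regularization mentioned in (i) amounts to keeping only reciprocated KNN edges. The sketch below is a minimal illustration of that idea with hypothetical function names, not the authors' implementation:

```python
import numpy as np

def knn_graph(X, k):
    """Indices of the k nearest neighbours of each row of X (Euclidean)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbour
    return np.argsort(d, axis=1)[:, :k]

def mutual_nn_edges(knn):
    """Keep edge (i, j) only if j is among i's k nearest neighbours AND
    i is among j's k nearest neighbours (the MNN criterion)."""
    n = knn.shape[0]
    edges = set()
    for i in range(n):
        for j in knn[i]:
            if i in knn[j]:
                edges.add((min(i, int(j)), max(i, int(j))))
    return sorted(edges)
```

Pruning non-reciprocated edges in this way tends to disconnect spurious links between clusters, which is why it helps cluster discovery in high-dimensional spectral spaces.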
We study the problem of computing a minimum cut in a simple, undirected graph and give a deterministic O(m log² n log log² n) time algorithm. This improves on both the best previously known deterministic running time of O(m log¹² n) (Kawarabayashi and Thorup [J. ACM, 66 (2018), Art. 4]) and the best previously known randomized running time of O(m log³ n) (Karger [J. ACM, 47 (2000), pp. 46-76]) for this problem, though Karger's algorithm can be further applied to weighted graphs. Moreover, our result extends to balanced directed graphs, where the balance of a directed graph captures how close the graph is to being Eulerian. Our approach uses the Kawarabayashi and Thorup graph compression technique, which repeatedly finds low-conductance cuts. To find these cuts, they use a diffusion-based local algorithm; we instead use a flow-based local algorithm and suitably adjust their framework to work with our flow-based subroutine. Both flow- and diffusion-based methods have a long history of being applied to finding low-conductance cuts. Diffusion algorithms have several variants that are naturally local, while it is more complicated to make flow methods local. Some prior work has proven nice properties for local flow-based algorithms with respect to improving or cleaning up low-conductance cuts. Our flow subroutine, however, is the first that is both local and produces low-conductance cuts. Thus, it may be of independent interest.
Distance transforms (DTs) are standard tools in image analysis, with applications in image registration and segmentation. The DT is based on extremal (minimal) distance values and is therefore highly sensitive to noise. We present a stochastic distance transform (SDT) based on discrete random sets, in which a model of element-wise probability is utilized and the SDT is computed as the first moment of the distance distribution to the random set. We present two methods for computing the SDT and analyze them w.r.t. accuracy and complexity. Further, we propose a method, utilizing kernel density estimation, for estimating probability functions and associated random sets to use with the SDT. We evaluate the accuracy of the SDT and the proposed framework on images of thin line structures and disks corrupted by salt-and-pepper noise and observe excellent performance. We also insert the SDT into a segmentation framework and apply it to overlapping objects, where it provides substantially improved performance over previous methods. Finally, we evaluate the SDT on simulated images from localization microscopy, a state-of-the-art super-resolution microscopy technique that yields highly spatially localized but noisy point clouds, and observe very good performance.
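The first moment of the distance distribution to a discrete random set can be sketched for a single query point as follows. The independent point inclusions and the cap `d_max` for the event that the random set is empty are assumptions made for illustration; they are not necessarily the paper's exact definition:

```python
import math

def stochastic_distance(x, points, probs, d_max):
    """Expected distance from x to a discrete random set in which each
    point is present independently with its given probability. If no
    point happens to be present, the distance is capped at d_max.
    All parameter names here are illustrative."""
    # Sort candidate points by their distance to the query point.
    pairs = sorted((math.dist(x, p), q) for p, q in zip(points, probs))
    expected, none_closer = 0.0, 1.0
    for d, q in pairs:
        # This point is the nearest *present* point with probability
        # q * P(all strictly closer points are absent).
        expected += d * q * none_closer
        none_closer *= (1.0 - q)
    return expected + none_closer * d_max  # empty-set event
```

With all probabilities equal to 1 this reduces to the ordinary (deterministic) nearest-point distance, which is the sense in which the SDT generalizes the classical DT.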
The present work performs a numerical study of energy removal in a heated tubular array subjected to an external flow. Given that a large variety of tubular arrangements can exist, this work develops a tubular array that does not rely on any predefined initial arrangement. Using the principles of Constructal Theory (or Law) and a positioning function dependent on the velocity and temperature fields, the location where each tube should be placed is calculated in a deterministic way. Pressure drop is not taken into account as a performance indicator in this first implementation of the algorithm. To validate the proposed methodology, the Constructal array is compared with standard aligned and staggered arrangements suggested in the literature. The minimum distance between tubes (p) is considered a degree of freedom, and four variations are studied: p = 1D, p = 1.25D, p = 1.5D, and p = 2D, where D is the tube diameter. We simulate a transient, incompressible, laminar flow of a Newtonian fluid in a two-dimensional domain with forced convection and a Prandtl number of 0.71. Results are computed once the flow reaches the steady-state condition. Three values of the Reynolds number are evaluated: Re_D = 10, 50, and 100. Thermal analysis of the formed patterns shows that the best thermal performance was obtained neither with p = 1D nor with p = 2D; i.e., the tube distance influences the array's formation and, consequently, the heat exchange. In all analyses performed, the Constructal array configuration showed higher energy removal (up to 71%) than the aligned and staggered arrangements, which highlights the capability of the proposed method. The Constructal array for p = 1.5D showed the most homogeneous distribution and proved the most effective in four of the six studied cases.