There have been numerous attempts at solving the optimal camera placement problem across multiple applications. Exact linear-programming-based as well as heuristic combinatorial optimization methods have been shown to provide optimal or near-optimal solutions to this problem. Working over a discrete space model is the general practice when solving the camera placement problem. However, discretized environments often limit these methods to small-scale datasets, because resource and time requirements grow exponentially with the number of 3D points sampled from the discrete space. We propose a multi-resolution approach that enables existing optimization algorithms to be used on large real-world problems modelled with high-resolution 3D grids. Our method works by repeatedly grouping the given discrete set of possible camera locations into clusters of points, resulting in multiple resolution levels. Camera placement optimization is repeated for all resolution levels while propagating the optimized solution from low to high resolutions. Our experiments on both simulated and real data with grids of varying sizes show that, using our multi-resolution approach, existing camera placement optimization methods can be applied even to high-resolution grids consisting of hundreds of thousands of points. Our results also show that grouping points together by exploiting the underlying 3D geometry to optimize camera poses is not only significantly faster than optimizing over the entire set of samples but also provides better camera coverage.
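As a concrete illustration of the coarse-to-fine strategy described above, the sketch below clusters candidate camera locations, solves a small greedy max-coverage placement over one representative per cluster, and then re-optimizes only within the selected clusters. The clustering method (plain k-means), the radius-based visibility model, and all sizes and names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(points, k, iters=20):
    """Plain k-means used to group candidate camera locations into clusters."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

def greedy_placement(visibility, budget):
    """Greedy max-coverage: repeatedly pick the camera covering the most uncovered points."""
    covered = np.zeros(visibility.shape[1], dtype=bool)
    chosen = []
    for _ in range(budget):
        gains = (visibility & ~covered).sum(axis=1)
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break
        chosen.append(best)
        covered |= visibility[best]
    return chosen, covered.mean()

# Toy scene: candidate camera locations and scene points to be covered.
candidates = rng.uniform(0, 100, size=(2000, 3))
scene = rng.uniform(0, 100, size=(1000, 3))
sees = np.linalg.norm(candidates[:, None] - scene[None], axis=2) < 30.0  # toy visibility model

# Coarse level: optimize over one representative per cluster of candidate locations.
labels = kmeans(candidates, k=200)
reps = np.array([np.flatnonzero(labels == j)[0] for j in np.unique(labels)])
coarse_sel, cov = greedy_placement(sees[reps], budget=10)
print("coarse coverage:", cov)

# Fine level: re-optimize only over candidates inside the clusters chosen at the coarse level.
keep = np.isin(labels, np.unique(labels)[coarse_sel])
fine_idx = np.flatnonzero(keep)
fine_sel, cov = greedy_placement(sees[fine_idx], budget=10)
print("refined coverage:", cov)
```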
We introduce and study ℓ_p-NORM-MULTIWAY-CUT: the input is an undirected graph with non-negative edge weights along with k terminals, and the goal is to find a partition of the vertex set into k parts, each containing exactly one terminal, so as to minimize the ℓ_p-norm of the cut values of the parts. This is a unified generalization of min-sum multiway cut (when p = 1) and min-max multiway cut (when p = infinity), both of which are well-studied classic problems in the graph partitioning literature. We show that ℓ_p-NORM-MULTIWAY-CUT is NP-hard for a constant number of terminals and is NP-hard in planar graphs. On the algorithmic side, we design an O(log^{1.5} n · log^{0.5} k)-approximation for all p >= 1. We also show an integrality gap of Omega(k^{1-1/p}) for a natural convex program and an O(k^{1-1/p-epsilon})-inapproximability for any constant epsilon > 0 assuming the Small Set Expansion Hypothesis.
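Spelled out from the definition above, with δ(V_i) denoting the set of edges leaving part V_i and w_e the edge weights, the objective can be written as:

```latex
\min_{\substack{(V_1,\dots,V_k)\ \text{partitions}\ V \\ t_i \in V_i \ \text{for all } i}}
\left( \sum_{i=1}^{k} w\bigl(\delta(V_i)\bigr)^{p} \right)^{1/p},
\qquad
w\bigl(\delta(V_i)\bigr) = \sum_{e \in \delta(V_i)} w_e .
```

Taking p = 1 recovers the min-sum objective (each cut edge appears in two boundaries, so the sum is twice the total cut weight, which does not change the optimal partition), while letting p tend to infinity recovers the min-max objective, i.e., the largest cut value among the k parts.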
Increasing the connectivity of a graph is a pivotal challenge in robust network design. The weighted connectivity augmentation problem is a common version of the problem that takes link costs into consideration. The p...
Massive access is a critical design challenge for Internet of Things (IoT) networks. In this paper, we consider the grant-free uplink transmission of an IoT network with a multiple-antenna base station (BS) and a large number of single-antenna IoT devices. Taking into account the sporadic nature of IoT devices, we formulate the joint activity detection and channel estimation (JADCE) problem as a group-sparse matrix estimation problem. This problem can be solved with existing compressed sensing techniques, which, however, either suffer from high computational complexity or lack robustness. To this end, we propose a novel algorithm unrolling framework based on deep neural networks that simultaneously achieves low computational complexity and high robustness for solving the JADCE problem. Specifically, we map the original iterative shrinkage-thresholding algorithm (ISTA) onto an unrolled recurrent neural network (RNN), thereby improving the convergence rate and computational efficiency through end-to-end training. Moreover, the proposed algorithm unrolling approach inherits the structure and domain knowledge of ISTA, thereby maintaining the algorithm's robustness and allowing it to handle non-Gaussian preamble sequence matrices in massive access. Through rigorous theoretical analysis, we further simplify the unrolled network structure by reducing the redundant training parameters. Furthermore, we prove that the simplified unrolled deep neural network structures enjoy a linear convergence rate. Extensive simulations based on various preamble signatures show that the proposed unrolled networks outperform existing methods in terms of convergence rate, robustness and estimation accuracy.
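To make the baseline concrete, here is a minimal numpy sketch of the plain ISTA iteration with row-wise (group) soft thresholding for the group-sparse model Y ≈ AX that the unrolled RNN is built from. The preamble matrix, noise level, threshold, and problem sizes are toy placeholders; the fixed step size and threshold used here are exactly the quantities the unrolled network would instead learn end to end.

```python
import numpy as np

def group_soft_threshold(X, theta):
    """Row-wise (group) soft thresholding: shrink each device's channel row toward zero."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - theta / np.maximum(norms, 1e-12))
    return scale * X

def ista_jadce(Y, A, theta, n_iters=50):
    """Plain ISTA for the group-sparse model Y ≈ A X; the unrolled RNN replaces the
    fixed step size and threshold below with trained, per-layer parameters."""
    step = 1.0 / np.linalg.norm(A, ord=2) ** 2   # 1 / Lipschitz constant of the gradient
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_iters):
        X = group_soft_threshold(X + step * A.T @ (Y - A @ X), theta * step)
    return X

# Toy instance: N devices, K of them active, L-length preambles, M BS antennas.
rng = np.random.default_rng(1)
N, L, M, K = 200, 60, 8, 10
A = rng.standard_normal((L, N)) / np.sqrt(L)          # preamble (signature) matrix
X_true = np.zeros((N, M))
active = rng.choice(N, K, replace=False)
X_true[active] = rng.standard_normal((K, M))          # channels of the active devices
Y = A @ X_true + 0.01 * rng.standard_normal((L, M))

X_hat = ista_jadce(Y, A, theta=0.1)
detected = np.flatnonzero(np.linalg.norm(X_hat, axis=1) > 0.05)
print(sorted(active.tolist()), "->", detected.tolist())
```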
This article considers a federated temporal difference (TD) learning algorithm and provides both asymptotic and finite-time analyses. To protect each worker agent's cost information from being acquired by possible attackers, we propose a privacy-preserving variant of the algorithm that adds perturbation to the exchanged information. We establish a rigorous differential privacy guarantee using the moments accountant and derive an upper bound on the utility loss of the privacy-preserving algorithm. Evaluations are also provided to corroborate the efficiency of the algorithms.
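A minimal sketch of the perturbation idea, under illustrative assumptions (a tabular random-walk environment, one-hot features, and a fixed Gaussian noise scale SIGMA): each worker computes a local TD(0) update on its private trajectory, perturbs it before sharing, and the server averages the perturbed updates. This is not the paper's exact protocol or noise calibration.

```python
import numpy as np

rng = np.random.default_rng(2)
N_STATES, N_WORKERS, GAMMA, ALPHA, SIGMA = 10, 4, 0.95, 0.1, 0.05

def features(s):
    """One-hot features, i.e., a tabular value function."""
    phi = np.zeros(N_STATES)
    phi[s] = 1.0
    return phi

def local_td_update(theta, n_steps=50):
    """One worker's TD(0) update direction computed on its own (private) trajectory."""
    delta_theta = np.zeros_like(theta)
    s = rng.integers(N_STATES)
    for _ in range(n_steps):
        s_next = min(max(s + rng.choice([-1, 1]), 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0       # private cost/reward signal
        td_error = reward + GAMMA * theta @ features(s_next) - theta @ features(s)
        delta_theta += ALPHA * td_error * features(s)
        s = s_next
    return delta_theta

theta = np.zeros(N_STATES)
for _ in range(200):
    # Each worker perturbs its update with Gaussian noise before sharing it.
    noisy_updates = [local_td_update(theta) + SIGMA * rng.standard_normal(N_STATES)
                     for _ in range(N_WORKERS)]
    theta += np.mean(noisy_updates, axis=0)                    # server-side averaging
print(np.round(theta, 2))
```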
Modern search services often provide multiple options to rank the search results, e.g., sort "by relevance", "by price" or "by discount" in e-commerce. While the traditional rank by relevance effectively places the relevant results in the top positions of the results list, the rank by attribute could place many marginally relevant results in the head of the results list leading to poor user experience. In the past, this issue has been addressed by investigating the relevance-aware filtering problem, which asks to select the subset of results maximizing the relevance of the attribute-sorted list. Recently, an exact algorithm has been proposed to solve this problem optimally. However, the high computational cost of the algorithm makes it impractical for the Web search scenario, which is characterized by huge lists of results and strict time constraints. For this reason, the problem is often solved using efficient yet inaccurate heuristic algorithms. In this article, we first prove the performance bounds of the existing heuristics. We then propose two efficient and effective algorithms to solve the relevance-aware filtering problem. First, we propose OPT-Filtering, a novel exact algorithm that is faster than the existing state-of-the-art optimal algorithm. Second, we propose an approximate and even more efficient algorithm, epsilon-Filtering, which, given an allowed approximation error epsilon, finds a (1-epsilon)-optimal filtering, i.e., the relevance of its solution is at least (1-epsilon) times the optimum. We conduct a comprehensive evaluation of the two proposed algorithms against state-of-the-art competitors on two real-world public datasets. Experimental results show that OPT-Filtering achieves a significant speedup of up to two orders of magnitude with respect to the existing optimal solution, while epsilon-Filtering further improves this result by trading effectiveness for efficiency. In particular, experiments show that epsilon-Filtering can achieve quasi-opt
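To make the problem statement concrete, the toy brute force below enumerates all non-empty subsets of a short result list, sorts each subset by the attribute, and keeps the subset whose sorted list has the highest relevance. The DCG-style list-relevance metric is an assumption for illustration only, and this is neither OPT-Filtering nor epsilon-Filtering: its cost is exponential in the list length.

```python
from itertools import combinations
from math import log2

# Each result is (relevance, attribute), e.g. attribute = price, sorted ascending.
results = [(0.9, 30.0), (0.2, 5.0), (0.7, 12.0), (0.1, 8.0), (0.8, 20.0)]

def list_relevance(subset):
    """DCG-style relevance of the attribute-sorted list (illustrative metric)."""
    ordered = sorted(subset, key=lambda r: r[1])            # rank by attribute
    return sum(rel / log2(pos + 2) for pos, (rel, _) in enumerate(ordered))

def brute_force_filtering(results):
    """Exhaustive search over all non-empty subsets -- exponential, illustration only."""
    subsets = (c for k in range(1, len(results) + 1) for c in combinations(results, k))
    return max(subsets, key=list_relevance)

best = brute_force_filtering(results)
print(sorted(best, key=lambda r: r[1]), round(list_relevance(best), 3))
```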
This work focuses on providing robust line-of-sight (LoS) spatial multiplexing at flexible communication distances and directions. Considering oblique LoS uniform linear arrays, we first derive the rank-deficiency and orthogonality conditions for the LoS MIMO channel matrices. With this result, a topology with high spatial resolution on one side of the link is shown to guarantee a full-rank channel over a wide interval of distance and direction variations. Additionally, to reduce implementation cost and power consumption, we propose to use low amplitude-resolution quantizers at the high-spatial-resolution side. Numerical evaluations of systems with few-bit analog-to-digital converters (ADCs) show that the proposed design simultaneously achieves higher spectrum efficiency and higher energy efficiency than a conventional single-stream high-amplitude-resolution design over a wide signal-to-noise-ratio (SNR) range. Furthermore, we investigate channel equalization in the extreme case of 1-bit ADCs. After providing a new viewpoint on the generalized approximate message passing (GAMP) algorithm based on constrained Bethe free energy minimization, our bit-error-rate simulations show that the GAMP algorithm can significantly reduce the performance degradation due to coarse quantization and can significantly outperform the Bussgang-decomposition-based linear minimum-mean-square-error estimator, especially at high SNRs.
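The rank behaviour discussed above can be inspected with the standard phase-only LoS model between two uniform linear arrays, one of them tilted (oblique), by looking at the singular values of the channel matrix as the link distance varies. The array sizes, element spacings, tilt, and 28 GHz carrier below are arbitrary example values, not the paper's configuration.

```python
import numpy as np

C = 3e8
FC = 28e9                       # example carrier frequency (28 GHz)
LAM = C / FC

def los_channel(n_tx, n_rx, d_tx, d_rx, dist, tilt_rad):
    """Pure LoS channel between two ULAs; the receive array is tilted ('oblique')
    by tilt_rad relative to broadside and placed dist metres away."""
    tx = np.stack([np.zeros(n_tx),
                   (np.arange(n_tx) - (n_tx - 1) / 2) * d_tx], axis=1)
    rx_axis = np.array([np.sin(tilt_rad), np.cos(tilt_rad)])
    rx = np.array([dist, 0.0]) + np.outer((np.arange(n_rx) - (n_rx - 1) / 2) * d_rx, rx_axis)
    dists = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=2)
    return np.exp(-2j * np.pi * dists / LAM)        # phase-only LoS response

for dist in (5.0, 20.0, 80.0):
    H = los_channel(n_tx=8, n_rx=8, d_tx=0.1, d_rx=0.1, dist=dist, tilt_rad=np.deg2rad(30))
    sv = np.linalg.svd(H, compute_uv=False)
    print(f"D = {dist:5.1f} m  normalized singular values: {np.round(sv / sv[0], 3)}")
```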
We study the discrete Bamboo Garden Trimming problem (BGT), in which we are given n bamboos with different growth rates. At the end of each day, one bamboo can be cut down to height zero. The goal in BGT is to devise a perpetual schedule of cuts that minimizes the height of the tallest bamboo ever attained. Here, we improve the current best approximation guarantee by designing a 12/7-approximation algorithm.
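The abstract gives no algorithmic details, so as a point of reference the sketch below simulates the simple "Reduce-Max" greedy baseline for discrete BGT (cut whichever bamboo is currently tallest each day) and reports the tallest height ever observed; this is a well-known heuristic used for comparison, not the 12/7-approximation algorithm of the paper.

```python
import numpy as np

def reduce_max(rates, days=10_000):
    """Greedy 'Reduce-Max' baseline for discrete BGT: every day all bamboos grow by
    their rate, then the currently tallest one is cut to height zero. Returns the
    tallest height ever observed, i.e., the quantity BGT asks to minimise."""
    rates = np.asarray(rates, dtype=float)
    heights = np.zeros_like(rates)
    tallest_ever = 0.0
    for _ in range(days):
        heights += rates
        tallest_ever = max(tallest_ever, heights.max())
        heights[np.argmax(heights)] = 0.0
    return tallest_ever

rates = [4.0, 2.0, 1.0, 1.0]        # growth rates of a toy instance
print(reduce_max(rates))
```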
Reinforcement learning algorithms, such as hindsight experience replay (HER) and hindsight goal generation (HGG), have been able to solve challenging robotic manipulation tasks in multigoal settings with sparse rewards. HER achieves its training success through hindsight replays of past experience with heuristic goals, but underperforms in challenging tasks in which goals are difficult to explore. HGG enhances HER by selecting intermediate goals that are easy to achieve in the short term and likely to lead to the target goals in the long term. This guided exploration makes HGG applicable to tasks in which target goals are far away from the object's initial position. However, vanilla HGG is not applicable to manipulation tasks with obstacles, because the Euclidean metric used by HGG is not an accurate distance metric in such environments. Although grid-based HGG, guided by a handcrafted distance grid, can solve manipulation tasks with obstacles, a more practical method that can solve such tasks automatically is still in demand. In this article, we propose graph-based hindsight goal generation (G-HGG), an extension of HGG that selects hindsight goals based on shortest distances in an obstacle-avoiding graph, a discrete representation of the environment. We evaluated G-HGG on four challenging manipulation tasks with obstacles, where significant improvements in both sample efficiency and overall success rate are shown over HGG and HER. Videos can be viewed at https://***/ghgg.
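The graph-distance ingredient can be sketched in a few lines: discretise the workspace into a grid, drop the cells blocked by obstacles, and run breadth-first search to obtain obstacle-avoiding shortest distances that can stand in for the Euclidean metric when ranking candidate hindsight goals. The grid layout below is made up for the example and is not the paper's environment representation.

```python
from collections import deque

# 0 = free cell, 1 = obstacle; a discretised slice of the workspace.
GRID = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]

def graph_distances(grid, start):
    """BFS over the obstacle-avoiding grid graph: shortest path lengths (in cells)
    from `start` to every reachable free cell."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

d = graph_distances(GRID, start=(0, 0))
# The Euclidean distance from (0, 0) to (2, 2) is about 2.8 cells, but the
# obstacle-avoiding graph distance is longer because the wall must be bypassed.
print(d[(2, 2)])
```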
The minimum vertex cover problem (MVCP) is a well-known combinatorial optimization problem of graph theory. The MVCP is an NP (nondeterministic polynomial) complete problem, and its complexity grows exponentially with the size of the graph. No algorithm exists to date that can exactly solve the problem in deterministic polynomial time. However, several algorithms have been proposed that solve the problem approximately in a short polynomial time. Such algorithms are useful for large graphs, for which an exact solution of the MVCP is impossible with current computational resources. The MVCP has a wide range of applications in fields like bioinformatics, biochemistry, circuit design, electrical engineering, data aggregation, networking, internet traffic monitoring, pattern recognition, marketing and franchising, etc. This work aims to solve the MVCP approximately by a novel graph decomposition approach. The decomposition of the graph yields a subgraph that contains edges shared by triangular edge structures. This subgraph is covered to yield a subgraph that forms one or more Hamiltonian cycles or paths. In order to reduce the complexity of the algorithm, a new reduction strategy is also proposed, which can be used with any algorithm for solving the MVCP. Based on the graph decomposition and the reduction strategy, two algorithms are formulated to approximately solve the MVCP. The algorithms are tested on well-known standard benchmark graphs. The key feature of the results is a good approximation error ratio and an improvement in the optimum vertex cover values for a few graphs.
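The decomposition itself is not specified here in enough detail to reproduce, so for orientation the sketch below shows the classical maximal-matching 2-approximation for vertex cover, a standard textbook baseline rather than either of the algorithms proposed in the work.

```python
def matching_vertex_cover(edges):
    """Classical 2-approximation: greedily build a maximal matching and take both
    endpoints of every matched edge as the cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Toy graph given as an edge list.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (3, 4), (4, 5)]
cover = matching_vertex_cover(edges)
assert all(u in cover or v in cover for u, v in edges)   # every edge is covered
print(sorted(cover))
```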