We consider the problem of selecting an optimal set of sensors to estimate the states of linear dynamical systems. Specifically, the goal is to choose (at design-time) a subset of sensors (satisfying certain budget constraints) from a given set in order to minimize the trace of the steady state a priori or a posteriori error covariance produced by a Kalman filter. We show that the a priori and a posteriori error covariance-based sensor selection problems are both NP-hard, even under the additional assumption that the system is stable. We then provide bounds on the worst-case performance of sensor selection algorithms based on the system dynamics, and show that greedy algorithms are optimal for a certain class of systems. However, as a negative result, we show that certain typical objective functions are not submodular or supermodular in general. While this makes it difficult to evaluate the performance of greedy algorithms for sensor selection (outside of certain special cases), we show via simulations that these greedy algorithms perform well in practice. (C) 2016 Elsevier Ltd. All rights reserved.
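The abstract does not spell out the greedy selection procedure it analyzes, but the generic template is easy to sketch. Below is a minimal Python sketch; the scalar posterior-variance cost is our own toy stand-in for the trace of the Kalman error covariance, and `greedy_select`, `posterior_var`, and the `noise` values are illustrative names, not from the paper.

```python
def greedy_select(candidates, budget, cost):
    """Budgeted greedy: repeatedly add the candidate that most reduces the
    cost, stopping at the budget or when no candidate improves it."""
    selected, remaining = [], set(candidates)
    while len(selected) < budget and remaining:
        best = min(remaining, key=lambda s: cost(selected + [s]))
        if cost(selected + [best]) >= cost(selected):
            break  # no remaining sensor reduces the estimation error
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy stand-in cost: posterior variance of a scalar state (prior variance 1)
# fused from independent sensors with measurement-noise variances `noise`.
noise = {"a": 1.0, "b": 0.5, "c": 2.0}

def posterior_var(sensors):
    return 1.0 / (1.0 + sum(1.0 / noise[s] for s in sensors))
```

With this cost, the greedy rule picks the lowest-noise sensors first; the paper's point is that for the dynamic (Kalman) cost such greedy choices lack submodularity guarantees in general, yet still work well empirically.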
We introduce BubbleSearch, a general approach for extending priority-based greedy heuristics. Following the framework recently developed by Borodin et al., we consider priority algorithms, which sequentially assign values to elements in some fixed or adaptively determined order. BubbleSearch extends priority algorithms by selectively considering additional orders near an initial good ordering. While many notions of nearness are possible, we explore algorithms based on the Kendall-tau distance (also known as the BubbleSort distance) between permutations. Our contribution is to elucidate the BubbleSearch paradigm and experimentally demonstrate its effectiveness. (c) 2005 Published by Elsevier B.V.
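The paradigm can be sketched as follows: sample orderings near an initial one (each next element is drawn from the front of the remaining list with a geometric bias, so orderings at small Kendall-tau distance are most likely) and keep the ordering whose greedy outcome scores best. This is an illustrative approximation under our own assumptions, not the paper's exact procedure, and all names are ours.

```python
import random

def kendall_tau(p, q):
    """Number of discordant pairs (BubbleSort distance) between two
    permutations of the same items."""
    pos = {item: i for i, item in enumerate(q)}
    r = [pos[item] for item in p]
    n = len(r)
    return sum(1 for i in range(n) for j in range(i + 1, n) if r[i] > r[j])

def bubble_search(order, evaluate, trials=200, p=0.5, seed=0):
    """Sample orderings near `order`: each next element is taken from the
    front of the remaining list with probability p, otherwise the cursor
    advances, so nearby orderings dominate. Keep the best-scoring one."""
    rng = random.Random(seed)
    best_order, best_val = list(order), evaluate(order)
    for _ in range(trials):
        remaining, perm = list(order), []
        while remaining:
            i = 0
            while i < len(remaining) - 1 and rng.random() > p:
                i += 1
            perm.append(remaining.pop(i))
        val = evaluate(perm)
        if val > best_val:
            best_order, best_val = perm, val
    return best_order, best_val

# Toy objective: prefer orderings close to [1, 2, 3, 4].
best, val = bubble_search([4, 3, 2, 1], lambda q: -kendall_tau(q, [1, 2, 3, 4]))
```

In practice `evaluate` would run the underlying greedy heuristic on the sampled ordering and return the value of the resulting solution.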
For the One-Dimensional Road Side Unit Deployment (D1RD) problem, a greedy approximation algorithm named greedy2P3E was proposed two years ago, and its approximation ratio was proved to be at least 2/3 for the D1RD problem with EQual-radius RSUs (the D1RD-EQ problem). Can a better, or even tight, approximation ratio be established for greedy2P3E? In this paper, we re-examine the approximation ratio of greedy2P3E and determine its tight value. To this end, a greedy algorithm named greedy3P4 is first proposed and proved to have a tight approximation ratio of 3/4 for the D1RD-EQ problem. Then, using greedy3P4 as a bridge, 3/4 is also proved to be the tight approximation ratio of greedy2P3E, for all n >= 2. Comparative evaluations are performed on real cases using a real vehicle trajectory dataset. The results show that these greedy algorithms usually return near-optimal solutions, with profit exceeding 98% of the optimum, and that they clearly outperform the other typical algorithms tested.
Designing fast, distributed algorithms for multiagent problems is vital for many novel application domains. Greedy algorithms have been shown, in many multiagent contexts, to be an efficient way to arrive at good solutions quickly. In this work, we ask the following: can the performance of greedy algorithms be improved without sacrificing their linear run-time guarantees? For this, we take inspiration from incentive design in the game-theoretic literature. We consider a modified version of the greedy algorithm in which agents do not optimize the global objective directly. Instead, each agent is prescribed an auxiliary utility function (which may differ from the original objective function) that it optimizes. We show that, by designing these utility functions properly, the resulting performance guarantees of the greedy algorithm can increase significantly. We study this approach in the context of resource-allocation games, which model a rich variety of engineering applications. Interestingly, the performance guarantees of the modified greedy algorithm can come close to the best centralized performance guarantees. The main technical contribution of the article is the characterization of the performance guarantees through a linear program construction.
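The mechanism can be illustrated with a toy resource-coverage game: agents act in sequence, each maximizing a prescribed utility given earlier choices. The marginal-contribution utility below is one standard example of such a prescribed utility; the game, the names, and the values are our own toy assumptions, not the paper's construction.

```python
def greedy_play(agents, choices, utility):
    """Agents act in a fixed sequence; each picks the action maximizing its
    prescribed utility, given the actions already taken."""
    actions = {}
    for a in agents:
        actions[a] = max(choices[a], key=lambda c: utility(a, c, actions))
    return actions

# Toy coverage game: welfare is the total value of distinct covered resources.
values = {"r1": 3.0, "r2": 2.0, "r3": 1.0}

def marginal_utility(agent, choice, actions):
    """One example of a prescribed utility: the marginal contribution of the
    choice -- its value if no earlier agent has already claimed it, else 0."""
    return 0.0 if choice in actions.values() else values[choice]

picks = greedy_play([1, 2], {1: ["r1", "r2", "r3"], 2: ["r1", "r2", "r3"]},
                    marginal_utility)
```

The paper's contribution is to design better prescribed utilities than this marginal-contribution default, and to bound the resulting guarantees via a linear program.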
In the last decade, the extreme learning machine (ELM), a learning algorithm for single-hidden-layer feedforward networks (SLFNs), has gained much attention in the machine intelligence and pattern recognition communities, with numerous successful real-world applications. The ELM structure has several advantages, such as good generalization performance with an extremely fast learning speed and low computational cost, especially when dealing with many patterns defined in a high-dimensional space. However, three major problems usually appear when using the ELM structure: (i) the dataset may have irrelevant variables, (ii) choosing the number of neurons in the hidden layer is difficult, and (iii) the singularity problem may arise. To overcome these limitations, several methods have been proposed in the regularization framework. In this paper, we propose several sparse ELM schemes in which various greedy algorithms are used for sparse approximation of the output weights vector of the ELM network; we refer to these new schemes as GA-SELM. We investigate several greedy algorithms, namely Compressive Sampling Matching Pursuit (CoSaMP), Iterative Hard Thresholding (IHT), Orthogonal Matching Pursuit (OMP) and Stagewise Orthogonal Matching Pursuit (StOMP), to obtain a regularized ELM scheme. These new ELM schemes have several benefits compared with traditional ELM schemes, such as low computational complexity, freedom from parameter tuning and avoidance of the singularity problem. Empirical studies on nine commonly used regression benchmarks demonstrate the significant advantages of the proposed approach; a comparison with the original ELM and regularized ELM schemes is also performed. (C) 2014 Elsevier Ltd. All rights reserved.
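Of the greedy algorithms listed, OMP is the simplest to sketch. Below is a minimal pure-Python version for tiny dense problems; in the ELM setting, `A` would play the role of the hidden-layer output matrix and `y` the targets, with the returned coefficients forming the sparse output weights. The helper names are ours.

```python
def solve(M, v):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n + 1):
                A[r][j] -= f * A[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][j] * x[j] for j in range(r + 1, n))) / A[r][r]
    return x

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily add the column most correlated
    with the residual, then least-squares re-fit on the whole support."""
    m, n = len(A), len(A[0])
    support, residual, coef = [], y[:], []
    for _ in range(k):
        scores = [abs(sum(A[i][j] * residual[i] for i in range(m)))
                  for j in range(n)]
        j = max((c for c in range(n) if c not in support),
                key=lambda c: scores[c])
        support.append(j)
        # Least-squares re-fit via the normal equations on the support.
        G = [[sum(A[i][a] * A[i][b] for i in range(m)) for b in support]
             for a in support]
        rhs = [sum(A[i][a] * y[i] for i in range(m)) for a in support]
        coef = solve(G, rhs)
        approx = [sum(coef[t] * A[i][support[t]] for t in range(len(support)))
                  for i in range(m)]
        residual = [y[i] - approx[i] for i in range(m)]
    return support, coef
```

On an orthonormal dictionary, OMP simply picks the columns with the largest correlations with `y`; the re-fit step is what distinguishes it from plain matching pursuit.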
The problem of restoring deleted files from a scattered set of fragments arises often in digital forensics. File fragmentation is a regular occurrence in hard disks, memory cards, and other storage media. As a result, a forensic analyst examining a disk may encounter many fragments of deleted digital files, but is unable to determine the proper sequence of fragments to rebuild the files. In this paper, we investigate the specific case where digital images are heavily fragmented and there is no file table information by which a forensic analyst can ascertain the correct fragment order to reconstruct each image. The image reassembly problem is formulated as a k-vertex disjoint graph problem and reassembly is then done by finding an optimal ordering of fragments. We provide techniques for comparing fragments and describe several algorithms for image reconstruction based on greedy heuristics. Finally, we provide experimental results showing that images can be reconstructed with high accuracy even when there are thousands of fragments and multiple images involved.
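The greedy heuristics can be sketched generically: starting from a recognizable header fragment, repeatedly append the unused fragment whose boundary best matches the current tail. The edge-matching score below is a toy stand-in for the pixel-boundary comparisons the paper describes; all names are illustrative.

```python
def reassemble(header, fragments, score):
    """Greedy reconstruction: starting from the header fragment, repeatedly
    append the unused fragment whose boundary best matches the current tail."""
    order, remaining = [header], set(fragments)
    while remaining:
        nxt = max(remaining, key=lambda f: score(order[-1], f))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Toy fragments as (left_edge, right_edge) values; the score rewards a close
# match between the tail's right edge and the candidate's left edge.
def edge_score(a, b):
    return -abs(a[1] - b[0])

chain = reassemble((0, 1), {(1, 2), (2, 3), (3, 4)}, edge_score)
```

With multiple images, the same loop runs from each header, which is where the k-vertex disjoint path formulation comes in.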
Linear systems where both the matrix and the right hand side are affected by noise or uncertainties are typically solved via a total least squares (TLS) approach. We study the sparse TLS problem, with solutions having s nonzero elements. We present a set of conditions under which the sparse least squares (LS) and TLS problems have the same support. The conditions are expressed in terms of the restricted isometry constants and the minimum nonzero principal angle between subspaces made of s columns of the matrix. We also study several greedy algorithms for solving the sparse TLS problem. Inspired by our theoretical results, we infer that an effective approach is to compute the support of the LS solution, then solve the standard TLS problem restricted to that support. Indeed, doing so with orthogonal matching pursuit (OMP) gives consistently good results. Another greedy strategy is to extend the support with the column that minimizes the smallest angle with the subspace (SAS) corresponding to the already chosen support, which is a combined LS-TLS heuristic. We show via simulations that SAS gives the best results for small problem sizes and high and medium signal-to-noise ratio (SNR), while OMP is best or nearly best in almost all cases. (C) 2012 Elsevier Inc. All rights reserved.
Orthogonal greedy algorithms are popular sparse signal reconstruction algorithms. Their principle is to select atoms one by one. A series of unconstrained least-square subproblems of gradually increasing size is solved to compute the approximation coefficients, which is efficiently performed using a fast recursive update scheme. When dealing with non-negative sparse signal reconstruction, a series of non-negative least-squares subproblems have to be solved. Fast implementation becomes tricky since each subproblem does not have a closed-form solution anymore. Recently, non-negative extensions of the classical orthogonal matching pursuit and orthogonal least squares algorithms were proposed, using slow (i.e., non-recursive) or recursive but inexact implementations. In this paper, we revisit these algorithms in a unified way. We define a class of non-negative orthogonal greedy algorithms and exhibit their structural properties. We propose a fast and exact implementation based on the active-set resolution of non-negative least-squares and exploiting warm start initializations. The algorithms are assessed in terms of accuracy and computational complexity for a sparse spike deconvolution problem. We also present an application to near-infrared spectra decomposition.
We present an O(n log n) greedy algorithm with a worst-case performance ratio of at least 4/7 for the unbounded knapsack problem, an O(n log n) greedy algorithm with a worst-case performance ratio of 2/3 for the subset-sum problem, and an O(n log n) greedy algorithm with a worst-case performance ratio of 8/7 for the partition problem. In terms of worst-case performance, these greedy algorithms improve on other known O(n log n) greedy algorithms.
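The abstract does not describe the algorithms themselves; as an illustration of the genre, here is the textbook decreasing-order greedy for subset-sum, a stand-in only (the paper's 2/3-ratio algorithm may differ in its details):

```python
def greedy_subset_sum(items, capacity):
    """Decreasing-order greedy for subset-sum: sort the items in descending
    order and take each one that still fits within the capacity."""
    total, chosen = 0, []
    for x in sorted(items, reverse=True):  # O(n log n) dominated by the sort
        if total + x <= capacity:
            total += x
            chosen.append(x)
    return chosen, total
```

Worst-case ratio analyses of the kind the paper performs bound how far `total` can fall below the optimal achievable sum over all inputs.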
In this paper we consider the binary transfer learning problem, focusing on how to select and combine sources from a large pool to yield good performance on a target task. To keep the scenario realistic, we do not assume direct access to the source data; rather, we employ the source hypotheses trained from them. We propose an efficient algorithm that selects relevant source hypotheses and feature dimensions simultaneously, building on the literature on the best subset selection problem. Our algorithm achieves state-of-the-art results on three computer vision datasets, substantially outperforming both transfer learning and popular feature selection baselines in a small-sample setting. We also present a randomized variant that achieves the same results with a computational cost independent of the number of source hypotheses and feature dimensions. Furthermore, we theoretically prove that, under reasonable assumptions on the source hypotheses, our algorithm can learn effectively from few examples. (C) 2016 Elsevier Inc. All rights reserved.