This work proposes a family of greedy algorithms to jointly reconstruct a set of vectors that are (i) nonnegative and (ii) simultaneously sparse with a shared support set. The proposed algorithms generalize previous approaches that were designed to impose these constraints individually. Similar to previous greedy algorithms for sparse recovery, the proposed algorithms iteratively identify promising support indices. In contrast to previous approaches, the support index selection procedure has been adapted to prioritize indices that are consistent with both the nonnegativity and shared support constraints. Empirical results demonstrate for the first time that the combined use of simultaneous sparsity and nonnegativity constraints can substantially improve recovery performance relative to existing greedy algorithms that impose less signal structure. (C) 2016 Elsevier B.V. All rights reserved.
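The index-selection idea described above (prioritizing indices consistent with both nonnegativity and the shared support) can be sketched as a toy scoring step. This is an illustrative sketch under our own assumptions, not the paper's exact rule; the function name and the particular score are ours.

```python
def select_index(A, residuals):
    """Score each column of A (list-of-lists, m x n) by its summed positive
    correlation with every residual: an index on the shared support should
    help all signals at once, and negative correlations (which would force
    negative coefficients) contribute nothing. Returns the best index."""
    m, n = len(A), len(A[0])

    def col_dot(j, r):
        return sum(A[i][j] * r[i] for i in range(m))

    scores = [sum(max(col_dot(j, r), 0.0) for r in residuals)
              for j in range(n)]
    return max(range(n), key=scores.__getitem__)
```

With two residuals both correlating positively with column 0 and negatively with column 1, the rule picks index 0 for every signal simultaneously.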
Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons diffusing through the cross section of tissue. Conventional DOT imaging methods iteratively invoke a forward diffusion equation solver, which makes the problem computationally expensive. These methods also fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem within the compressive sensing framework; various greedy algorithms, such as orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP), and simultaneous orthogonal matching pursuit (S-OMP), have been studied to reconstruct the change in the absorption parameter, i.e., Δα, from the boundary data. The greedy algorithms have also been validated experimentally on a paraffin wax rectangular phantom through a well-designed experimental setup. We have also studied conventional DOT methods, such as the least squares method and truncated singular value decomposition (TSVD), for comparison. One of the main features of this work is the use of a smaller number of source-detector pairs, which can facilitate the use of DOT in routine screening applications. Performance metrics such as mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) have been used to evaluate the algorithms discussed in this paper. Extensive simulation results confirm that CS-based DOT reconstruction outperforms conventional DOT imaging methods in terms of compu...
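As a minimal illustration of the greedy recovery loop shared by OMP and its variants, here is plain matching pursuit in pure Python, assuming a dictionary with unit-norm columns. OMP additionally re-solves a least-squares problem on the selected support at every step; that refinement is omitted here for brevity, so this is a simplified cousin of the algorithms studied in the abstract, not any of them exactly.

```python
def matching_pursuit(A, y, k):
    """Plain matching pursuit: k rounds of 'detect the best-correlated
    column, then update the residual'. A is a list-of-lists (m x n) with
    unit-norm columns; y is the measurement vector; k is the step count."""
    m, n = len(A), len(A[0])
    r = list(y)            # residual, initialized to the measurements
    x = [0.0] * n          # coefficient estimate
    for _ in range(k):
        dots = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        j = max(range(n), key=lambda t: abs(dots[t]))  # support detection
        x[j] += dots[j]                                # coefficient update
        for i in range(m):
            r[i] -= dots[j] * A[i][j]                  # residual update
    return x, r
```

On an orthonormal dictionary the residual is annihilated exactly after k steps, which is the easiest sanity check for an implementation.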
Motivated by applications in online labor markets, we study the problem of forming multiple teams of experts in a social network to accomplish multiple tasks that require different combinations of skills. Our goal is to maximize the total profit of tasks that are completed by these teams subject to the capacity constraints of the experts. We study both the offline and online settings of the problem. For the offline problem, we present a simple and practical algorithm that improves upon previous results in many situations. For the online problem, we design competitive deterministic and randomized online algorithms. These are complemented by some hardness results in both settings.
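A profit-ordered greedy for the offline setting can be sketched as follows. This is a simplified illustration under our own assumptions (it ignores any social-network coordination cost between team members, and all names and data shapes are ours), not the paper's algorithm.

```python
def greedy_team_formation(tasks, experts, capacity):
    """tasks: list of (profit, required_skills); experts: {name: skill set};
    capacity: {name: max number of tasks}. Take tasks in decreasing profit
    and form a team only if every required skill can be covered by an
    expert with remaining capacity. Returns (total profit, assignments)."""
    remaining = dict(capacity)
    total, assignments = 0, []
    for profit, skills in sorted(tasks, key=lambda t: -t[0]):
        team = set()
        for skill in skills:
            owner = next((e for e, s in experts.items()
                          if skill in s and remaining[e] > 0), None)
            if owner is None:
                break          # this skill cannot be covered; skip the task
            team.add(owner)
        else:                  # all skills covered: commit the team
            for e in team:
                remaining[e] -= 1
            total += profit
            assignments.append((profit, frozenset(team)))
    return total, assignments
```

With unit capacities the greedy commits the high-profit task first and correctly rejects a later task whose only qualified expert is exhausted.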
In the design of algorithms, the greedy paradigm provides a powerful tool for efficiently solving classical computational problems within the framework of procedural languages. However, expressing these algorithms within the declarative framework of logic-based languages has proven a difficult research challenge. In this paper, we extend the framework of Datalog-like languages to obtain simple declarative formulations for such problems, and propose effective implementation techniques to ensure computational complexities comparable to those of procedural formulations. These advances are achieved through the use of the choice construct, extended with preference annotations to effect the selection of alternative stable models and nondeterministic fixpoints. We show that with suitable storage structures, the differential fixpoint computation of our programs matches the complexity of procedural algorithms in classical search and optimization problems.
Five known greedy algorithms designed for the single measurement vector setting in compressed sensing and sparse approximation are extended to the multiple measurement vector scenario: Iterative Hard Thresholding (IHT), Normalized IHT (NIHT), Hard Thresholding Pursuit (HTP), Normalized HTP (NHTP), and Compressive Sampling Matching Pursuit (CoSaMP). Using the asymmetric restricted isometry property (ARIP), sufficient conditions for all five algorithms establish bounds on the discrepancy between the algorithms' output and the optimal row-sparse representation. When the initial multiple measurement vectors are jointly sparse, ARIP-based guarantees for exact recovery are also established. The algorithms are then compared via the recovery phase transition framework. The strong phase transitions describing the family of Gaussian matrices which satisfy the sufficient conditions are obtained via known bounds on the ARIP constants. The algorithms' empirical weak phase transitions are compared for various numbers of multiple measurement vectors. Finally, the performance of the algorithms is compared against a known rank aware greedy algorithm, Rank Aware Simultaneous Orthogonal Matching Pursuit + MUSIC. Simultaneous recovery variants of NIHT, NHTP, and CoSaMP all outperform the rank-aware algorithm.
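The common ingredient shared by the multiple-measurement-vector extensions of IHT, NIHT, HTP, NHTP, and CoSaMP is the row-sparse projection: keep the k rows of largest Euclidean norm and zero the rest. A minimal sketch in pure Python (the matrix is a list of lists; the function name is ours):

```python
def row_hard_threshold(X, k):
    """Row-sparse projection: keep the k rows of X with the largest
    Euclidean norm and zero out the rest (ties broken by sort order)."""
    norms = [sum(v * v for v in row) for row in X]
    keep = set(sorted(range(len(X)), key=norms.__getitem__,
                      reverse=True)[:k])
    return [row if i in keep else [0.0] * len(row)
            for i, row in enumerate(X)]
```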
This paper is devoted to the study of approximate algorithms for minimizing the total weight of attributes occurring in partial association rules. We consider mainly greedy algorithms with weights for the construction of rules. The paper contains bounds on the precision of these algorithms and bounds on the minimal weight of partial association rules, based on information obtained during the run of the greedy algorithm.
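Greedy rule construction of this kind is structurally a weighted set-cover-style greedy: each attribute "covers" the cases it separates, at the cost of its weight. The classical coverage-per-unit-weight rule can be sketched as follows; the data shapes and names are illustrative assumptions, not the paper's formulation.

```python
def greedy_weighted_cover(universe, sets, weights):
    """Pick sets greedily by best coverage-per-unit-weight until the
    universe is covered (or no remaining set makes progress).
    sets: {name: covered elements}; weights: {name: positive weight}."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(
            (s for s in sets if sets[s] & uncovered),
            key=lambda s: len(sets[s] & uncovered) / weights[s],
            default=None,
        )
        if best is None:
            break              # nothing left covers a new element
        chosen.append(best)
        uncovered -= sets[best]
    return chosen
```

Note how a heavy set is skipped when two light sets cover the same elements at a better ratio.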
We consider the problem of optimal recovery of an unknown function u in a Hilbert space V from measurements of the form ℓ_j(u), j = 1, ..., m, where the ℓ_j are known linear functionals on V. We are motivated by the setting where u is a solution to a PDE with some unknown parameters, therefore lying on a certain manifold contained in V. Following the approach adopted in [Maday, Patera, Penn, and Yano, Int. J. Numer. Methods Engrg., 102 (2015), pp. 933-965; Binev, Cohen, Dahmen, DeVore, Petrova, and Wojtaszczyk, SIAM J. Uncertainty Quantification, 5 (2017), pp. 1-29], the prior on the unknown function can be described in terms of its approximability by finite-dimensional reduced model spaces (V_n)_{n ≥ 1}, where dim(V_n) = n. Examples of such spaces include classical approximation spaces, e.g., finite elements or trigonometric polynomials, as well as reduced basis spaces which are designed to match the solution manifold more closely. The error bounds for optimal recovery under such priors are of the form μ(V_n, W_m) ε_n, where ε_n is the accuracy of the reduced model V_n and μ(V_n, W_m) is the inverse of an inf-sup constant that describes the angle between V_n and the space W_m spanned by the Riesz representers of ℓ_1, ..., ℓ_m. This paper addresses the problem of properly selecting the measurement functionals, in order to best control the stability constant μ(V_n, W_m) for a given reduced model space V_n. Assuming that the ℓ_j can be picked from a given dictionary D, we introduce and analyze greedy algorithms that perform a suboptimal selection in reasonable computational time. We study the particular case of dictionaries that consist either of point value evaluations or local averages, as idealized models for sensors in physical systems. Our theoretical analysis and greedy algorithms may therefore be used in order to optimize the position of such sensors.
We give a simple, randomized greedy algorithm for the maximum satisfiability problem (MAX SAT) that obtains a 3/4-approximation in expectation. In contrast to previously known 3/4-approximation algorithms, our algorithm does not use flows or linear programming. Hence we provide a positive answer to a question posed by Williamson in 1998 on whether such an algorithm exists. Moreover, we show that Johnson's greedy algorithm cannot guarantee a 3/4-approximation, even if the variables are processed in a random order. Thereby we partially solve a problem posed by Chen, Friesen, and Zheng in 1999. In order to explore the limitations of the greedy paradigm, we use the model of priority algorithms of Borodin, Nielsen, and Rackoff. Since our greedy algorithm works in an online scenario where the variables arrive with their set of undecided clauses, we ask whether a better approximation ratio can be obtained by further fine-tuning its random decisions. For a particular information model we show that no priority algorithm can approximate Online MAX SAT within 3/4 + ε (for any ε > 0). We further investigate the strength of deterministic greedy algorithms that may choose the variable ordering. Here we show that no adaptive priority algorithm can achieve approximation ratio 3/4. We propose two ways in which this inapproximability result can be bypassed. First, we show that if our greedy algorithm is additionally given the variable assignments of an optimal solution to the canonical LP relaxation, then we can derandomize its decisions while preserving the overall approximation guarantee. Second, we give a simple, deterministic algorithm that performs an additional pass over the input. We show that this 2-pass algorithm satisfies clauses with a total weight of at least (3/4)·OPT_LP, where OPT_LP is the objective value of the canonical linear program. Moreover, we demonstrate that our analysis is tight and detail how each pass can be implemented in linear time.
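The flavor of greedy discussed above (process variables in a fixed order, give each the locally better value) can be sketched as a simple weighted-majority rule. This is an illustrative simplification: Johnson's actual algorithm weights clauses by 2 to the power of minus their length, and this sketch is one of the variants the abstract shows cannot reach 3/4, not the paper's randomized algorithm.

```python
def johnson_greedy(clauses, n):
    """Greedy MAX SAT sketch. clauses: list of (weight, literals), where
    literal +i means x_i and -i means NOT x_i. Assign variables 1..n in
    order; each takes the value satisfying the larger total weight of
    not-yet-satisfied clauses. Returns (assignment, satisfied weight)."""
    assignment = {}
    alive = [(w, set(lits)) for w, lits in clauses]  # still unsatisfied
    for v in range(1, n + 1):
        w_true = sum(w for w, lits in alive if v in lits)
        w_false = sum(w for w, lits in alive if -v in lits)
        assignment[v] = w_true >= w_false
        lit = v if assignment[v] else -v
        alive = [(w, lits) for w, lits in alive if lit not in lits]
    satisfied = sum(w for w, _ in clauses) - sum(w for w, _ in alive)
    return assignment, satisfied
```

On the instance {(x1 ∨ x2), (¬x1), (¬x2, weight 2)} the rule breaks the tie on x1 toward True, then correctly sets x2 False, satisfying weight 3 of 4.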
Compressed sensing (CS) is a technique which uses fewer measurements than dictated by the Nyquist sampling theorem. The traditional CS with linear measurements achieves effective recovery, but it suffers from large bit consumption due to the precision required by those measurements. Then, the one-bit CS with binary measurements is proposed to save the bit budget, but it is infeasible when the energy information of signals is not available as prior knowledge. Subsequently, the hybrid CS, which combines traditional CS and one-bit CS, appears, striking a balance between the pros and cons of both types of CS. Given that one-bit CS is optimal for the direction estimation of signals under noise with a fixed bit budget and that traditional CS is able to provide residue information and estimated signals, we focus on the design of greedy algorithms, which consist of the main steps of support detection and recovered signal updates, for hybrid CS in this paper. We propose two greedy algorithms for hybrid CS, with traditional CS offering signal estimates and updated residues, which help one-bit CS detect the support iteratively. Then, we provide a theoretical analysis of the error bound between the normalized original signal and the normalized estimated signal. Numerical results demonstrate the efficacy of the proposed greedy algorithms for hybrid CS in noisy environments.
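The role one-bit measurements play in support detection can be illustrated with the classical back-projection estimate (a standard one-bit CS heuristic, not the paper's algorithm): average the sensing vectors weighted by the observed signs; the entries of largest magnitude then hint at the support. All names here are ours.

```python
def one_bit_direction(A, signs):
    """Toy one-bit CS direction estimate: average the rows of the sensing
    matrix A (list of lists, m x n) weighted by the observed sign of each
    measurement. Large-magnitude entries of the result suggest support."""
    m, n = len(A), len(A[0])
    return [sum(signs[i] * A[i][j] for i in range(m)) / m
            for j in range(n)]
```

For a signal supported on the first coordinate, the first entry of the back-projection dominates, so thresholding it recovers the support.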
We suggest a three-step strategy to find a good basis (dictionary) for nonlinear m-term approximation. The first step consists of solving an optimization problem of finding a near best basis for a given function class F, when we optimize over a collection D of bases (dictionaries). The second step is devoted to finding a universal basis (dictionary) D_u ∈ D for a given pair (F, D) of collections: F of function classes and D of bases (dictionaries). This means that D_u provides near optimal approximation for each function class in the collection F. The third step deals with constructing a theoretical algorithm that realizes near best m-term approximation with regard to D_u for function classes from F. In this paper we work this strategy out in the model case of anisotropic function classes and the set of orthogonal bases. The results are positive. We construct a natural tensor-product-wavelet-type basis and prove that it is universal. Moreover, we prove that a greedy algorithm realizes near best m-term approximation with regard to this basis for all anisotropic function classes.
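For an orthonormal basis, the greedy (thresholding) algorithm behind near best m-term approximation is simply: keep the m coefficients of largest magnitude. A minimal sketch (the function name is ours):

```python
def greedy_m_term(coeffs, m):
    """Thresholding greedy m-term approximation with respect to an
    orthonormal basis: keep the m largest-magnitude coefficients and
    zero the rest; the error is the norm of what is dropped."""
    keep = set(sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]),
                      reverse=True)[:m])
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]
```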