Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons as they diffuse through the cross section of tissue. Conventional DOT imaging methods iteratively compute the solution of a forward diffusion equation solver, which makes the problem computationally expensive; these methods also fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem within a compressive sensing framework; various greedy algorithms such as orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP), and simultaneous orthogonal matching pursuit (S-OMP) have been studied to reconstruct the change in the absorption parameter, i.e., Delta alpha, from the boundary data. The greedy algorithms have also been validated experimentally on a paraffin wax rectangular phantom through a well-designed experimental setup. We have also studied conventional DOT methods, namely the least squares method and truncated singular value decomposition (TSVD), for comparison. One of the main features of this work is the use of fewer source-detector pairs, which can facilitate the use of DOT in routine screening applications. Performance metrics such as mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) have been used to evaluate the algorithms mentioned in this paper. Extensive simulation results confirm that CS-based DOT reconstruction outperforms the conventional DOT imaging methods in terms of compu…
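All of the matching-pursuit variants named above share a common greedy template. As a point of reference, here is a minimal sketch of plain OMP for a noiseless system y = A x (NumPy, illustrative sizes and names; this is not the paper's implementation):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit for a noiseless k-sparse system y = A x.

    Each iteration greedily adds the column of A most correlated with the
    current residual, then re-solves a least-squares problem restricted to
    the chosen support.
    """
    n = A.shape[1]
    support = []
    x_hat = np.zeros(n)
    residual = y.copy()
    for _ in range(k):
        scores = np.abs(A.T @ residual)
        scores[support] = -np.inf        # never re-pick a chosen atom
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat = np.zeros(n)
        x_hat[support] = coef
        residual = y - A @ x_hat
    return x_hat, support
```

CoSaMP, StOMP, ROMP, and S-OMP differ mainly in how many atoms they admit per iteration and how they prune the support afterwards.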
We consider the problem of optimal recovery of an unknown function u in a Hilbert space V from measurements of the form l(j)(u), j = 1, . . . , m, where the l(j) are known linear functionals on V. We are motivated by the setting where u is a solution to a PDE with some unknown parameters, therefore lying on a certain manifold contained in V. Following the approach adopted in [Maday, Patera, Penn and Yano, Int. J. Numer. Methods Engrg., 102 (2015), pp. 933-965; Binev, Cohen, Dahmen, DeVore, Petrova, and Wojtaszczyk, SIAM J. Uncertainty Quantification, 5 (2017), pp. 1-29], the prior on the unknown function can be described in terms of its approximability by finite-dimensional reduced model spaces (V-n)(n >= 1), where dim(V-n) = n. Examples of such spaces include classical approximation spaces, e.g., finite elements or trigonometric polynomials, as well as reduced basis spaces which are designed to match the solution manifold more closely. The error bounds for optimal recovery under such priors are of the form mu(V-n, W-m)epsilon(n), where epsilon(n) is the accuracy of the reduced model V-n and mu(V-n, W-m) is the inverse of an inf-sup constant that describes the angle between V-n and the space W-m spanned by the Riesz representers of (l(1), . . . , l(m)). This paper addresses the problem of properly selecting the measurement functionals in order to best control the stability constant mu(V-n, W-m) for a given reduced model space V-n. Assuming that the l(j) can be picked from a given dictionary D, we introduce and analyze greedy algorithms that perform a suboptimal selection in reasonable computational time. We study the particular case of dictionaries that consist either of point value evaluations or local averages, as idealized models for sensors in physical systems. Our theoretical analysis and greedy algorithms may therefore be used in order to optimize the position of such sensors.
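As a toy finite-dimensional illustration of this kind of selection (R^d in place of V, matrix rows in place of the dictionary of Riesz representers; a hedged sketch, not the paper's algorithm), one can greedily grow the measurement space so that its cross-Gramian with the reduced space stays well conditioned:

```python
import numpy as np

def greedy_measurements(Vn, dictionary, m):
    """Greedily pick m rows of `dictionary` to keep the inf-sup constant
    beta(V_n, W_m) large in a toy R^d model.

    beta is computed as the smallest singular value of Q_W^T Q_V, where
    Q_V, Q_W are orthonormal bases of the reduced space and the selected
    measurement space.  For fewer measurements than dim(V_n) the true
    inf-sup constant vanishes; the smallest partial singular value is
    used here as a greedy surrogate.
    """
    Qv, _ = np.linalg.qr(Vn)          # orthonormal basis of V_n
    chosen = []
    best_beta = 0.0
    for _ in range(m):
        best_j, best_beta = None, -1.0
        for j in range(len(dictionary)):
            if j in chosen:
                continue
            W = np.array([dictionary[i] for i in chosen + [j]]).T
            Qw, _ = np.linalg.qr(W)
            beta = np.linalg.svd(Qw.T @ Qv, compute_uv=False).min()
            if beta > best_beta:
                best_beta, best_j = beta, j
        chosen.append(best_j)
    return chosen, best_beta
```

With the dictionary set to the coordinate vectors of R^d, the selected rows play the role of the idealized point-value sensors mentioned in the abstract.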
ISBN (print): 9781450356497
We study the problem of fairly allocating a set of indivisible goods among agents with additive valuations. The extent of fairness of an allocation is measured by its Nash social welfare, which is the geometric mean of the valuations of the agents for their bundles. While the problem of maximizing Nash social welfare is known to be APX-hard in general, we study the effectiveness of simple, greedy algorithms in solving this problem in two interesting special cases. First, we show that a simple, greedy algorithm provides a 1.061-approximation guarantee when agents have identical valuations, even though the problem of maximizing Nash social welfare remains NP-hard for this setting. Second, we show that when agents have binary valuations over the goods, an exact solution (i.e., a Nash optimal allocation) can be found in polynomial time via a greedy algorithm. Our results in the binary setting extend to provide novel, exact algorithms for optimizing Nash social welfare under concave valuations. Notably, for the above-mentioned scenarios, our techniques provide a simple alternative to several of the existing, more sophisticated techniques for this problem such as constructing equilibria of Fisher markets or using real stable polynomials.
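The greedy rule for identical valuations is simple enough to state in a few lines: repeatedly give the most valuable unallocated good to an agent whose bundle currently has the smallest total value. A sketch, assuming identical additive valuations given as a plain list of values:

```python
def greedy_nash_identical(values, n_agents):
    """Allocate goods (indexed by position in `values`) greedily:
    largest-value good first, each to a currently poorest agent."""
    bundles = [[] for _ in range(n_agents)]
    totals = [0.0] * n_agents
    for g in sorted(range(len(values)), key=lambda g: -values[g]):
        i = min(range(n_agents), key=lambda a: totals[a])
        bundles[i].append(g)
        totals[i] += values[g]
    return bundles, totals

def nash_welfare(totals):
    """Geometric mean of the agents' bundle values."""
    prod = 1.0
    for t in totals:
        prod *= t
    return prod ** (1.0 / len(totals))
```

For identical valuations this is exactly the kind of balancing heuristic the abstract's 1.061-approximation guarantee is about; the binary-valuation case uses a different greedy.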
This paper presents the evolution of GreedEx, an interactive application for learning greedy algorithms. It describes the four versions currently available: the original version for computers, two versions for iPad, and another version for smartphones. In addition, it describes the evaluation performed for each version, which has served to justify this evolution.
ISBN (digital): 9781728113982
ISBN (print): 9781728113999
In this paper, we consider sensor selection for (binary) hypothesis testing. Given a pair of hypotheses and a set of candidate sensors to measure (detect) the signals generated under the hypotheses, we aim to select a subset of the sensors (under a budget constraint) that yields the optimal signal detection performance. In particular, we consider the Neyman-Pearson detector based on measurements of the chosen sensors. The goal is to minimize (resp., maximize) the miss probability (resp., detection probability) of the Neyman-Pearson detector, while satisfying the budget constraint. We first show that the sensor selection for the Neyman-Pearson detector problem is NP-hard. We then characterize the performance of greedy algorithms for solving the sensor selection problem when we consider a surrogate to the miss probability as an optimization metric, which is based on the Kullback-Leibler distance. By leveraging the notion of submodularity ratio, we provide a bound on the performance of greedy algorithms.
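Because the Kullback-Leibler distance is additive over independent sensors, a natural greedy surrogate under a budget is a best-ratio rule. The toy sketch below (unit-variance Gaussian sensors with hypothetical costs; an illustration of the general idea, not necessarily the paper's exact greedy) selects sensors by KL gain per unit cost:

```python
def gaussian_kl(mu, sigma=1.0):
    """KL( N(mu, sigma^2) || N(0, sigma^2) ): the per-sensor contribution
    to the KL distance between the two hypotheses."""
    return mu * mu / (2.0 * sigma * sigma)

def greedy_sensor_selection(kl, cost, budget):
    """Greedily add the sensor with the best KL-per-cost ratio that still
    fits within the remaining budget."""
    remaining, chosen = budget, []
    order = sorted(range(len(kl)), key=lambda i: kl[i] / cost[i],
                   reverse=True)
    for i in order:
        if cost[i] <= remaining:
            chosen.append(i)
            remaining -= cost[i]
    return chosen, sum(kl[i] for i in chosen)
```

The submodularity-ratio analysis in the abstract is what bounds how far such a greedy choice can fall short of the budget-constrained optimum.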
ISBN (print): 9781538670484
In this paper, we improve greedy algorithms to recover sparse signals with complex Gaussian distributed non-zero elements, when the probability of sparsity pattern is known a priori. By exploiting this prior probability, we derive a correction function that minimizes the probability of incorrect selection of a support index at each iteration of the orthogonal matching pursuit (OMP). In particular, we employ the order statistics of exponential distribution to create the correction function. Simulation results demonstrate that the correction function significantly improves the recovery performance of OMP and subspace pursuit (SP) for random Gaussian and Bernoulli measurement matrices.
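The paper's correction function comes from order statistics of the exponential distribution and is not reproduced here. Purely to illustrate the general idea of exploiting a support prior, the sketch below folds a (hypothetical) per-index prior probability into OMP's selection step by reweighting the correlations:

```python
import numpy as np

def prior_weighted_omp(A, y, k, prior):
    """OMP whose greedy selection is biased by a prior probability on the
    support: correlations are multiplied elementwise by `prior` before the
    argmax.  (Generic illustration, not the paper's correction function.)
    """
    n = A.shape[1]
    support, x_hat = [], np.zeros(n)
    residual = y.copy()
    for _ in range(k):
        score = np.abs(A.T @ residual) * prior
        score[support] = -np.inf         # do not re-pick chosen indices
        support.append(int(np.argmax(score)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat = np.zeros(n)
        x_hat[support] = coef
        residual = y - A @ x_hat
    return x_hat, support
```

With a uniform prior this reduces to plain OMP; a sharper prior steers the first, error-prone selections toward likely support indices.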
ISBN (print): 9783319729268; 9783319729251
The paper examines four weak relaxed greedy algorithms for finding approximate sparse solutions of convex optimization problems in a Banach space. First, we present a review of primal results on the convergence rate of the algorithms based on the geometric properties of the objective function. Then, using the ideas of [16], we define the duality gap and prove that the duality gap is a certificate for the current approximation to the optimal solution. Finally, we find estimates of the dependence of the duality gap values on the number of iterations for weak greedy algorithms.
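The certificate property can be made concrete in the standard Frank-Wolfe-type setting (our notation, possibly simpler than the paper's Banach-space framework). For a convex objective f minimized over the convex hull of the dictionary D, the duality gap at the iterate x_k is

```latex
\delta(x_k) \;=\; \max_{g \in \operatorname{conv}(D)}
                  \langle \nabla f(x_k),\, x_k - g \rangle
            \;=\; \max_{g \in D}
                  \langle \nabla f(x_k),\, x_k - g \rangle ,
```

since a linear functional attains its maximum over a convex hull at an extreme point. Convexity then gives f(x_k) − f(x*) ≤ ⟨∇f(x_k), x_k − x*⟩ ≤ δ(x_k): the computable gap upper-bounds the unknown optimality error, which is exactly the sense in which it certifies the current approximation.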
ISBN:
(纸本)9781450356183
Greedy algorithms provide a fast and often also effective solution to many combinatorial optimization problems. However, it is well known that they sometimes lead to low-quality solutions on certain instances. In this paper, we explore the use of randomness in greedy algorithms for the minimum vertex cover and dominating set problems and compare the resulting performance against their deterministic counterparts. Our algorithms are based on a parameter that allows us to explore the spectrum between uniform random and deterministic greedy selection in the steps of the algorithm, and our theoretical and experimental investigations point out the benefits of incorporating randomness into greedy algorithms for the two considered combinatorial optimization problems.
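One way to realize such a parameterized spectrum for vertex cover (a hedged sketch under our own interpolation rule, not necessarily the paper's exact scheme): with probability p take the deterministic max-degree step, otherwise take a uniformly random endpoint of a random uncovered edge.

```python
import random

def randomized_greedy_cover(edges, p, seed=0):
    """Vertex cover by a parameterized greedy.

    While uncovered edges remain: with probability p add a vertex covering
    the most uncovered edges (deterministic greedy step); with probability
    1 - p add a uniformly random endpoint of a random uncovered edge.
    """
    rng = random.Random(seed)
    uncovered = set(edges)
    cover = set()
    while uncovered:
        if rng.random() < p:
            degree = {}
            for e in uncovered:
                for v in e:
                    degree[v] = degree.get(v, 0) + 1
            v = max(degree, key=degree.get)
        else:
            u, w = rng.choice(sorted(uncovered))
            v = rng.choice((u, w))
        cover.add(v)
        uncovered = {e for e in uncovered if v not in e}
    return cover
```

Setting p = 1 recovers the classical deterministic greedy; p = 0 is a fully random endpoint rule; intermediate p interpolates between the two, which is the kind of spectrum the abstract investigates.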
We give a simple, randomized greedy algorithm for the maximum satisfiability problem (MAX SAT) that obtains a 3/4-approximation in expectation. In contrast to previously known 3/4-approximation algorithms, our algorithm does not use flows or linear programming. Hence we provide a positive answer to a question posed by Williamson in 1998 on whether such an algorithm exists. Moreover, we show that Johnson's greedy algorithm cannot guarantee a 3/4-approximation, even if the variables are processed in a random order. Thereby we partially solve a problem posed by Chen, Friesen, and Zheng in 1999. In order to explore the limitations of the greedy paradigm, we use the model of priority algorithms of Borodin, Nielsen, and Rackoff. Since our greedy algorithm works in an online scenario where the variables arrive with their set of undecided clauses, we wonder if a better approximation ratio can be obtained by further fine-tuning its random decisions. For a particular information model we show that no priority algorithm can approximate Online MAX SAT within 3/4 + epsilon (for any epsilon > 0). We further investigate the strength of deterministic greedy algorithms that may choose the variable ordering. Here we show that no adaptive priority algorithm can achieve approximation ratio 3/4. We propose two ways in which this inapproximability result can be bypassed. First we show that if our greedy algorithm is additionally given the variable assignments of an optimal solution to the canonical LP relaxation, then we can derandomize its decisions while preserving the overall approximation guarantee. Second we give a simple, deterministic algorithm that performs an additional pass over the input. We show that this 2-pass algorithm satisfies clauses with a total weight of at least 3/4 OPTLP, where OPTLP is the objective value of the canonical linear program. Moreover, we demonstrate that our analysis is tight and detail how each pass can be implemented in linear time.
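Johnson's algorithm, whose 3/4 barrier the abstract discusses, is itself short in the modified-weights formulation: each live clause C carries weight w(C)·2^(−|C|), where |C| counts its still-undecided literals; each variable in turn is set to the side whose clauses carry more of this weight; satisfied clauses are dropped, and a clause that loses a literal doubles in weight. A compact sketch (unit clause weights, DIMACS-style signed-integer literals):

```python
def johnson_maxsat(clauses, n_vars):
    """Johnson's greedy algorithm for unweighted MAX SAT.

    Clauses are lists of nonzero ints: +v means variable v, -v its
    negation.  Returns the assignment and the number of satisfied clauses.
    """
    # Live clauses: (remaining literals, modified weight 2^(-|C|)).
    live = [(set(c), 1.0 / (2 ** len(c))) for c in clauses]
    assignment = {}
    for v in range(1, n_vars + 1):
        w_true = sum(w for lits, w in live if v in lits)
        w_false = sum(w for lits, w in live if -v in lits)
        val = w_true >= w_false            # ties broken toward True
        assignment[v] = val
        sat_lit, unsat_lit = (v, -v) if val else (-v, v)
        updated = []
        for lits, w in live:
            if sat_lit in lits:
                continue                   # clause satisfied, drop it
            if unsat_lit in lits:
                lits = lits - {unsat_lit}
                w *= 2.0                   # one fewer undecided literal
            updated.append((lits, w))
        live = updated
    satisfied = sum(
        1 for c in clauses
        if any((lit > 0) == assignment[abs(lit)] for lit in c)
    )
    return assignment, satisfied
```

This deterministic rule guarantees a 2/3 fraction of the clauses; the abstract's point is that no ordering of the variables pushes it to 3/4, which is why the paper's extra randomization and 2-pass variants are needed.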