ISBN (digital): 9781510629707
ISBN (print): 9781510629707
Parameter tuning is an important but often overlooked step in signal recovery problems. For instance, the regularization parameter in compressed sensing dictates the sparsity of the approximate signal reconstruction. More recently, there has been evidence that non-convex ℓ_p quasi-norm minimization, where 0 < p < 1, improves reconstruction over existing models that use convex regularization. However, these methods rely on good estimates not only of p (the choice of norm) but also of the penalty regularization parameter. This paper describes a method for choosing suitable parameters. Our method involves creating a score that measures the effectiveness of a choice of parameters by partially reconstructing the signal. We then efficiently search through different combinations of parameters using a pattern-search approach that exploits parallelism and asynchronicity to find the pair with the optimal score. We demonstrate the efficiency and accuracy of the proposed method through numerical experiments.
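To make the search loop concrete, here is a minimal sketch (not from the paper) of scoring a (p, λ) pair and polling it with a serial coordinate pattern search. The names `score`, `reconstruct`, the poll directions, and the bounds 0 < p < 1, λ > 0 are illustrative assumptions; the paper's actual method evaluates poll points asynchronously in parallel.

```python
import numpy as np

def score(p, lam, A, y, reconstruct):
    """Score a (p, lam) pair by partially reconstructing the signal.

    `reconstruct` is a hypothetical stand-in for a few iterations of an
    l_p-regularized solver; a lower residual means a better parameter pair.
    """
    x_hat = reconstruct(A, y, p=p, lam=lam, max_iter=20)  # cheap partial solve
    return float(np.linalg.norm(A @ x_hat - y))

def pattern_search(f, x0, step=0.25, tol=1e-3):
    """Serial coordinate pattern search over (p, lam).

    Polls the four axis directions and shrinks the step when no trial point
    improves. The paper's method instead evaluates the polls asynchronously
    in parallel, accepting the first improving point that returns.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(*x)
    directions = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
    while step > tol:
        improved = False
        for d in directions:
            trial = x + step * d
            if 0.0 < trial[0] < 1.0 and trial[1] > 0.0:  # keep 0 < p < 1, lam > 0
                ft = f(*trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step /= 2  # unsuccessful poll: refine the mesh and retry
    return x, fx

# Toy usage with a smooth surrogate in place of the real reconstruction score:
best, val = pattern_search(lambda p, lam: (p - 0.5) ** 2 + (lam - 1.0) ** 2,
                           x0=(0.8, 2.0))
```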
We describe an asynchronous parallel derivative-free algorithm for linearly constrained optimization. Generating set search (GSS) is the basis of our method. At each iteration, a GSS algorithm computes a set of search directions and corresponding trial points and then evaluates the objective function value at each trial point. Asynchronous versions of the algorithm have been developed in the unconstrained and bound-constrained cases, allowing the iteration to continue (and new trial points to be generated and evaluated) as soon as any other trial point completes. This enables better utilization of parallel resources and a reduction in overall run time, especially for problems where the objective function takes minutes or hours to compute. For linearly constrained GSS, the convergence theory requires that the set of search directions conforms to the nearby boundary. This creates an immediate obstacle for asynchronous methods, where the notion of "nearby" is not well defined. In this paper, we develop an asynchronous linearly constrained GSS method that overcomes this difficulty and maintains the original convergence theory. We describe our implementation in detail, including how to avoid function evaluations by caching function values and using approximate lookups. We test our implementation on every CUTEr test problem with general linear constraints and up to 1000 variables. Without tuning to individual problems, our implementation solved 95% of the test problems with 10 or fewer variables, 73% of the problems with 11-100 variables, and nearly half of the problems with 100-1000 variables. To the best of our knowledge, these are the best results achieved to date with a derivative-free method for linearly constrained optimization. Our asynchronous parallel implementation is freely available as part of the APPSPACK software.
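For intuition, the sketch below shows a synchronous, unconstrained GSS iteration with a function-value cache. It is an assumption-laden simplification, not the paper's algorithm: the asynchronous evaluation and the direction sets that conform to nearby linear constraints (the paper's actual contributions) are not reproduced here, and the rounding-based cache only crudely approximates the paper's approximate lookups.

```python
import numpy as np

def gss(f, x0, step=1.0, tol=1e-6):
    """Synchronous generating set search with a function-value cache."""
    n = len(x0)
    # The coordinate directions +/- e_i form a generating set for R^n; the
    # linearly constrained method must instead make the directions conform
    # to any nearby constraint boundary.
    directions = np.vstack([np.eye(n), -np.eye(n)])
    x = np.asarray(x0, dtype=float)
    cache = {}

    def eval_cached(pt):
        # Round trial points so re-generated points hit the cache (a crude
        # stand-in for the paper's approximate lookups).
        key = tuple(np.round(pt, 8))
        if key not in cache:
            cache[key] = f(pt)  # only pay for genuinely new evaluations
        return cache[key]

    fx = eval_cached(x)
    while step > tol:
        # Generate trial points along every search direction and evaluate
        # them all; the asynchronous variant would not wait for all of them.
        trials = [x + step * d for d in directions]
        values = [eval_cached(t) for t in trials]
        best = int(np.argmin(values))
        if values[best] < fx:
            x, fx = trials[best], values[best]  # successful poll: move
        else:
            step /= 2                           # unsuccessful poll: contract
    return x, fx

# Toy usage on a quadratic:
x_star, f_star = gss(lambda x: float(np.sum((x - 3.0) ** 2)), x0=np.zeros(2))
```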