Preemptive scheduling problems on parallel machines are classic problems. Given the goal of minimizing the makespan, they are polynomially solvable even for the most general model of unrelated machines. In these problems, a set of jobs is to be assigned to run on a set of m machines. A job can be split into parts arbitrarily, and these parts are to be assigned to time slots on the machines without parallelism, that is, for every job, at most one of its parts can be processed at any time. Motivated by sensitivity analysis and online algorithms, we investigate the problem of designing robust algorithms for constructing preemptive schedules. Robust algorithms receive one piece of input at a time. They may change a small portion of the solution as an additional part of the input is revealed. The capacity of change is based on the size of the new piece of input. For scheduling problems, the supremum ratio between the total size of the jobs (or parts of jobs) which may be re-scheduled upon the arrival of a new job j, and the size of j, is called the migration factor. We design a strongly optimal algorithm with migration factor 1 - 1/m for identical machines. Strongly optimal algorithms avoid idle time and create solutions where the (non-increasingly) sorted vector of completion times of the machines is lexicographically minimal. In the case of identical machines this results not only in makespan minimization, but the created solution is also optimal with respect to any ℓ_p norm (for p > 1). We show that an algorithm with a smaller migration factor cannot be optimal with respect to the makespan or any other ℓ_p norm, thus the result is best possible in this sense as well. We further show that neither uniformly related machines nor identical machines with restricted assignment admit an optimal algorithm with a constant migration factor. This lower bound holds both for makespan minimization and for any ℓ_p norm. Finally, we analyze the case of two machines and show that in this…
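As background for the makespan objective, the optimal preemptive makespan on m identical machines equals max(p_max, Σp_j/m), and McNaughton's classic wrap-around rule attains it. The minimal Python sketch below illustrates how jobs are split into parts without self-parallelism; it is background only, not the paper's robust bounded-migration algorithm:

```python
# McNaughton's wrap-around rule: an optimal preemptive schedule on
# m identical machines. Illustrative background, not the robust
# migration-factor algorithm of the paper.

def mcnaughton(jobs, m):
    """Return the makespan and per-machine lists of (job, start, end)."""
    T = max(max(jobs), sum(jobs) / m)   # optimal preemptive makespan
    schedule = [[] for _ in range(m)]
    machine, t = 0, 0.0
    for j, p in enumerate(jobs):
        remaining = p
        while remaining > 1e-9:
            piece = min(remaining, T - t)
            schedule[machine].append((j, t, t + piece))
            remaining -= piece
            t += piece
            if T - t < 1e-9:            # machine is full: wrap around
                machine, t = machine + 1, 0.0
    return T, schedule

T, sched = mcnaughton([5, 4, 3, 3, 1], m=3)
print(T)                                # max(5, 16/3) = 5.333...
for i, mach in enumerate(sched):
    print(i, mach)
```

No job runs in parallel with itself because every job size is at most T, so the two pieces of a wrapped job occupy disjoint time windows.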
Given a set S of n points in the plane, an ε-strongly convex δ-hull of S is defined as a convex polygon P with the vertices taken from S such that no point of S lies farther than δ outside P and such that even if the vertices of P are perturbed by as much as ε, P remains convex. This paper presents the first parallel robust method for this generalized convex hull problem (note that the convex hull of S is the 0-strongly convex 0-hull of S). We show that an ε-strongly convex O(ε + β)-hull of S can be constructed in O(log³ n) time using n processors with imprecise computations, where β is the error unit of the primitive operations. This result also implies an improved sequential algorithm. Our algorithm consists of two parts: (1) computing a convex O(ε + β)-hull of n points, in O(log³ n) time using n processors, and (2) constructing an ε-strongly convex O(ε + β)-hull of a convex polygon with n vertices, in O(log² n) time with n processors. We also find an approximate bridge of two sets with n points each, in O(log² n) time using n processors, which we use as a subroutine. All these algorithms are fundamental and have their own applications. The parallel computational model in this paper is the EREW PRAM. (C) 2002 Elsevier Science B.V. All rights reserved.
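The δ-hull condition ("no point of S lies farther than δ outside P") can be checked directly from the definition. Below is a minimal Python sketch of such a check for a convex counterclockwise polygon; the helper names are ours and the code is only illustrative, not the parallel EREW PRAM algorithm of the paper:

```python
# Verifying the delta-hull property from its definition: no point of S
# may lie farther than delta outside the convex CCW polygon P.
import math

def dist_to_segment(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def outside_distance(p, poly):
    """0 if p is inside the convex CCW polygon, else its distance to it."""
    edges = list(zip(poly, poly[1:] + poly[:1]))
    inside = all(
        (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax) >= 0
        for (ax, ay), (bx, by) in edges
    )
    if inside:
        return 0.0
    return min(dist_to_segment(p, a, b) for a, b in edges)

def is_delta_hull(points, poly, delta):
    return all(outside_distance(p, poly) <= delta for p in points)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(is_delta_hull([(0.5, 0.5), (1.05, 0.5)], square, delta=0.1))  # True
```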
We discuss the complexity of robust symbolic algorithms solving a significant class of zero-dimensional square polynomial systems with rational coefficients over the complex numbers, called generalized Pham systems, which represent the class of zero-dimensional homogeneous complete-intersection systems with "no points at infinity". Our notion of robustness models the behavior of all known universal methods for solving (parametric) polynomial systems, avoiding unnecessary branchings and allowing the solution of certain limit problems. We first show that any robust algorithm solving generalized Pham systems has complexity at least polynomial in the Bézout number of the system under consideration. Then we exhibit a robust probabilistic algorithm which solves generalized Pham systems with quadratic complexity in the Bézout number of the input system. The algorithm consists of a series of homotopies deforming the input system into a system which is "easy to solve", together with a "projection algorithm" which allows the solutions of the known instance to be moved to the solutions of an arbitrary instance along the parameter space.
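To make the homotopy idea concrete, the toy sketch below deforms an easy one-variable Pham-type instance x³ - 1 into the target x³ - 2 and tracks all three (Bézout-many) roots with Newton corrections. It illustrates only the deformation strategy, not the paper's robust algorithm over parameter spaces:

```python
# Toy homotopy continuation: track the roots of an easy-to-solve
# instance g(x) = x^3 - 1 as it deforms into f(x) = x^3 - 2.
import cmath

# H(x, t) = (1 - t)*(x**3 - 1) + t*(x**3 - 2) = x**3 - 1 - t
H  = lambda x, t: x**3 - 1 - t
Hx = lambda x, t: 3 * x**2          # d/dx of H

# Start at the three known roots of the easy system (t = 0):
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

steps = 100
for s in range(1, steps + 1):
    t = s / steps
    for i, x in enumerate(roots):
        for _ in range(5):          # Newton corrections at each t
            x = x - H(x, t) / Hx(x, t)
        roots[i] = x

print(roots)                        # the three cube roots of 2
```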
In the online version of the classic k-means clustering problem, the points of a dataset u(1), u(2), ... arrive one after another in an arbitrary order. When the algorithm sees a point, it should either add it to the set of centers or let go of the point. Once added, a center cannot be removed. The goal is to end up with a set of roughly k centers, while competing in k-means objective value with the best set of k centers in hindsight. Online versions of k-means and other clustering problems have received significant attention in the literature. The key idea in many algorithms is that of adaptive sampling: when a new point arrives, it is added to the set of centers with a probability that depends on its distance to the centers chosen so far. Our contributions are as follows: 1. We give a modified adaptive sampling procedure that obtains a better approximation ratio (improving it from logarithmic to constant). 2. Our main result is to show how to perform adaptive sampling when the data has outliers (>> k points that are potentially arbitrarily far from the actual data, thus rendering distance-based sampling prone to picking the outliers). 3. We also discuss lower bounds for k-means clustering in an online setting.
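The following minimal Python sketch illustrates the adaptive sampling rule described above: a new point becomes a center with probability proportional to its squared distance to the nearest existing center. The cost threshold r is a hypothetical tuning parameter, not the paper's exact rule:

```python
# Minimal sketch of online adaptive sampling for k-means.
import random

def online_adaptive_sampling(stream, r):
    centers = []
    for u in stream:
        if not centers:
            centers.append(u)
            continue
        d2 = min(sum((a - b) ** 2 for a, b in zip(u, c)) for c in centers)
        if random.random() < min(1.0, d2 / r):
            centers.append(u)       # once added, never removed
    return centers

random.seed(0)
stream = [(random.gauss(cx, 0.1), random.gauss(cy, 0.1))
          for cx, cy in [(0, 0), (5, 5), (0, 5)] for _ in range(50)]
random.shuffle(stream)
print(len(online_adaptive_sampling(stream, r=1.0)))
```

Points near existing centers have small d2 and are almost surely let go, while points opening up a new region are likely to be kept, which is what lets the center count stay near k.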
Data in statistical signal processing problems is often inherently matrix-valued, and a natural first step in working with such data is to impose a model with structure that captures the distinctive features of the underlying data. Under the right model, one can design algorithms that can reliably tease weak signals out of highly corrupted data. In this thesis, we study two important classes of matrix structure: low-rankness and sparsity. In particular, we focus on robust principal component analysis (PCA) models that decompose data into the sum of low-rank and sparse (in an appropriate sense) components. Robust PCA models are popular because they are useful models for data in practice and because efficient algorithms exist for solving them.
This thesis focuses on developing new robust PCA algorithms that advance the state-of-the-art in several key respects. First, we develop a theoretical understanding of the effect of outliers on PCA and the extent to which one can reliably reject outliers from corrupted data using thresholding schemes. We apply these insights and other recent results from low-rank matrix estimation to design robust PCA algorithms with improved low-rank models that are well-suited for processing highly corrupted data. On the sparse modeling front, we use sparse signal models like spatial continuity and dictionary learning to develop new methods with important adaptive representational capabilities. We also propose efficient algorithms for implementing our methods, including an extension of our dictionary learning algorithms to the online or sequential data setting. The underlying theme of our work is to combine ideas from low-rank and sparse modeling in novel ways to design robust algorithms that produce accurate reconstructions from highly undersampled or corrupted data. We consider a variety of application domains for our methods, including foreground-background separation, photometric stereo, and inverse problems such as video inpainting and d…
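To make the low-rank-plus-sparse model concrete, the sketch below alternates singular-value thresholding (for the low-rank part) and entrywise soft-thresholding (for the sparse part). It is an illustrative proximal scheme under assumed parameters tau and lam, not one of the thesis's algorithms:

```python
# Illustrative low-rank + sparse decomposition, M ~ L + S, by
# alternating singular-value thresholding (L) and entrywise
# soft-thresholding (S).
import numpy as np

def soft(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def robust_pca(M, tau=1.0, lam=0.1, iters=200):
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * soft(sig, tau)) @ Vt   # singular-value thresholding
        S = soft(M - L, lam)            # sparse outlier estimate
    return L, S

rng = np.random.default_rng(0)
ground = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))
spikes = np.where(rng.random((50, 50)) < 0.05, 10.0, 0.0)
L, S = robust_pca(ground + spikes)
print(np.linalg.matrix_rank(L, tol=1e-6))   # inspect the recovered rank
```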
Preemptive scheduling problems of minimizing the makespan on parallel machines are basic problems. Motivated by sensitivity analysis and online algorithms, we investigate the problem of designing robust algorithms that are faced with the input one job at a time, but unlike online algorithms, they are allowed to modify a small portion of the solution whenever a new job is released. For the setting of uniformly related machines, it is known that such algorithms cannot maintain an optimal solution (for general input sequences). Thus, we consider the semi-online case where the job sequence has non-increasing job sizes, and we design an optimal algorithm with migration factor 1. Previously, such a result was known only for the case of identical machines, where it holds for an arbitrary input sequence. (C) 2021 Elsevier B.V. All rights reserved.
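For reference, the optimal preemptive makespan on uniformly related machines, which such a migration-factor-1 algorithm must maintain after every arrival, is given by the classic bound comparing the k largest jobs with the k fastest machines. A minimal sketch of computing that value:

```python
# Optimal preemptive makespan on uniformly related machines:
# the maximum, over k < m, of (sum of k largest jobs)/(sum of k
# fastest speeds), and of total work over total speed.

def opt_preemptive_makespan(sizes, speeds):
    p = sorted(sizes, reverse=True)     # job sizes, largest first
    s = sorted(speeds, reverse=True)    # machine speeds, fastest first
    m = len(s)
    bound = sum(p) / sum(s)             # total work vs. total speed
    top_p = top_s = 0.0
    for k in range(min(m - 1, len(p))):
        top_p += p[k]
        top_s += s[k]
        bound = max(bound, top_p / top_s)
    return bound

print(opt_preemptive_makespan([8, 6, 4, 2], [3, 2, 1]))  # 20/6 = 3.333...
```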
In this paper, we investigate the robust estimation of the principal eigenvectors of multivariate observations in an adaptive scheme. Indeed, this problem, known as PCA (Principal Component Analysis), already has many efficient solutions in batch settings or in non-impulsive noise environments. Our objective is to develop fast adaptive methods for PCA when the observed data is corrupted by impulsive noise or outliers, or in the case of missing data. Hence, different low-complexity algorithms are introduced for the estimation and tracking of the desired principal components in the three previous scenarios. Their effectiveness is illustrated via simulation experiments and a comparative study. (C) 2022 Elsevier Inc. All rights reserved.
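As an illustration of adaptive robust eigenvector tracking, the sketch below runs Oja's rule for the first principal component with a Huber-style weight that downweights impulsive samples; the clipping level c is an assumed choice, not one of the paper's algorithms:

```python
# Sketch of robust adaptive PCA: Oja's rule for tracking the first
# principal eigenvector, with impulsive samples downweighted.
import numpy as np

def robust_oja(stream, dim, eta=0.05, c=3.0):
    rng = np.random.default_rng(1)
    w = rng.standard_normal(dim)
    w /= np.linalg.norm(w)
    for x in stream:
        r = np.linalg.norm(x)
        weight = min(1.0, c / r) if r > 0 else 0.0  # Huber-style clip
        y = w @ x
        w += eta * weight * y * (x - y * w)         # Oja update
        w /= np.linalg.norm(w)
    return w

rng = np.random.default_rng(0)
direction = np.array([3.0, 1.0]) / np.sqrt(10)
data = [rng.standard_normal() * 2 * direction + rng.standard_normal(2) * 0.2
        for _ in range(2000)]
data = [x * 100 if rng.random() < 0.02 else x for x in data]  # outliers
print(robust_oja(data, dim=2))   # should align with +/- direction
```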
In ordinal scheduling problems, jobs are presented sorted by non-increasing sizes. However, it is not known in advance how many jobs of positive size will be presented, or what their exact sizes will be. This information is revealed only when the algorithm terminates. We analyze several variants of this problem with different objectives. The main studied model is one where the number of machines m >= 2 is also not known in advance, and only an upper bound M >= m is given. Jobs have to be partitioned into M inseparable groups called bags, which will be combined by the algorithm afterwards. The bags are merged one by one until the actual number of machines m is reached, and the output consists of the m bags. We analyze this model with respect to maximization of the minimum load. We also discuss a variant where the merging process is based on the knowledge of m but not on job sizes, and we study additional objectives, including makespan minimization.
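The bag model can be made concrete with a toy sketch: jobs are packed into M bags in ordinal round-robin order, and the bags are later combined onto the actual m machines. Both rules below are simple placeholders chosen for illustration, not the paper's competitive algorithms:

```python
# Toy illustration of the bag model: pack sorted jobs into M bags
# without knowing m, then combine the bags onto the actual m machines.

def make_bags(sorted_jobs, M):
    bags = [0.0] * M
    for i, p in enumerate(sorted_jobs):     # ordinal round-robin packing
        bags[i % M] += p
    return bags

def combine_to_m(bags, m):
    machines = [0.0] * m
    for b in sorted(bags, reverse=True):    # largest bag first,
        machines[machines.index(min(machines))] += b  # to least-loaded
    return machines

bags = make_bags([9, 7, 6, 5, 4, 3, 2, 1], M=4)
print(bags, min(combine_to_m(bags, m=2)))   # the minimum load
```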
Steering angle errors are known to be a major cause of performance degradation in adaptive beamformers, especially when the signal of interest is present in the measurements. Diagonal loading is one of the most widely used and effective methods to improve the robustness of adaptive beamformers. This paper presents two robust beamforming algorithms using efficient realisation techniques. The techniques do not use a pre-steered array; they allow an array designer to specify the frequency response of the processor in the look direction and to steer a main beam with the specified frequency response in an arbitrary direction. The proposed techniques are computationally efficient. They extend two previously published narrowband beamforming techniques to broadband beamforming. Simulation results exhibit the enhanced robustness of the broadband beamformer.
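Diagonal loading itself has a standard narrowband MVDR form: regularize the sample covariance as R + λI before inverting, which makes the weights tolerant to steering-vector errors. A minimal Python sketch (the paper's broadband, non-pre-steered structures are not reproduced here):

```python
# Diagonally loaded MVDR beamformer for a uniform linear array.
import numpy as np

def steering(theta, n_sensors, spacing=0.5):
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing * k * np.sin(theta))

def mvdr_weights(R, a, loading=0.1):
    # Load relative to the average eigenvalue of R, then invert.
    Rl = R + loading * np.trace(R).real / len(R) * np.eye(len(R))
    w = np.linalg.solve(Rl, a)
    return w / (a.conj() @ w)           # unit gain in the look direction

rng = np.random.default_rng(0)
n, snapshots = 8, 200
a_sig = steering(np.deg2rad(10), n)     # true signal direction
X = (np.outer(a_sig, rng.standard_normal(snapshots))            # signal
     + np.outer(steering(np.deg2rad(-40), n),
                2 * rng.standard_normal(snapshots))             # interferer
     + 0.1 * (rng.standard_normal((n, snapshots))
              + 1j * rng.standard_normal((n, snapshots))))      # noise
R = X @ X.conj().T / snapshots
w = mvdr_weights(R, steering(np.deg2rad(12), n))  # 2-degree mismatch
print(abs(w.conj() @ a_sig))            # gain toward the true signal
```

Without the loading term, the mismatched constraint lets the beamformer treat the true signal as interference and null it; the loaded covariance suppresses that self-nulling.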
The efficient parallel algorithms proposed for many fundamental problems, such as list ranking, integer sorting and computing preorder numberings on trees, are very sensitive to processor failures. The requirement of efficiency (commonly formalized using the Parallel time × Processors cost measure) has led to the design of highly tuned PRAM algorithms which, given the additional constraint of simple processor failures, unfortunately become inefficient or even incorrect. We propose a new notion of robustness that combines efficiency with fault tolerance. For the common case of fail-stop errors, we develop a general and easy-to-implement technique that makes many efficient parallel algorithms robust, e.g., algorithms for all the problems listed above. More specifically, for any dynamic pattern of fail-stop errors on a CRCW PRAM with at least one surviving processor, our method increases the original algorithm cost by at most a log² N multiplicative factor. Our technique is based on a robust solution of the Write-All problem, i.e., using P processors, write 1's in all locations of an N-sized array. In addition, we show that at least a log N/log log N multiplicative overhead will be incurred for certain patterns of failures by any algorithm that implements robust solutions to Write-All with P = N. However, by exploiting parallel slackness, we obtain an optimal cost algorithm when P ≤ N log log N/log² N.
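A toy simulation makes the Write-All setting concrete: P processors must write 1's into an N-cell array while suffering fail-stop crashes, with at least one survivor guaranteed. The naive sweeping scheme below only illustrates the problem and its work measure; it is not the paper's technique:

```python
# Toy simulation of Write-All under fail-stop errors. Each survivor
# sweeps the array from its own offset, so as long as one processor
# survives, every cell is eventually written.
import random

def write_all(N, P, crash_prob=0.3, seed=0):
    rng = random.Random(seed)
    cells = [0] * N
    alive = list(range(P))
    offset = {p: p * N // P for p in alive}
    work = 0
    while not all(cells):
        survivors = [p for p in alive if rng.random() > crash_prob]
        alive = survivors if survivors else alive[:1]  # >= 1 survivor
        for p in alive:
            cells[offset[p]] = 1        # one write per processor per step
            offset[p] = (offset[p] + 1) % N
            work += 1
    return work

print(write_all(N=64, P=8))  # total work under a random failure pattern
```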