This paper proposes a new probabilistic solution framework for robust control analysis and synthesis problems that can be expressed as the minimization of a linear objective subject to convex constraints parameterized by uncertainty terms. This includes the wide class of NP-hard control problems representable by means of parameter-dependent linear matrix inequalities (LMIs). It is shown in this paper that by appropriate sampling of the constraints one obtains a standard convex optimization problem (the scenario problem) whose solution is approximately feasible for the original (usually infinite) set of constraints, i.e., the measure of the set of original constraints that are violated by the scenario solution rapidly decreases to zero as the number of samples is increased. We provide an explicit and efficient bound on the number of samples required to attain a priori specified levels of probabilistic guarantee of robustness. A rich family of control problems which are in general hard to solve in a deterministically robust sense is therefore amenable to polynomial-time solution, if robustness is intended in the proposed risk-adjusted sense.
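To make the scenario idea concrete, here is a minimal Python sketch (not taken from the paper) for an uncertain linear program: draw N samples of the uncertainty, impose the corresponding sampled constraints, and solve the resulting ordinary LP. The constraint form, the uncertainty distribution, and the sample count are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical uncertain constraint: a(delta)^T x <= b, with a(delta) = a0 + delta,
# where delta is drawn from some uncertainty distribution (here: uniform on a box).
n_vars, n_scenarios = 3, 500          # sample count chosen arbitrarily for illustration
c = np.array([1.0, 2.0, -1.0])        # linear objective to minimize
a0 = np.array([1.0, 1.0, 1.0])
b = 10.0

# Scenario sampling: each sampled delta yields one deterministic constraint row.
deltas = rng.uniform(-0.2, 0.2, size=(n_scenarios, n_vars))
A_ub = a0 + deltas                    # one constraint row per sampled scenario
b_ub = np.full(n_scenarios, b)

# The scenario problem is an ordinary LP and is solved exactly.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 5)] * n_vars)
print("scenario solution:", res.x, "objective:", res.fun)
```

In the paper's framework, the number of samples would instead be chosen from the explicit bound that guarantees the desired probabilistic level of robustness.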
We study the problem of energy-balanced data propagation in wireless sensor networks. The energy balance property guarantees that the average per sensor energy dissipation is the same for all sensors in the network, during the entire execution of the data propagation protocol. This property is important since it prolongs the network's lifetime by avoiding early energy depletion of sensors. We propose a new algorithm that in each step decides whether to propagate data one hop towards the final destination (the sink), or to send data directly to the sink. This randomized choice balances the (cheap) one-hop transmissions with the direct transmissions to the sink, which are more expensive but "bypass" the sensors lying close to the sink. Note that, in most protocols, these sensors close to the sink tend to be overused and die out early. By a detailed analysis we precisely estimate the probabilities for each propagation choice in order to guarantee energy balance. The needed estimation can easily be performed by current sensors using simple-to-obtain information. Under some assumptions, we also derive a closed form for these probabilities. The fact (shown by our analysis) that direct (expensive) transmissions to the sink are needed only rarely shows that our protocol, besides being energy-balanced, is also energy efficient.
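As a rough illustration of the randomized propagation choice (not the protocol's actual parameters), the toy simulation below places sensors in rings around the sink; a message in ring i is forwarded one ring closer with probability p_i and sent directly to the sink otherwise, with transmission cost assumed to grow quadratically with distance. The ring model, the cost model, and the probabilities are all placeholder assumptions.

```python
import random

# Toy model: sensors arranged in rings 1..N around the sink; data generated in
# each ring must reach the sink (ring 0). Energy cost of a transmission over
# distance d is assumed proportional to d**2 (a common, hypothetical model).
N_RINGS = 10
N_MESSAGES = 10_000

# Placeholder per-ring probabilities of the cheap one-hop choice; the paper
# derives the exact values needed for energy balance, which are not reproduced here.
p_one_hop = [0.0] + [0.9] * N_RINGS

energy = [0.0] * (N_RINGS + 1)   # total energy spent by the sensors in each ring

for _ in range(N_MESSAGES):
    ring = random.randint(1, N_RINGS)          # message originates in a random ring
    while ring > 0:
        if ring > 1 and random.random() < p_one_hop[ring]:
            energy[ring] += 1.0                # one-hop transmission, cost ~ 1**2
            ring -= 1                          # move one ring closer to the sink
        else:
            energy[ring] += float(ring) ** 2   # direct transmission to the sink
            ring = 0

for i in range(1, N_RINGS + 1):
    print(f"ring {i}: total energy {energy[i]:.0f}")
```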
In this work, a new algorithm for drawing a weighted random sample of size m from a population of n weighted items, where m ≤ n, is presented. The algorithm can generate a weighted random sample in one pass over unknown populations.
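The abstract does not spell out the method, but one standard way to draw such a one-pass weighted sample of size m is the key-based reservoir approach sketched below: assign each item the random key u^(1/w) and keep the m largest keys in a min-heap. This is offered as an illustrative sketch, not as the paper's exact algorithm.

```python
import heapq
import random

def weighted_sample_one_pass(stream, m):
    """Draw m items from a stream of (item, weight) pairs in a single pass.

    Each item is given the random key u**(1/w) with u ~ Uniform(0, 1); the m
    items with the largest keys form the sample.  A min-heap of size m keeps
    only the current best keys, so the stream is never stored.
    """
    heap = []                                   # (key, item) pairs, smallest key first
    for item, w in stream:
        if w <= 0:
            continue                            # items with non-positive weight are skipped
        key = random.random() ** (1.0 / w)
        if len(heap) < m:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

# Usage: items 'a'..'e' with increasing weights; heavier items appear more often.
population = [("a", 1), ("b", 2), ("c", 3), ("d", 4), ("e", 5)]
print(weighted_sample_one_pass(iter(population), 3))
```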
In this paper, we study the following disc covering problem: Given a set of discs of various radii on the plane with centers on the grid points, find a subset of discs that maximizes the area covered by exactly one disc. This problem originates from an application in digital halftoning, with the best known approximation factor being 5.83 (Asano et al., 2004). We show that if the radii of the discs are bounded between two positive constants, then there exists a polynomial-time approximation scheme. Our techniques are based on the width-bounded geometric separator recently developed in Fu and Wang (2004), Fu (2006).
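To make the objective concrete, the following Monte Carlo sketch estimates, for a given subset of discs, the area covered by exactly one of them. It only illustrates the quantity being maximized; it is not part of the approximation scheme, and the bounding box and sample count are arbitrary.

```python
import random

def area_covered_exactly_once(discs, xmin, xmax, ymin, ymax, samples=100_000):
    """Monte Carlo estimate of the area covered by exactly one disc.

    `discs` is a list of (cx, cy, r) triples; the bounding box must contain
    all discs for the estimate to be meaningful.
    """
    box_area = (xmax - xmin) * (ymax - ymin)
    hits = 0
    for _ in range(samples):
        x = random.uniform(xmin, xmax)
        y = random.uniform(ymin, ymax)
        covering = sum((x - cx) ** 2 + (y - cy) ** 2 <= r * r for cx, cy, r in discs)
        hits += (covering == 1)
    return box_area * hits / samples

# Two unit discs on adjacent grid points: their overlap region counts for neither.
print(area_covered_exactly_once([(0, 0, 1.0), (1, 0, 1.0)], -2, 3, -2, 2))
```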
ISBN (print): 9728865600
The probabilistic path planner (PPP) is a general planning scheme that yields fast robot path planners for a wide variety of problems, involving high-degree-of-freedom articulated robots, nonholonomic robots, and multiple robots. This paper presents a new probabilistic approach for finding paths through narrow passages. Our probabilistic planner follows the general framework of the probabilistic roadmap (PRM) method, but to increase sample density in difficult areas such as narrow passages, we define two sampling constraints that yield many more sample points in such areas than a classic PRM obtains. We simulate our planner in 2D environments, and the simulation results show good performance for our planner.
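As a rough illustration of boosting sample density near narrow passages, the sketch below combines plain uniform PRM sampling with a bridge-test-style heuristic (two nearby colliding points whose midpoint is collision-free suggest a narrow passage). This heuristic stands in for the paper's two sampling constraints, which are not reproduced here; the obstacle shapes and parameters are assumptions.

```python
import random

def in_collision(p, obstacles):
    """Point-in-rectangle test; obstacles are (xmin, ymin, xmax, ymax) boxes."""
    x, y = p
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in obstacles)

def sample_free(obstacles, n_uniform=200, n_bridge=200, sigma=0.3):
    """Collect roadmap nodes: uniform samples plus bridge-test samples that
    concentrate nodes inside narrow passages."""
    nodes = []
    # Plain uniform sampling over the unit square (classic PRM behaviour).
    while len(nodes) < n_uniform:
        p = (random.random(), random.random())
        if not in_collision(p, obstacles):
            nodes.append(p)
    # Bridge test: if two nearby points collide but their midpoint is free,
    # the midpoint is likely inside a narrow passage, so keep it.
    tries, added = 0, 0
    while added < n_bridge and tries < 100_000:
        tries += 1
        a = (random.random(), random.random())
        b = (a[0] + random.gauss(0, sigma), a[1] + random.gauss(0, sigma))
        mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
        if in_collision(a, obstacles) and in_collision(b, obstacles) and not in_collision(mid, obstacles):
            nodes.append(mid)
            added += 1
    return nodes

# Two large boxes separated by a narrow vertical gap around x = 0.5.
obstacles = [(0.0, 0.3, 0.48, 0.7), (0.52, 0.3, 1.0, 0.7)]
print(len(sample_free(obstacles)), "roadmap nodes sampled")
```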
ISBN (print): 9783540280613
In this paper, we study the following disc covering problem: Given a set of discs of various radii on the plane with centers on the grid points, find a subset of discs that maximizes the area covered by exactly one disc. This problem originates from an application in digital halftoning, with the best known approximation factor being 5.83 (Asano et al., 2004). We show that if the radii of the discs are bounded between two positive constants, then there exists a polynomial-time approximation scheme. Our techniques are based on the width-bounded geometric separator recently developed in Fu and Wang (2004), Fu (2006).
ISBN (print): 3540446249
Internet supercomputing is becoming a powerful tool for harnessing massive amounts of computational resources. However, in typical master-worker settings the reliability of computation crucially depends on the ability of the master to rely on the computation performed by the workers. Fernandez, Georgiou, Lopez, and Santos [12,13] considered a system consisting of a master process and a collection of worker processes that can execute tasks on behalf of the master and that may act maliciously by deliberately returning fallacious results. The master decides on the correctness of the results by assigning the same task to several workers. The master is charged one work unit for each task performed by a worker. The goal is to design an algorithm that enables the master to determine the correct result with high probability, and at the least possible cost. Fernandez et al. assume that the number of faulty processes or the probability of a process acting maliciously is known to the master. In this paper this assumption is removed. In the setting with n processes and n tasks we consider two different failure models: model F, where an f-fraction, 0 < f < 1/2, of the workers provide faulty results with probability p, 0 < p < 1/2, and the master knows the values of f and p; and model F-a, where the master does not know the values of f and p. For model F-a we provide an algorithm, based on the Stopping Rule Algorithm by Dagum, Karp, Luby, and Ross [10], that can estimate f and p with an (ε, δ)-approximation, for any 0 < δ < 1 and ε > 0. This algorithm runs in O(log n) time, with O(log² n) message complexity, O(log² n) task-oriented work, and O(n log n) total-work complexity. We also provide a randomized algorithm for detecting the faulty processes, i.e., identifying the processes that have a non-zero probability of failure in model F-a, with task-oriented work O(n) and time O(log n). A lower bound on the total-work complexity of performing n tasks correctly with high probability is shown. Finally, two rand...
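As a simplified illustration of the estimation step, the sketch below estimates the probability that a randomly chosen worker returns a wrong answer, using a fixed Hoeffding-style sample size rather than the adaptive Stopping Rule Algorithm of Dagum et al.; the worker model and parameter values are hypothetical.

```python
import math
import random

def simulate_worker(f=0.3, p=0.4):
    """A randomly chosen worker: with probability f it is faulty, and a faulty
    worker returns a wrong answer with probability p (values are illustrative)."""
    return random.random() < p if random.random() < f else False

def estimate_error_rate(eps=0.05, delta=0.01):
    """Estimate q = Pr[a random worker answers incorrectly] to within +/- eps
    with confidence 1 - delta, using a fixed Hoeffding-style sample size
    (a simplification of the adaptive stopping rule used in the paper)."""
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    wrong = sum(simulate_worker() for _ in range(n))
    return wrong / n, n

q_hat, n = estimate_error_rate()
print(f"estimated error rate {q_hat:.3f} from {n} probes (true value 0.12)")
```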
ISBN (print): 9781424402212
We consider a class of queueing networks referred to as "generalized constrained queueing networks" which form the basis of several different communication networks and information systems. These networks consist of a collection of queues such that only certain sets of queues can be concurrently scheduled. Whenever a queue is scheduled for service, the system transmits a packet and receives a certain reward. Different rewards are obtained for transmitting packets from different queues, and furthermore, the reward obtained for serving a queue depends on the set of concurrently scheduled queues. We demonstrate that the dependence of the rewards on the schedules alters fundamental relations between performance metrics like throughput and stability. Specifically, maximizing throughput is no longer equivalent to maximizing the stability region; we therefore need to maximize one subject to certain constraints on the other. Since stability is critical for bounding packet delays and buffer overflow, we focus on maximizing throughput subject to stabilizing the system. We design provably optimal scheduling strategies that attain this goal by scheduling the queues for service based on the queue lengths and the rewards provided by different selections. The proposed scheduling strategies are, however, computationally complex. We subsequently develop techniques to reduce the complexity and yet attain the same throughput and stability region. We demonstrate that our framework is general enough to accommodate random rewards and random scheduling constraints.
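As an illustration of scheduling based on queue lengths and schedule-dependent rewards, the sketch below uses a plain max-weight-style rule that picks the feasible activation set maximizing a weighted sum of backlog and reward. The rule, the trade-off weight, and the example constraints are assumptions; it is not claimed to be the provably optimal policy constructed in the paper.

```python
import random

def pick_schedule(queues, schedules, rewards, v=1.0):
    """Choose the feasible activation set maximizing
    sum over scheduled queues of (queue length + v * reward).

    `schedules` lists which sets of queues may be served together; `rewards[s][q]`
    is the reward for serving queue q within schedule s.  The trade-off weight v
    and the whole rule are illustrative, not the paper's construction.
    """
    def weight(s):
        return sum(queues[q] + v * rewards[s][q] for q in schedules[s])
    return max(range(len(schedules)), key=weight)

# Three queues; only {0, 1} or {2} may be served concurrently (hypothetical constraint).
schedules = [(0, 1), (2,)]
rewards = [{0: 1.0, 1: 0.5}, {2: 2.0}]
queues = [4, 1, 3]

for step in range(5):
    s = pick_schedule(queues, schedules, rewards)
    for q in schedules[s]:
        queues[q] = max(0, queues[q] - 1)          # serve one packet per scheduled queue
    queues[random.randrange(3)] += 1               # one random arrival per slot
    print(f"step {step}: served schedule {schedules[s]}, queues now {queues}")
```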
ISBN (print): 9783540499947
We study the following optimization problem: the input is a multigraph G = (V, E) and an integer parameter g. A feasible solution consists of a (not necessarily proper) coloring of E with colors 1, 2, ..., g. Denote by d(v, i) the number of edges colored i incident to v. The objective is to minimize Σ_{v ∈ V} max_i d(v, i), which roughly corresponds to the "imbalance" of the edge coloring. This problem was proposed by Berry and Modiano (INFOCOM 2004), with the goal of optimizing the deployment of tunable ports in optical networks. Following them we call the optimization problem MTPS - Minimum Tunable Port with Symmetric Assignments. Among other results, they give a reduction from Edge Coloring showing that MTPS is NP-hard and then give a 2-approximation algorithm. We give a (3/2)-approximation algorithm. Key to this problem is the following question: given a multigraph G = (V, E) of maximum degree g, what fraction of the vertices can be properly edge-colored in a coloring with g colors, where a vertex is properly edge-colored if the edges incident to it have different colors? Our main lemma states that there is such a coloring with half of the vertices properly edge-colored. For g ≥ 4, two thirds of the vertices can be made properly edge-colored. Our algorithm is based on g maximum matching computations (total running time O(gm√(n + m/g))) and a local optimization procedure, which by itself gives a 2-approximation. An interesting analysis gives an expected O((gn + m) log(gn + m)) running time for the local optimization procedure.
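To make the objective concrete, the helper below computes the imbalance Σ_v max_i d(v, i) of a given edge coloring of a multigraph; it only evaluates the objective and is not the approximation algorithm.

```python
from collections import defaultdict

def imbalance(edges, coloring):
    """Compute the sum over vertices v of max over colors i of d(v, i),
    where d(v, i) is the number of edges of color i incident to v.

    `edges` is a list of (u, v) pairs and `coloring[e]` is the color of edge e.
    """
    counts = defaultdict(lambda: defaultdict(int))   # counts[v][color] = d(v, color)
    for e, (u, v) in enumerate(edges):
        c = coloring[e]
        counts[u][c] += 1
        counts[v][c] += 1
    return sum(max(per_color.values()) for per_color in counts.values())

# A triangle plus a parallel edge, colored with g = 2 colors.  Vertices 0 and 1
# have degree 3, so with only 2 colors their per-vertex maximum is at least 2;
# this coloring achieves imbalance 2 + 2 + 1 = 5.
edges = [(0, 1), (1, 2), (0, 2), (0, 1)]
print(imbalance(edges, [1, 2, 1, 2]))
```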
ISBN (print): 3540490183
We reduce the problem of proving the convergence of a randomized self-stabilizing algorithm under k-bounded policies to the convergence of the same algorithm under a specific policy. As a consequence, all k-bounded schedules are equivalent: a given algorithm is self-stabilizing under one of them if and only if it is self-stabilizing under any of them.