Grasp and manipulation planning of slippery objects often relies on the "form closure" grasp, which is stable regardless of the external force applied to the object. Despite its importance, an efficient quantitative test for form closure valid for any number of contact points has not been available. The primary contribution of this paper is the introduction of such a test formulated as a linear program, the optimal objective value of which provides a measure of how far a grasp is from losing form closure. When the grasp does not have form closure, manipulation planning requires a means to predict the object's stability and instantaneous velocity, given the joint velocities of the hand. The "classical" approach to computing these quantities is to solve the systems of kinematic inequalities corresponding to all possible combinations of separating or sliding at the contacts. All combinations resulting in the interpenetration of bodies or the infeasibility of the equilibrium equations are rejected. The remaining combination (sometimes there is more than one) is consistent with all the constraints and is used to compute the velocity of the manipulated object and the contact forces, which indicate whether or not the object is stable. Our secondary contribution is the formulation of a linear program whose solution yields the same information as the classical approach. The benefit of this formulation is that explicit testing of all possible combinations of contact interactions is usually avoided by the algorithm used to solve the linear program.
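As a concrete illustration of how such a quantitative test can be posed, the sketch below implements a standard first-order, frictionless form-closure margin as a linear program with SciPy; the wrench construction, the normalization constraint, and the example grasp are illustrative assumptions and not necessarily the exact LP of the paper.

```python
# A minimal sketch of a quantitative form-closure test posed as a linear
# program, in the spirit of the abstract above; the exact LP of the paper may
# differ.  Frictionless point contacts on a planar object: contact i at
# position p_i with inward unit normal n_i contributes the wrench
# w_i = (n_i, p_i x n_i).  First-order form closure holds iff the wrenches
# span R^3 and admit a strictly positive null combination, so the LP below
# returns a positive margin exactly in that case.
import numpy as np
from scipy.optimize import linprog

def form_closure_margin(W):
    """W: 3 x k matrix of contact wrenches. Returns (margin, has_form_closure)."""
    dim, k = W.shape
    # variables: lambda_1..lambda_k, d ; maximize d
    c = np.zeros(k + 1)
    c[-1] = -1.0
    A_eq = np.hstack([W, np.zeros((dim, 1))])                           # W @ lambda = 0
    b_eq = np.zeros(dim)
    A_ub = np.vstack([np.hstack([-np.eye(k), np.ones((k, 1))]),          # d <= lambda_i
                      np.hstack([np.ones((1, k)), np.zeros((1, 1))])])   # sum(lambda) <= k
    b_ub = np.concatenate([np.zeros(k), [k]])
    bounds = [(0, None)] * k + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    margin = res.x[-1]
    return margin, bool(margin > 1e-9 and np.linalg.matrix_rank(W) == dim)

# Unit square held by four frictionless contacts, offset so that the contact
# normals can generate torque in both directions (a classic form-closure grasp).
contacts = [((0.25, -0.5), (0, 1)), ((-0.25, 0.5), (0, -1)),
            ((-0.5, 0.25), (1, 0)), ((0.5, -0.25), (-1, 0))]
W = np.array([[nx, ny, px * ny - py * nx] for (px, py), (nx, ny) in contacts]).T
print(form_closure_margin(W))   # positive margin -> form closure
```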
We present convergence conditions for a generic primal-dual interior-point algorithm with multiple corrector directions. The corrector directions can be generated by any approach. The search direction is obtained by combining predictor and corrector directions through a small linear program. We also propose a new approach to generate corrector directions. This approach generates directions using information from an appropriately defined Krylov subspace. We propose efficient implementation strategies for our approach that follow the analysis of this paper. Numerical experiments illustrating the features of the proposed approach and its practical usefulness are reported.
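For orientation, one simple way to combine a predictor direction d^p with corrector directions d^1, ..., d^m through a small linear program is sketched below: choose weights w that minimize the largest component of the combined linearized residual r_0 + sum_j w_j r_j and take the search direction d(w) = d^p + sum_j w_j d^j. This is a hedged illustration of the general idea, not necessarily the LP used in the paper.

```latex
% Illustrative combination LP (not necessarily the paper's): pick corrector
% weights w by minimizing the infinity norm of the combined linearized
% residual; the search direction is then d(w) = d^{p} + \sum_j w_j d^{j}.
\begin{aligned}
\min_{w \in \mathbb{R}^{m},\; t \in \mathbb{R}} \quad & t \\
\text{s.t.} \quad & -t\,\mathbf{1} \;\le\; r_0 + \sum_{j=1}^{m} w_j\, r_j \;\le\; t\,\mathbf{1}.
\end{aligned}
```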
We give in this paper a sufficient condition under which the least fixpoint of the equation X = a + f(X)X equals the least fixpoint of the equation X = a + f(a)X. We then apply that condition to recursive logic programs containing chain rules: we translate it into a sufficient condition under which a recursive logic program containing n ≥ 2 recursive calls in the bodies of the rules is equivalent to a linear program containing at most one recursive call in the bodies of the rules. We conclude with a discussion comparing our condition with the other approaches to linearization studied in the literature.
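A small concrete instance may help fix ideas. The example below (an illustration chosen here, not taken from the paper) interprets the two equations over binary relations, where + is union, juxtaposition is relational composition, and f is the identity; both recursions then compute the transitive closure and have the same least fixpoint.

```python
# Illustrative instance of the linearization question: over binary relations,
# X = a + f(X)X with f = id is the nonlinear transitive-closure recursion
# X = a ∪ X∘X, while X = a + f(a)X is the linear recursion X = a ∪ a∘X.
def compose(r, s):
    """Relational composition r∘s = {(x, z) : (x, y) in r and (y, z) in s}."""
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

def lfp(step):
    """Least fixpoint of a monotone step function by Kleene iteration from ∅."""
    x = set()
    while True:
        nxt = step(x)
        if nxt == x:
            return x
        x = nxt

edge = {(1, 2), (2, 3), (3, 4)}                     # the base relation a
nonlinear = lfp(lambda X: edge | compose(X, X))     # X = a + f(X)X with f = id
linear = lfp(lambda X: edge | compose(edge, X))     # X = a + f(a)X with f = id
print(nonlinear == linear)   # True: both equal the transitive closure of edge
```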
The distinguished econometrician Ragnar Frisch (1895-1973) also played an important role in optimization theory. In fact, he was a pioneer of interior-point methods. This note reconsiders his contribution, relating it to history and modern developments.
In wireless networks, power allocation is an effective technique for prolonging network lifetime, achieving better quality-of-service (QoS), and reducing network interference. However, these benefits depend on knowledge of the channel state information (CSI), which is hardly perfect. Therefore, robust algorithms that take into account such CSI uncertainties play an important role in the design of practical systems. In this paper, we develop relay power allocation algorithms for noncoherent and coherent amplify-and-forward (AF) relay networks. The goal is to minimize the total relay transmission power under individual relay power constraints, while satisfying a QoS requirement. To make our algorithms practical and attractive, our power update rate is designed to follow large-scale fading, i.e., in the order of seconds. We show that, in the presence of perfect global CSI, our power optimization problems for noncoherent and coherent AF relay networks can be formulated as a linear program and a second-order cone program (SOCP), respectively. We then introduce robust optimization methodology that accounts for uncertainties in the global CSI. In the presence of ellipsoidal uncertainty sets, the robust counterparts of our optimization problems for noncoherent and coherent AF relay networks are shown to be an SOCP and a semi-definite program, respectively. Our results reveal that ignoring uncertainties associated with global CSI often leads to poor performance. We verify that our proposed algorithms can provide significant power savings over a naive scheme that employs maximum transmission power at each relay node. This work highlights the importance of robust algorithms with practical power update rates in realistic wireless networks.
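To illustrate the LP structure claimed for the noncoherent case, the sketch below minimizes total relay power subject to one QoS constraint assumed to be already linearized and to per-relay caps. The coefficient vector g standing in for the channel-dependent QoS weights, and all numbers, are placeholders rather than the paper's channel model.

```python
# Hedged sketch of the noncoherent AF power-allocation LP: minimize total relay
# power subject to a (presumed linear) QoS constraint and per-relay power caps.
# The QoS coefficients g, the target gamma, and the caps are placeholder data.
import numpy as np
from scipy.optimize import linprog

g = np.array([0.8, 1.2, 0.5, 0.9])      # placeholder per-relay QoS coefficients
gamma = 1.5                              # required QoS level (placeholder)
p_max = np.array([1.0, 1.0, 1.0, 1.0])   # individual relay power constraints

# min 1^T p   s.t.   g^T p >= gamma,   0 <= p <= p_max
res = linprog(c=np.ones_like(g),
              A_ub=-g.reshape(1, -1), b_ub=[-gamma],
              bounds=list(zip(np.zeros_like(p_max), p_max)))
print(res.x, res.x.sum())   # per-relay allocation and total relay power
```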
Peer-to-peer (P2P) systems provide a scalable way to stream content to multiple receivers over the Internet. The maximum rate achievable by all receivers is the capacity of a P2P streaming session. We provide a taxonomy of sixteen problem formulations, depending on whether there is a single P2P session or there are multiple concurrent sessions, whether the given topology is a full mesh graph or an arbitrary graph, whether the number of peers a node can have is bounded or not, and whether there are nonreceiver relay nodes or not. In each formulation, computing P2P streaming capacity requires the computation of an optimal set of multicast trees, with an exponential complexity, except in the three simplest formulations that have been recently solved with polynomial time algorithms. These solutions, however, do not extend to the other more general formulations. In this paper, we develop a family of constructive, polynomial-time algorithms that can compute P2P streaming capacity and the associated multicast trees, arbitrarily accurately for seven formulations, to a factor-of-4 approximation for two formulations, and to a factor logarithmic in the number of receivers for two formulations. The optimization problem is reformulated in each case so as to convert the combinatorial problem into a linear program with an exponential number of variables. The linear program is then solved using a primal-dual approach. The algorithms combine an outer loop of primal-dual update with an inner loop of smallest price tree construction, driven by the update of dual variables in the outer loop. We show that when the construction of the smallest price tree can be carried out arbitrarily accurately in polynomial time, so can the computation of P2P streaming capacity. We also develop several efficient algorithms for smallest price tree construction. Using the developed algorithms, we investigate the impact of several factors on P2P streaming capacity using topologies derived from statistics of uplink capacities.
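The outer primal-dual loop can be sketched in the style of Garg-Koenemann fractional packing, as below. Here smallest_price_tree is a hypothetical stand-in for the paper's tree-construction subroutines, and the usual epsilon-dependent scaling constants are omitted, so this is a structural sketch rather than the paper's algorithm.

```python
# Sketch of a primal-dual (Garg-Koenemann style) outer loop: uplink capacities
# carry dual prices, each iteration asks an oracle for a smallest price tree and
# pushes rate along it, and prices grow multiplicatively.
def p2p_capacity_sketch(uplink_cap, smallest_price_tree, eps=0.1):
    """uplink_cap: dict node -> uplink capacity.
    smallest_price_tree(prices): dict node -> uplink load per unit of rate."""
    prices = {v: eps / c for v, c in uplink_cap.items()}   # initial dual prices
    load = {v: 0.0 for v in uplink_cap}                    # accumulated uplink usage
    rate = 0.0
    while sum(prices[v] * uplink_cap[v] for v in uplink_cap) < 1.0:
        tree_load = smallest_price_tree(prices)            # cheapest tree at current prices
        # push as much rate as the tightest uplink on the chosen tree allows
        delta = min(uplink_cap[v] / l for v, l in tree_load.items() if l > 0)
        rate += delta
        for v, l in tree_load.items():
            load[v] += delta * l
            prices[v] *= 1 + eps * delta * l / uplink_cap[v]   # multiplicative update
    # scale the accumulated rate down so that every uplink capacity is respected
    overload = max(load[v] / uplink_cap[v] for v in uplink_cap)
    return rate / max(overload, 1.0)
```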
We study integrated prefetching and caching in single and parallel disk systems. In the first part of the paper, we investigate approximation algorithms for the single disk problem. There exist two very popular approximation algorithms called Aggressive and Conservative for minimizing the total elapsed time. We give a refined analysis of the Aggressive algorithm, improving the original analysis by Cao et al. We prove that our new bound is tight. Additionally, we present a new family of prefetching and caching strategies and give algorithms that perform better than Aggressive and Conservative. In the second part of the paper, we investigate the problem of minimizing stall time in parallel disk systems. We present a polynomial time algorithm for computing a prefetching/caching schedule whose stall time is bounded by that of an optimal solution. The schedule uses at most 2(D - 1) extra memory locations in cache. This is the first polynomial time algorithm that, using a small amount of extra resources, computes schedules whose stall times are bounded by that of optimal schedules not using extra resources. Our algorithm is based on the linear programming approach of [Journal of the ACM 47 (2000) 969]. However, in order to achieve minimum stall times, we introduce the new concept of synchronized schedules in which fetches on the D disks are performed completely in parallel. (c) 2005 Elsevier Inc. All rights reserved.
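For readers unfamiliar with the Aggressive rule, the following is a deliberately simplified simulation of it (fetch the next missing block whenever the disk is idle, evict the cached block referenced farthest in the future, and never evict a block needed before the fetched one). The cost model of unit service time, a fixed fetch latency, a single fetch in flight, and the demand-fetch fallback are simplifying assumptions of this sketch, not the exact model analyzed in the paper.

```python
# Simplified single-disk simulation of the Aggressive prefetching/caching rule.
def next_ref(seq, pos, block):
    """Position of the next reference to `block` at or after `pos` (inf if none)."""
    for i in range(pos, len(seq)):
        if seq[i] == block:
            return i
    return float("inf")

def aggressive_elapsed_time(seq, initial_cache, fetch_latency):
    cache = set(initial_cache)
    time = 0
    in_flight, done_at = None, None            # block being fetched, completion time
    for pos, block in enumerate(seq):
        # Complete a fetch whose latency has already elapsed.
        if in_flight is not None and time >= done_at:
            cache.add(in_flight)
            in_flight = None
        # Aggressive: whenever the disk is idle, fetch the next missing block,
        # evicting the cached block referenced farthest in the future -- unless
        # that victim is needed before the fetched block ("do no harm").
        if in_flight is None:
            missing = next((b for b in seq[pos:] if b not in cache), None)
            if missing is not None and cache:
                victim = max(cache, key=lambda b: next_ref(seq, pos, b))
                if next_ref(seq, pos, victim) > next_ref(seq, pos, missing):
                    cache.remove(victim)
                    in_flight, done_at = missing, time + fetch_latency
        # Serve the current reference, stalling on a miss.
        if block not in cache:
            if in_flight != block:             # fall back to a demand fetch
                if in_flight is not None:      # first drain the fetch in flight
                    time = max(time, done_at)
                    cache.add(in_flight)
                victim = max(cache, key=lambda b: next_ref(seq, pos, b))
                cache.remove(victim)
                in_flight, done_at = block, time + fetch_latency
            time = max(time, done_at)          # stall until the block arrives
            cache.add(block)
            in_flight = None
        time += 1                              # one time unit per served reference
    return time

# Example: 3-slot cache, fetch latency of 4 time units.
print(aggressive_elapsed_time("abcdabeab", {"a", "b", "c"}, 4))
```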
In many decision-making situations, decision makers (DMs) have difficulty in specifying their perceived state probability values or even probability value ranges. However, they may find it easier to tell how much more likely is the occurrence of a given state when compared with other states. An approach is proposed to identify the efficient strategies of a decision-making situation where the DMs involved declare their perceived relative likelihood of the occurrence of the states by pair-wise comparisons. The pair-wise comparisons of all the states are used to construct a judgment matrix, which is transformed into a probability matrix. The columns of the transformed matrix delineate a convex cone of the state probabilities. Next, an efficiency linear program (ELP) is formulated for each available strategy, whose optimal solution determines whether or not that strategy is efficient within the probability region defined by the cone. Only an efficient strategy can be optimum for a given set of state probability values. Inefficient strategies are never used irrespective of state probability values. The application of the approach is demonstrated using examples where DMs offer differing views on the occurrence of the states. (C) 1999 Elsevier Science B.V. All rights reserved.
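The sketch below shows one plausible way to set up such an efficiency test as a small LP: the normalized columns of the judgment matrix span the admissible probability cone, and a strategy is kept if some probability vector in that cone makes it best. The column normalization, the margin-based ELP, and the example data are illustrative choices, not necessarily the paper's formulation.

```python
# Hedged sketch of an efficiency LP over a cone of state probabilities derived
# from pairwise likelihood judgments (illustrative formulation and data only).
import numpy as np
from scipy.optimize import linprog

def is_efficient(payoffs, judgment, s, tol=1e-9):
    """payoffs: m strategies x n states; judgment[i, j] ~ how much more likely
    state i is than state j.  True if strategy s can attain the best expected
    payoff for some probability vector in the cone of normalized judgment columns."""
    m, n = payoffs.shape
    P = judgment / judgment.sum(axis=0, keepdims=True)   # columns: candidate prob. vectors
    k = P.shape[1]
    c = np.zeros(k + 1)
    c[-1] = -1.0                                         # maximize the margin delta
    A_eq = np.concatenate([np.ones(k), [0.0]]).reshape(1, -1)   # cone weights sum to 1
    b_eq = [1.0]
    rows = []
    for t in range(m):
        if t != s:
            gap = (payoffs[s] - payoffs[t]) @ P          # expected-payoff gap per column
            rows.append(np.concatenate([-gap, [1.0]]))   # delta - gap . w <= 0
    A_ub, b_ub = np.array(rows), np.zeros(len(rows))
    bounds = [(0, None)] * k + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return bool(res.success) and res.x[-1] >= -tol

# Four strategies over three states; the judgment matrix is intentionally
# inconsistent so the probability cone is not a single point.
payoffs = np.array([[10.0, 4.0, 1.0],
                    [ 6.0, 6.0, 5.0],
                    [ 2.0, 3.0, 9.0],
                    [ 4.0, 4.0, 4.0]])
judgment = np.array([[1.0, 2.0, 0.5],
                     [0.5, 1.0, 2.0],
                     [2.0, 0.5, 1.0]])
print([is_efficient(payoffs, judgment, s) for s in range(4)])
# -> [True, True, True, False]: the last, state-wise dominated strategy is inefficient
```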
Cloud computing is an emerging paradigm that provides hardware, platform and software resources as services based on a pay-as-you-go model. It is being increasingly used for hosting and executing service-based business processes. However, business processes are subject to dynamic evolution during their life-cycle due to the highly dynamic evolution of the cloud environment. Therefore, to efficiently manage them according to the autonomic computing paradigm, service-based business processes can be associated with autonomic managers. Autonomic managers monitor these processes, analyze monitoring data, plan configuration actions, and execute these actions on these processes. The main objective of cloud computing is to improve the performance level while minimizing operating costs. Thus, due to the diversity of business process requirements and the heterogeneity of cloud resources, discovering the optimal management cost of a business process in the cloud becomes a highly challenging problem. For this purpose, we propose an approach based on an integer linear program to find the optimal allocation of cloud resources that meets customer requirements and resource constraints. In addition, to validate our approach under realistic conditions and inputs, we propose an extension of the CloudSim simulator to analyze the cloud resources consumed by an autonomic business process. Experiments conducted on two real datasets highlight the effectiveness and performance of our approach.
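A toy integer linear program in the spirit of this allocation step is sketched below: each activity of a process is assigned to one VM offering with enough CPU and RAM at minimum cost. The offerings, demands, and prices are made-up illustration data, and the model is a deliberately small stand-in for the paper's formulation.

```python
# Toy ILP: assign each process activity to one cloud VM offering that satisfies
# its CPU/RAM demand, minimizing total hourly cost (illustrative data only).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

vms = [(1, 2, 0.05), (2, 4, 0.10), (4, 16, 0.34)]   # (vCPUs, RAM in GB, hourly price)
acts = [(1, 1), (2, 3), (3, 8)]                      # (vCPUs needed, RAM needed)

n_a, n_v = len(acts), len(vms)
cost = np.array([price for _, _, price in vms] * n_a)   # x is laid out activity-major
# feasibility mask: activity i may only run on a VM with enough CPU and RAM
ub = np.array([1.0 if (v[0] >= a[0] and v[1] >= a[1]) else 0.0
               for a in acts for v in vms])
# each activity is placed on exactly one VM
A = np.zeros((n_a, n_a * n_v))
for i in range(n_a):
    A[i, i * n_v:(i + 1) * n_v] = 1.0
res = milp(c=cost,
           constraints=LinearConstraint(A, lb=np.ones(n_a), ub=np.ones(n_a)),
           integrality=np.ones(n_a * n_v),
           bounds=Bounds(lb=np.zeros(n_a * n_v), ub=ub))
print(res.x.reshape(n_a, n_v), res.fun)   # assignment matrix and total hourly cost
```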
Given a closed, convex set X ⊆ R^n containing the origin, we consider the problem (P): max{c^T x : x ∈ X}. We show that, for a fixed dimension n and a fixed ε, 0 < ε < 1, the existence of a combinatorial, strongly polynomial ε-approximation separation algorithm for the set X is equivalent to the existence of a combinatorial, strongly polynomial ε-approximation optimization algorithm for the problem (P).
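For orientation, the two oracle notions can be formalized roughly in the weak-oracle style of Groetschel, Lovasz and Schrijver, as sketched below; the paper's precise handling of the approximation parameter may differ, so this is only an orienting sketch.

```latex
% Orienting sketch only; the paper's exact definitions may differ.
\begin{aligned}
&\textbf{$\epsilon$-approx.\ optimization for } (P):\
  \text{return } \bar{x}\in X \text{ with } c^{T}\bar{x} \ \ge\ (1-\epsilon)\max\{c^{T}x : x\in X\};\\
&\textbf{$\epsilon$-approx.\ separation for } X:\
  \text{given } y,\ \text{either assert } y\in(1+\epsilon)X
  \text{ or return } a \text{ with } a^{T}y > \max\{a^{T}x : x\in X\}.
\end{aligned}
```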