Since diffusion processes arise in so many different fields, efficient techniques for the simulation of sample paths, such as discretization schemes, are crucial tools in applied probability. Such methods yield approximations of first-passage times as a by-product. For efficiency reasons, it is particularly challenging to simulate the hitting time directly, without constructing the whole path. In the Brownian case, the distribution of the first-passage time is explicitly known and can easily be used for simulation purposes. The authors introduce a new rejection sampling algorithm which performs an exact simulation of the first-passage time for general one-dimensional diffusion processes. The efficiency of the method, which is essentially based on Girsanov's transformation, is demonstrated through theoretical results and numerical examples.
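The Brownian special case mentioned in the abstract can be made concrete. For standard Brownian motion started at 0, the first-passage time to a level a > 0 satisfies T_a = a²/Z² in distribution, with Z standard normal (a consequence of the reflection principle); the sketch below uses this to sample T_a exactly and checks the result against the known CDF P(T_a ≤ t) = 2(1 − Φ(a/√t)). Function names are illustrative; this is not the authors' general rejection algorithm for diffusions.

```python
import math
import random

def brownian_fpt(a, rng=random):
    """Exact sample of the first time standard Brownian motion
    started at 0 hits the level a > 0, via T_a = a**2 / Z**2
    with Z standard normal (T_a follows the Levy distribution)."""
    z = rng.gauss(0.0, 1.0)
    return a * a / (z * z)

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Empirical check against P(T_a <= t) = 2 * (1 - Phi(a / sqrt(t))).
random.seed(0)
a, t, n = 1.0, 2.0, 200_000
empirical = sum(brownian_fpt(a) <= t for _ in range(n)) / n
exact = 2.0 * (1.0 - norm_cdf(a / math.sqrt(t)))
```

The general diffusion case treated in the paper has no such closed form, which is why a Girsanov-based rejection sampler is needed.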
Every pair of points lying on a polygonal path P in the plane has a detour associated with it, which is the ratio between their distance along the path and their Euclidean distance. Given a set S of points along the path, this information can be encoded in a weighted complete graph on S. Among all spanning trees on this graph, a bottleneck spanning tree is one whose maximum edge weight is minimum. We refer to such a tree as a bottleneck detour tree of S. In other words, a bottleneck detour tree of S is a spanning tree in which the maximum detour (with respect to the original path) between pairs of adjacent points is minimum. We show how to find a bottleneck detour tree in expected O(n log³ n + m) time, where P consists of m edges and |S| = n. (C) 2019 Elsevier B.V. All rights reserved.
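A useful fact behind this problem is that every minimum spanning tree is also a minimum-bottleneck spanning tree, so a naive quadratic sketch is just Kruskal's algorithm on the complete detour graph. The version below (assuming distinct points, each given with its arc-length position along P) is for illustration only; the paper's algorithm achieves expected O(n log³ n + m).

```python
import math
from itertools import combinations

def bottleneck_detour_tree(points):
    """Naive sketch: `points` is a list of (x, y, arc_length)
    triples for the points of S, in order along the path.
    Kruskal's algorithm on the complete detour graph yields a
    bottleneck detour tree, since any MST minimizes the maximum
    edge weight.  Returns (tree_edges, bottleneck_detour)."""
    n = len(points)
    edges = []
    for i, j in combinations(range(n), 2):
        (x1, y1, s1), (x2, y2, s2) = points[i], points[j]
        detour = abs(s1 - s2) / math.hypot(x1 - x2, y1 - y2)
        edges.append((detour, i, j))
    edges.sort()
    parent = list(range(n))
    def find(v):                       # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    tree, bottleneck = [], 0.0
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
            bottleneck = max(bottleneck, w)
    return tree, bottleneck
```

For three points on an L-shaped path, e.g. (0,0), (1,0), (1,1) with arc lengths 0, 1, 2, the tree uses the two consecutive pairs (detour 1) and avoids the corner-cutting pair (detour √2).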
In the Steiner point removal problem, we are given a weighted graph G = (V, E) and a set of terminals K ⊆ V of size k. The objective is to find a minor M of G with only the terminals as its vertex set, such that distances between the terminals are preserved up to a small multiplicative distortion. Kamma, Krauthgamer, and Nguyen [SIAM J. Comput., 44 (2015), pp. 975-995] devised a ball-growing algorithm with exponential distributions to show that the distortion is at most O(log⁵ k). Cheung [Proceedings of the 29th Annual ACM/SIAM Symposium on Discrete Algorithms, 2018, pp. 1353-1360] improved the analysis of the same algorithm, bounding the distortion by O(log² k). We devise a novel and simpler algorithm (called the Relaxed-Voronoi algorithm) which incurs distortion O(log k). This algorithm can be implemented in almost linear time (O(|E| log |V|)).
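The general shape of Voronoi-based minor constructions can be sketched as follows: partition V into cells around the terminals and contract each cell. The actual Relaxed-Voronoi algorithm grows the terminal regions with randomly relaxed radii to obtain the O(log k) bound; the plain nearest-terminal partition below is only a simplified illustration of the construction, with an illustrative rule for the minor's edge weights.

```python
import heapq

def voronoi_minor(graph, terminals):
    """Simplified sketch of a Voronoi-partition minor for Steiner
    point removal.  `graph` maps u -> list of (v, weight) pairs
    (undirected, both directions listed).  Each vertex is assigned
    to its nearest terminal via multi-source Dijkstra, each cell is
    contracted, and an edge crossing two cells contributes the
    weight dist(t_u, u) + w(u, v) + dist(v, t_v)."""
    dist = {t: 0.0 for t in terminals}
    home = {t: t for t in terminals}       # nearest terminal per vertex
    pq = [(0.0, t, t) for t in terminals]
    heapq.heapify(pq)
    while pq:
        d, u, t = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], home[v] = nd, t
                heapq.heappush(pq, (nd, v, t))
    minor = {}                             # (t1, t2) -> edge weight
    for u in graph:
        for v, w in graph[u]:
            tu, tv = home[u], home[v]
            if tu != tv:
                key = (min(tu, tv), max(tu, tv))
                cand = dist[u] + w + dist[v]
                minor[key] = min(minor.get(key, float("inf")), cand)
    return minor
```

On a path a—x—b with weights 1 and 2 and terminals {a, b}, vertex x joins a's cell, and the contracted minor has a single edge (a, b) of weight 3, matching the true terminal distance.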
Given an undirected graph, a minimum cut cover is a collection of cuts covering the whole set of edges and having minimum cardinality. This paper is dedicated to the fractional version of this problem where a fractional weight is computed for each cut such that, for each edge, the sum of the weights of all cuts containing it is no less than 1, while the sum of all weights is minimized. The fractional cover is computed for different graph classes among which are the weakly bipartite graphs. Efficient algorithms are described to compute lower and upper bounds with worst-case performance guarantees. A general randomized approach is also presented giving new insights into Goemans and Williamson's algorithm for the maximum cut problem. Some numerical experiments are included to assess the quality of bounds. (C) 2019 Elsevier B.V. All rights reserved.
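The random-cut idea mentioned in the abstract can be illustrated with a toy upper-bound procedure: draw uniformly random bipartitions of the vertices, so that every edge is cut with probability 1/2 and O(log |E|) cuts suffice in expectation to cover all edges. This sketch is not the authors' algorithm, only an illustration of why random cuts cover edges quickly.

```python
import random

def random_cut_cover(n, edges, rng=random):
    """Toy randomized cut cover for a graph on vertices 0..n-1:
    repeatedly draw a uniform random bipartition (each vertex
    independently joins side 0 or 1) and keep the cut if it covers
    at least one still-uncovered edge.  Returns the list of cuts,
    each a list of sides indexed by vertex."""
    uncovered = set(edges)
    cuts = []
    while uncovered:
        side = [rng.randrange(2) for _ in range(n)]
        covered = {e for e in uncovered if side[e[0]] != side[e[1]]}
        if covered:
            cuts.append(side)
            uncovered -= covered
    return cuts

random.seed(1)
# Triangle graph: a single cut can separate at most two of the
# three edges, so any cut cover needs at least two cuts.
cuts = random_cut_cover(3, [(0, 1), (1, 2), (0, 2)])
```

The fractional version studied in the paper replaces this integral cover by an LP whose variables are cut weights, which the randomized view helps to analyze.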
We formulate and study a fundamental search and detection problem, Schedule Optimization, motivated by a variety of real-world applications, ranging from monitoring content changes on the web, social networks, and user activities to detecting failure on large systems with many individual machines. We consider a large system consisting of many nodes, where each node has its own rate of generating new events, or items. A monitoring application can probe a small number of nodes at each step, and our goal is to compute a probing schedule that minimizes the expected number of undiscovered items in the system, or equivalently, minimizes the expected time to discover a new item in the system. We study the Schedule Optimization problem both for deterministic and randomized memoryless algorithms. We provide lower bounds on the cost of an optimal schedule and construct close-to-optimal schedules with rigorous mathematical guarantees. (C) 2016 Elsevier B.V. All rights reserved.
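To get a feel for randomized memoryless schedules, suppose probing node i with probability p_i at each step: the expected time since node i was last probed is 1/p_i, so a natural surrogate cost is Σ_i rate_i / p_i, which Cauchy-Schwarz shows is minimized under Σ_i p_i = 1 by p_i ∝ √(rate_i). This square-root heuristic is a sketch consistent with the setting, not necessarily the paper's construction.

```python
import math

def memoryless_schedule(rates):
    """Probing probabilities for a randomized memoryless schedule
    that minimizes sum_i rate_i / p_i subject to sum_i p_i = 1:
    by Cauchy-Schwarz the optimum is p_i proportional to
    sqrt(rate_i).  `rates` are per-step item-generation rates."""
    weights = [math.sqrt(r) for r in rates]
    total = sum(weights)
    return [w / total for w in weights]

# Rates 1, 4, 9 give probing probabilities 1/6, 2/6, 3/6: the
# busiest node is probed more often, but only sqrt-proportionally.
probs = memoryless_schedule([1.0, 4.0, 9.0])
```

Note that probing proportionally to the raw rates would over-serve busy nodes; the square root balances discovery delay against probe budget.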
This letter presents a novel algorithm for content placement in the small base station (SBS) caches in a heterogeneous wireless network. The problem of maximizing the average cache-hit rate in a heterogeneous wireless network, for a given probability distribution of content popularity and a given network topology, under a cache-size constraint at each SBS, is formulated. The optimal cache placement problem turns out to be NP-hard, and hence a novel approximate solution is presented. Further, theoretical guarantees on the performance of the proposed algorithm are also provided. Finally, simulation results demonstrate that the proposed edge caching strategy performs better than the conventional greedy caching policy and the least recently used and least frequently used algorithms.
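For orientation, the conventional greedy baseline the letter compares against can be sketched under a simple coverage model: each user is covered by a set of SBSs, a request is a hit if the content is cached at any covering SBS, and greedy repeatedly places the (SBS, content) pair with the largest marginal gain in hit probability. The model and all names below are illustrative, not the letter's formulation.

```python
from itertools import product

def greedy_placement(coverage, popularity, cache_size):
    """Greedy cache placement baseline.  `coverage[u]` is the set
    of SBS indices covering user u, `popularity[c]` the request
    probability of content c, and each SBS caches at most
    `cache_size` items.  Returns the per-SBS cache contents."""
    n_sbs = 1 + max(s for cov in coverage for s in cov)
    caches = [set() for _ in range(n_sbs)]

    def hit_prob():
        # Average over users of the probability the request hits.
        return sum(
            popularity[c]
            for u in range(len(coverage))
            for c in popularity
            if any(c in caches[s] for s in coverage[u])
        ) / len(coverage)

    while True:
        base, best, best_gain = hit_prob(), None, 0.0
        for s, c in product(range(n_sbs), popularity):
            if len(caches[s]) >= cache_size or c in caches[s]:
                continue
            caches[s].add(c)               # tentatively place c at s
            gain = hit_prob() - base
            caches[s].remove(c)
            if gain > best_gain:
                best, best_gain = (s, c), gain
        if best is None:                   # no positive marginal gain
            break
        caches[best[0]].add(best[1])
    return caches
```

With two users each covered by their own SBS, unit caches, and popularities {a: 0.7, b: 0.3}, greedy caches the popular item at both SBSs, which is also optimal here (0.7 average hit rate versus 0.5 for a split placement).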
Modern software systems often consist of many different components, each with a number of options. Although unit tests may reveal faulty options for individual components, functionally correct components may interact in unforeseen ways to cause a fault. Covering arrays are used to test for interactions among components systematically. A two-stage framework, providing a number of concrete algorithms, is developed for the efficient construction of covering arrays. In the first stage, a time- and memory-efficient randomized algorithm covers most of the interactions. In the second stage, a more sophisticated search covers the remainder in relatively few tests. In this way, the storage limitations of the sophisticated search algorithms are avoided; hence, the range of the number of components for which the algorithm can be applied is extended, without increasing the number of tests. Many of the framework instantiations can be tuned to optimize a memory-quality trade-off, so that fewer tests can be achieved using more memory.
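The two-stage idea can be sketched for strength-2 (pairwise) covering arrays: stage 1 throws cheap random tests at the interaction set, and stage 2 patches each remaining uncovered pair explicitly. The parameters and the simple patch rule below are illustrative, not the paper's tuned instantiations.

```python
import random
from itertools import combinations, product

def two_stage_covering_array(k, v, stage1_tests, rng=random):
    """Toy two-stage builder for a pairwise covering array on k
    factors with v levels each.  Stage 1: random tests, keeping
    those that cover at least one new interaction.  Stage 2: for
    each still-uncovered pair, emit a test forcing that pair.
    Returns tests as length-k tuples over levels 0..v-1."""
    remaining = {(i, j, a, b)
                 for i, j in combinations(range(k), 2)
                 for a, b in product(range(v), repeat=2)}

    def covered_by(test):
        return {(i, j, test[i], test[j])
                for i, j in combinations(range(k), 2)}

    tests = []
    for _ in range(stage1_tests):          # stage 1: randomized bulk
        t = tuple(rng.randrange(v) for _ in range(k))
        new = covered_by(t) & remaining
        if new:
            tests.append(t)
            remaining -= new
    while remaining:                       # stage 2: targeted repair
        i, j, a, b = next(iter(remaining))
        t = [rng.randrange(v) for _ in range(k)]
        t[i], t[j] = a, b                  # force one missing pair
        t = tuple(t)
        tests.append(t)
        remaining -= covered_by(t)
    return tests
```

Because stage 2 only tracks the leftover interactions, it needs far less memory than running a sophisticated search over the full interaction set, which is the trade-off the framework exploits.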
Radio frequency identification (RFID) technology has rich applications in cyber-physical systems, such as warehouse management and supply chain control. Often in practice, tags are attached to objects belonging to different groups, which may be different product types/manufacturers in a warehouse or different book categories in a library. As RFID technology evolves from single-group to multiple-group systems, there arise several interesting problems. One of them is to identify the popular groups, whose numbers of tags are above a pre-defined threshold. Another is to estimate arbitrary moments of the group size distribution, such as sum, variance, and entropy for the sizes of all groups. In this paper, we consider a new problem which is to estimate all these statistical metrics simultaneously in a time-efficient manner without collecting any tag IDs. We solve this problem by a protocol named generic moment estimator (GME), which allows the tradeoff between estimation accuracy and time cost. According to the results of our theoretical analysis and simulation studies, this GME protocol is several times or even orders of magnitude more efficient than a baseline protocol that takes a random sample of tag groups to estimate each group size.
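The baseline protocol referenced at the end can be sketched as a plain sampling estimator: draw a uniform random sample of groups and scale up to estimate moments of the group-size distribution (here the sum and the variance). GME itself needs no tag IDs and no per-group sampling; this naive estimator is for orientation only, and the names are illustrative.

```python
import random

def sampled_moments(group_sizes, sample_size, rng=random):
    """Naive sampling baseline: estimate the total number of tags
    and the variance of group sizes from a uniform random sample
    of groups.  `group_sizes` is the (unknown in practice) list of
    per-group tag counts; only `sample_size` of them are observed."""
    sample = [rng.choice(group_sizes) for _ in range(sample_size)]
    mean = sum(sample) / sample_size
    est_sum = len(group_sizes) * mean      # scaled-up total
    est_var = sum((s - mean) ** 2 for s in sample) / sample_size
    return est_sum, est_var

# With identical group sizes the estimates are exact.
est_sum, est_var = sampled_moments([10] * 100, 20)
```

The weakness of this baseline, which GME avoids, is that its per-group sampling cost grows with the number of groups for a fixed accuracy target.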
We study two well-known planar visibility problems, namely visibility testing and visibility counting, in a model where there is uncertainty about the input data. The standard versions of these problems are defined as follows: we are given a set S of n segments in ℝ², and we would like to preprocess S so that we can quickly answer queries of the form: is a given query segment s ∈ S visible from a given query point q ∈ ℝ² (for visibility testing), and how many segments in S are visible from a given query point q ∈ ℝ² (for visibility counting)? In our model of uncertainty, each segment may or may not exist, and if it does, it is located in one of finitely many possible locations, given by a discrete probability distribution. In this setting, the probabilistic visibility testing problem (PVTP, for short) is to compute the probability that a given segment s ∈ S is visible from a given query point q, and the probabilistic visibility counting problem (PVCP, for short) is to compute the expected number of segments in S that are visible from a query point q. We first show that PVTP is #P-complete. In the special case where uncertainty is only about whether segments exist and not about their location, we show that PVTP is solvable in O(n log n) time. Our algorithm for PVTP, combined with linearity of expectation, gives an O(n² log n) time algorithm for PVCP. Using the algorithm for PVTP, together with a few old tricks, we show that one can preprocess S in O(n⁵ log n) time into a data structure of size O(n⁴), so that each PVTP query for a fixed segment s can be answered in O(log n) time. We also give a faster 2-approximation algorithm for this problem. At the end, we improve the approximation factor of the algorithm. (C) 2019 Elsevier B.V. All rights reserved.
We prove an inequality on decision trees on monotonic measures which generalizes the OSSS inequality on product spaces. As an application, we use this inequality to prove a number of new results on lattice spin models and their random-cluster representations. More precisely, we prove that:
- For the Potts model on transitive graphs, correlations decay exponentially fast for β < β_c.
- For the random-cluster model with cluster weight q ≥ 1 on transitive graphs, correlations decay exponentially fast in the subcritical regime, and the cluster density satisfies the mean-field lower bound in the supercritical regime.
- For random-cluster models with cluster weight q ≥ 1 on planar quasi-transitive graphs G, p_c(G) p_c(G*) / ((1 − p_c(G))(1 − p_c(G*))) = q. As a special case, we obtain the value of the critical point for the square, triangular and hexagonal lattices. (This provides a short proof of a result of Beffara and the first author dating from 2012.)
These results have many applications for the understanding of the subcritical (respectively disordered) phase of all these models. The techniques developed in this paper have the potential to be extended to a wide class of models, including the Ashkin-Teller model, continuum percolation models such as Voronoi percolation and Boolean percolation, super-level sets of the massive Gaussian free field, and the random-cluster and Potts models with infinite-range interactions.
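For orientation, the product-space inequality being generalized is the OSSS inequality of O'Donnell, Saks, Schramm, and Servedio: for a Boolean function f on a product probability space over n coordinates and any decision tree T computing f,

```latex
\operatorname{Var}(f) \;\le\; \sum_{i=1}^{n} \delta_i(T)\,\operatorname{Inf}_i(f),
```

where δ_i(T) is the revealment of coordinate i (the probability that T queries it) and Inf_i(f) is its influence (the probability that resampling coordinate i changes the value of f). The paper's contribution is extending this inequality from product measures to monotonic measures, which is what makes it applicable to the random-cluster model.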