Reachability routing is a newly emerging paradigm in networking, where the goal is to determine all paths between a sender and a receiver. It is becoming relevant with the changing dynamics of the Internet and the emergence of low-bandwidth wireless/ad hoc networks. This thesis presents the case for reinforcement learning (RL) as the framework of choice to realize reachability routing, within the confines of the current Internet backbone infrastructure. The setting of the reinforcement learning problem offers several advantages, including loop resolution, multi-path forwarding capability, cost-sensitive routing, and minimizing state overhead, while maintaining the incremental spirit of the current backbone routing algorithms. We present the design and implementation of a new reachability algorithm that uses a model-based approach to achieve cost-sensitive multi-path forwarding. Performance assessment of the algorithm in various troublesome topologies shows consistently superior performance over classical reinforcement learning algorithms. Evaluations of the algorithm based on different criteria on many types of randomly generated networks as well as realistic topologies are presented.
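The abstract above describes cost-sensitive multi-path forwarding learned via reinforcement learning. A minimal sketch of the classical Q-routing-style update it builds on (not the thesis's model-based algorithm; the topology, costs, and learning rate below are illustrative) might look like:

```python
import math

# Each node keeps Q[neighbor] = estimated cost to reach a destination via
# that neighbor. Forwarding probabilities favour low-cost neighbors, which
# gives cost-sensitive multi-path forwarding instead of a single best path.
Q = {"B": 5.0, "C": 8.0}   # node A's cost estimates via neighbors B and C
alpha = 0.5                 # learning rate (illustrative value)

def forward_prob(q):
    """Boltzmann-style forwarding: cheaper neighbors get higher probability."""
    w = {n: math.exp(-c) for n, c in q.items()}
    z = sum(w.values())
    return {n: w[n] / z for n in w}

def update(neighbor, link_cost, neighbor_estimate):
    """Q-routing update: blend the old estimate with the newly observed cost."""
    Q[neighbor] += alpha * (link_cost + neighbor_estimate - Q[neighbor])

# Neighbor C reports a cheaper downstream path, so its estimate drops:
update("C", 1.0, 4.0)   # Q["C"] moves from 8.0 toward 1.0 + 4.0
```

Because traffic is spread over all neighbors in proportion to their estimated cost, loops and dead paths are discouraged without maintaining per-path state.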
This paper presents the SR-GCWS-CS probabilistic algorithm that combines Monte Carlo simulation with splitting techniques and the Clarke and Wright savings heuristic to find competitive quasi-optimal solutions to the Capacitated Vehicle Routing Problem (CVRP) in reasonable response times. The algorithm, which does not require complex fine-tuning processes, can be used as an alternative to other metaheuristics such as Simulated Annealing, Tabu Search, Genetic algorithms, Ant Colony Optimization or GRASP, which might be more difficult to implement and which might require non-trivial fine-tuning processes when solving CVRP instances. As discussed in the paper, the probabilistic approach presented here aims to provide a relatively simple and yet flexible algorithm which benefits from: (a) the use of the geometric distribution to guide the random search process, and (b) efficient cache and splitting techniques that help significantly reduce computational times. The algorithm is validated through a set of CVRP standard benchmarks and competitive results are obtained in all tested cases. Future work regarding the use of parallel programming to efficiently solve large-scale CVRP instances is discussed. Finally, it is important to note that some of the principles of the approach presented here might serve as a basis for developing similar algorithms for other routing and scheduling combinatorial problems. Journal of the Operational Research Society (2011) 62, 1085-1097. doi: 10.1057/jors.2010.29 Published online 19 May 2010
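The core randomization device the abstract mentions, a geometric distribution guiding selection from the sorted Clarke and Wright savings list, can be sketched as follows (the parameter p and the savings values are assumed for illustration, not taken from the paper):

```python
import random

def biased_pick(savings, p=0.3, rng=random.random):
    """Pick one saving from the list with a geometric bias toward the best.

    The savings are sorted in descending order; at each step we stop at the
    current position with probability p, otherwise we move down the list.
    Better savings are therefore chosen more often, but worse ones still get
    a chance, which randomizes the greedy Clarke & Wright construction.
    """
    savings = sorted(savings, reverse=True)
    k = 0
    while rng() > p and k < len(savings) - 1:
        k += 1
    return savings[k]

savings = [12.0, 9.5, 7.2, 3.1]   # hypothetical edge savings s(i,j)
choice = biased_pick(savings)      # usually 12.0, sometimes a lower saving
```

Repeating the whole savings-based construction many times with this biased sampling, and caching the best routes found, yields the multi-start behaviour the paper describes.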
This article addresses the problem of self-tuning the data placement in replicated key-value stores. The goal is to automatically optimize replica placement in a way that leverages locality patterns in data accesses, such that internode communication is minimized. To do this efficiently is extremely challenging, as one needs not only to find lightweight and scalable ways to identify the right assignment of data replicas to nodes but also to preserve fast data lookup. The article introduces new techniques that address these challenges. The first challenge is addressed by optimizing, in a decentralized way, the placement of the objects generating the largest number of remote operations for each node. The second challenge is addressed by combining the usage of consistent hashing with a novel data structure, which provides efficient probabilistic data placement. These techniques have been integrated in a popular open-source key-value store. The performance results show that the throughput of the optimized system can be six times better than a baseline system employing the widely used static placement based on consistent hashing.
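Consistent hashing, the baseline placement scheme this article starts from, can be sketched in a few lines (the node names, virtual-node count, and hash choice below are illustrative, not the article's implementation):

```python
import hashlib
from bisect import bisect

def h(s):
    """Stable hash of a string onto a large integer ring."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    """Minimal consistent-hashing ring: a key is owned by the first node
    clockwise from the key's hash, so adding or removing a node only
    remaps a small fraction of the keys."""

    def __init__(self, nodes, vnodes=64):
        # Virtual nodes smooth out the load distribution across nodes.
        self.points = sorted((h(f"{n}#{i}"), n)
                             for n in nodes for i in range(vnodes))

    def lookup(self, key):
        hashes = [p for p, _ in self.points]
        i = bisect(hashes, h(key)) % len(self.points)
        return self.points[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
owner = ring.lookup("user:42")   # deterministic owner for this key
```

The article's contribution is to deviate from this static mapping for the hot objects that cause the most remote operations, while a compact probabilistic data structure preserves fast lookup of the exceptions.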
The IEEE 1394 Root Contention Protocol is an industrial leader election algorithm for two processes in which probability, real time and parameters play an important role. This protocol has been analysed in various case studies, using a variety of verification and analysis methods. In this paper, we survey and compare several of these case studies.
This paper analyses how the symmetry of a processor network influences the existence of a solution for the network orientation problem. The orientation of hypercubes and tori is the problem of assigning labels to each link of each processor, in such a way that a sense of direction is given to the network. In this paper the problem of network orientation for these two topologies is studied under the assumption that the network contains a single leader, under the assumption that the processors possess unique identities, and under the assumption that the network is anonymous. The distinction between these three models is considered fundamental in distributed computing. It is shown that orientations can be computed by deterministic algorithms only when either a leader or unique identities are available. Orientations can be computed for anonymous networks by randomized algorithms, but only when the number of processors is known. When the number of processors is not known, even randomized algorithms cannot compute orientations.
We present two probabilistic leader election algorithms for anonymous unidirectional rings with FIFO channels, based on an algorithm from Itai and Rodeh [14]. In contrast to the Itai-Rodeh algorithm, our algorithms are finite-state. So they can be analyzed using explicit state space exploration; we used the probabilistic model checker PRISM to verify, for rings up to size four, that eventually a unique leader is elected with probability one.
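The round structure of Itai-Rodeh-style election can be illustrated with a toy synchronous simulation (this abstracts away the ring's message passing and FIFO channels; the identity range k and seed are assumed values):

```python
import random

def elect(n, k=4, rng=random.Random(1)):
    """Simulate rounds of anonymous leader election among n processes.

    Each active process draws a random identity from {1..k}; only the
    processes holding the unique maximum stay active. If several draw the
    same maximum, they all proceed to another round, so a unique leader
    emerges with probability one. Returns the number of rounds used.
    """
    active, rounds = n, 0
    while active > 1:
        ids = [rng.randint(1, k) for _ in range(active)]
        m = max(ids)
        active = ids.count(m)   # only maximum-holders remain active
        rounds += 1
    return rounds

rounds = elect(5)   # terminates with a single leader almost surely
```

The paper's point is that making such an algorithm finite-state allows tools like PRISM to check the "eventually a unique leader with probability one" property exhaustively rather than by simulation.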
Modern hyperspectral images, especially those acquired in remote sensing and from on-field measurements, can easily contain from hundreds of thousands to several millions of pixels. This often leads to quite long computation times when, e.g., the images are decomposed by Principal Component Analysis (PCA) or similar algorithms. In this paper, we show how randomization can tackle this problem. The main idea is described in detail by Halko et al (2011) and can be used for speeding up most low-rank matrix decomposition methods. The paper explains this approach using visual interpretation of its main steps and shows how the use of randomness influences the speed and accuracy of PCA decomposition of hyperspectral images.
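The randomized scheme of Halko et al can be sketched compactly: project the centered data onto a small random subspace, orthonormalize, and run the exact decomposition on the much smaller projected matrix. The matrix sizes, rank, and oversampling below are illustrative (real hyperspectral cubes have millions of pixels):

```python
import numpy as np

def randomized_pca(X, k, oversample=10, seed=0):
    """Approximate the top-k principal components via random projection."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                      # center, as in ordinary PCA
    G = rng.standard_normal((Xc.shape[1], k + oversample))
    Q, _ = np.linalg.qr(Xc @ G)                  # orthonormal basis for the range
    B = Q.T @ Xc                                 # small (k+oversample) x d matrix
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    # components and their explained variances (from squared singular values)
    return Vt[:k], (s[:k] ** 2) / (Xc.shape[0] - 1)

X = np.random.default_rng(1).standard_normal((500, 50))  # toy "image" matrix
comps, variances = randomized_pca(X, k=5)
```

The expensive SVD now runs on a matrix with only k + oversample rows instead of one row per pixel, which is where the speed-up over classical PCA comes from.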
Examines the framework for research in the theory of complexity of computations. Emphasis on the interrelation between seemingly diverse problems and methods; Evaluation of an algebraic expression; Cost functions of algorithms.
Certain types of routing, scheduling, and resource-allocation problems in a distributed setting can be modeled as edge-coloring problems. We present fast and simple randomized algorithms for edge coloring a graph in the synchronous distributed point-to-point model of computation. Our algorithms compute an edge coloring of a graph G with n nodes and maximum degree Delta with at most 1.6 Delta + O(log(1+delta) n) colors with high probability (arbitrarily close to 1) for any fixed delta > 0; they run in polylogarithmic time. The upper bound on the number of colors improves upon the (2 Delta - 1)-coloring achievable by a simple reduction to vertex coloring. To analyze the performance of our algorithms, we introduce new techniques for proving upper bounds on the tail probabilities of certain random variables. The Chernoff-Hoeffding bounds are fundamental tools that are used very frequently in estimating tail probabilities. However, they assume stochastic independence among certain random variables, which may not always hold. Our results extend the Chernoff-Hoeffding bounds to certain types of random variables which are not stochastically independent. We believe that these results are of independent interest and merit further study.
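A toy centralized version of the randomized, round-based idea behind such algorithms can be sketched as follows. Note this simple variant only uses the easy (2*Delta - 1)-color palette that the abstract's algorithm improves on, and it is sequentially simulated rather than distributed:

```python
import random
from collections import Counter

def adjacent(e, f):
    """Two distinct edges are adjacent if they share an endpoint."""
    return e != f and bool(set(e) & set(f))

def edge_color(edges, rng=random.Random(0)):
    delta = max(Counter(v for e in edges for v in e).values())
    palette = list(range(2 * delta - 1))   # 2*Delta - 1 colors always suffice
    color = {}
    while len(color) < len(edges):
        # Round: every uncolored edge tentatively picks a color that is
        # legal with respect to already-settled adjacent edges.
        picks = {}
        for e in edges:
            if e not in color:
                legal = [c for c in palette
                         if all(color.get(f) != c for f in edges if adjacent(e, f))]
                picks[e] = rng.choice(legal)
        # A pick sticks only if no adjacent edge chose the same color.
        for e, c in picks.items():
            if all(picks.get(f) != c for f in edges if adjacent(e, f)):
                color[e] = c
    return color

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
coloring = edge_color(K4)   # proper edge coloring of the complete graph K4
```

Each round is a constant number of message exchanges per edge in the distributed model, and the analysis of how many edges settle per round is exactly where tail bounds for non-independent random variables become necessary.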
In real-world tasks, obtaining a large set of noise-free data can be prohibitively expensive. Therefore, recent research tries to enable machine learning to work with weakly supervised datasets, such as inaccurate or incomplete data. However, the previous literature treats each type of weak supervision individually, although, in most cases, different types of weak supervision tend to occur simultaneously. Therefore, in this article, we present Smart MEnDR, a Classification Model that applies Ensemble Learning and Data-driven Rectification to deal with inaccurate and incomplete supervised datasets. The model first applies a preliminary phase of ensemble learning in which the noisy data points are detected while exploiting the unlabelled data. The phase employs a semi-supervised technique with maximum likelihood estimation to decide on the disagreement rate. Second, the proposed approach applies an iterative meta-learning step to tackle the problem of knowing which points should be made correct to improve the performance of the final classifier. To evaluate the proposed framework, we report the classification performance, noise detection, and the labelling accuracy of the proposed method against state-of-the-art techniques. The experimental results demonstrate the effectiveness of the proposed framework in detecting noise, providing correct labels, and attaining high classification performance.
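The ensemble-disagreement idea the abstract relies on for noise detection can be illustrated with a minimal sketch: a given label is flagged as potentially noisy when too large a fraction of ensemble members disagree with it. The predictions and threshold here are hypothetical stand-ins, not the paper's trained classifiers or its MLE-derived disagreement rate:

```python
def flag_noisy(labels, ensemble_preds, threshold=0.5):
    """Flag each labeled point whose disagreement rate exceeds the threshold.

    labels: the (possibly noisy) given label per point.
    ensemble_preds: one prediction list per ensemble member.
    Returns a boolean flag per point: True = suspected label noise.
    """
    flags = []
    for i, y in enumerate(labels):
        disagree = sum(1 for preds in ensemble_preds if preds[i] != y)
        flags.append(disagree / len(ensemble_preds) > threshold)
    return flags

labels = [0, 1, 0]
ensemble_preds = [[0, 0, 0],   # member 1's predictions per point
                  [0, 0, 0],   # member 2
                  [0, 1, 0]]   # member 3
flags = flag_noisy(labels, ensemble_preds)   # point 1 is flagged
```

Flagged points are then candidates for the rectification step, where the model decides which labels to correct rather than simply discarding the points.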