“Time-dependent” semantics are given for the programming language occam. Using such semantics, behavioural analysis of distributed real-time computations written in occam can be carried out. The weakest pre-condition, which reflects the ‘latest point’ in ‘time’ at which the computation may start such that proper termination is guaranteed within a pre-specified time TG, may then be derived.
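To make the "latest start point" idea concrete, here is a minimal sketch, assuming (as an illustration only, not occam's actual timing rules) that each statement in a sequential composition has a known worst-case duration; the weakest time pre-condition for a deadline TG is then TG minus the total duration.

```python
# Hypothetical sketch: "latest start time" for a sequential composition,
# assuming each statement has a known worst-case duration. The names and
# the additive timing model are illustrative assumptions, not occam's.

def latest_start(durations, t_guarantee):
    """Latest time the computation may begin so that proper termination
    is still guaranteed by t_guarantee (a time-flavoured pre-condition)."""
    return t_guarantee - sum(durations)

# Three statements taking 2, 3 and 5 time units, deadline at t = 20.
print(latest_start([2, 3, 5], 20))  # -> 10
```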
This paper presents a distributed computational framework for stochastic convex optimization problems using the so-called scenario approach. Such a problem arises, for example, in a large-scale network of interconnected linear systems with local and common uncertainties. Due to the large number of scenarios required to approximate the stochasticity of these problems, the stochastic optimization involves formulating a large-scale scenario program, which is in general computationally demanding. We present two novel ideas in this paper to address this issue. We first develop a technique to decompose the large-scale scenario program into distributed scenario programs that exchange a certain number of scenarios with each other to compute local decisions using the alternating direction method of multipliers (ADMM). We show the exactness of the decomposition with a priori probabilistic guarantees for the desired level of constraint fulfillment for both local and common uncertainty sources. As our second contribution, we develop a so-called soft communication scheme based on a set parametrization technique together with the notion of probabilistically reliable sets to reduce the required communication between the subproblems. We show how to incorporate the probabilistic reliability notion into existing results and provide new guarantees for the desired level of constraint violation. Two simulation studies of two types of interconnected networks, namely networks with dynamic coupling and networks with coupling constraints, are presented to illustrate the advantages of the proposed distributed framework.
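The coordination pattern behind ADMM-based decomposition can be sketched on a toy consensus problem (this is a generic textbook ADMM, not the paper's scenario program): each agent holds a local quadratic cost and the agents iteratively agree on a common decision.

```python
import numpy as np

# Toy consensus ADMM sketch (illustrative; the costs, rho and iteration
# count are assumptions, not the paper's scenario formulation).
# Agent i minimises (x - a_i)^2; the consensus optimum is mean(a).
def consensus_admm(a, rho=1.0, iters=100):
    n = len(a)
    x = np.zeros(n)   # local decisions
    u = np.zeros(n)   # scaled dual variables
    z = 0.0           # common (consensus) decision
    for _ in range(iters):
        x = (2 * a + rho * (z - u)) / (2 + rho)  # local minimisation step
        z = np.mean(x + u)                       # coordination step
        u = u + x - z                            # dual update
    return z

a = np.array([1.0, 2.0, 6.0])
print(consensus_admm(a))  # converges toward mean(a) = 3.0
```

Each agent only needs the shared iterate `z`, which is the property the paper's scenario-exchange scheme exploits at much larger scale.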
With the rapid growth of rich-media content over the Internet, content and service providers (SPs) are increasingly facing the problem of managing their service resources cost-effectively while ensuring a high quality of service (QoS) delivery at the same time. In this research we conceptualize and model an Internet-based storage provisioning network for rich-media content delivery. This is modeled as a capacity provision network (CPN), where participants possess service infrastructures and leverage their topographies to effectively serve specific customer segments. A CPN is a network of SPs coordinated through an allocation hub. We first develop the notion of discounted QoS capabilities of storage resources. We then investigate the stability of the discount factors over time and the network topography using a test-bed on the Internet through a longitudinal empirical study. Finally, we develop a market-maker mechanism for optimal multilateral allocation and surplus sharing in a network. The proposed CPN is closely tied to two fundamental properties of Internet service technology: positive network externality among cooperating SPs, and the effective multiplication of capacity allocation among several distributed service sites. We show that there exist significant incentives for SPs to engage in cooperative allocation and surplus sharing. We further demonstrate that intermediation can enhance allocation effectiveness and that the opportunity for allocation and surplus sharing can play an important role in infrastructure planning. In conclusion, this study demonstrates the practical business viability of a cooperative CPN market.
An important problem in distributed programming is identifying the termination of a distributed computation in which a main computation process signals other processes to become active. Once all processes have become idle, the computation has terminated. An effective detection procedure requires that: (1) processes be modified independently of their definitions; (2) the procedure not delay the main computation; and (3) no new communication channels be established between processes. A termination detection algorithm for distributed computations satisfying these requirements is presented. With the algorithm, signals are generated throughout the spanning tree, initially inward from the leaves to the root; then, if termination is not detected, signals are generated outward from the root to the leaves. This process is repeated until termination is detected.
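The inward/outward wave structure can be sketched as follows. This is a deliberately simplified model (no in-flight messages, statically known idle flags), so it illustrates the wave pattern rather than the paper's exact protocol.

```python
# Simplified spanning-tree wave sketch (an assumption-laden illustration,
# not the paper's full algorithm): the root gathers idleness reports
# inward from the leaves; if some process was active, signals go back
# outward and a new wave is started.

def wave(tree, root, idle):
    """One inward wave: True iff every process in root's subtree is idle."""
    return idle[root] and all(wave(tree, c, idle) for c in tree.get(root, []))

def detect_termination(tree, root, idle, max_waves=10):
    for n in range(1, max_waves + 1):
        if wave(tree, root, idle):
            return n  # number of waves until termination was observed
        # In the real protocol, outward signals would restart the wave
        # while processes may still become idle; here idle is static.
    return None

tree = {0: [1, 2], 1: [3, 4]}  # root 0, internal node 1, leaves 2, 3, 4
idle = {0: True, 1: True, 2: True, 3: True, 4: True}
print(detect_termination(tree, 0, idle))  # -> 1
```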
The paper shows that characterizing the causal relationship between significant events is an important but non-trivial aspect of understanding the behavior of distributed programs. An introduction to the notion of causality and its relation to logical time is given; some fundamental results concerning the characterization of causality are presented. Recent work on the detection of causal relationships in distributed computations is surveyed. The issue of observing distributed computations in a causally consistent way and the basic problems of detecting global predicates are discussed. To illustrate the major difficulties, some typical monitoring and debugging approaches are assessed, and it is demonstrated how their feasibility is severely limited by the fundamental problem of mastering the complexity of causal relationships.
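The standard device for tracking the causal (happened-before) relation discussed here is the vector clock; a minimal sketch:

```python
# Minimal vector-clock sketch of the happened-before relation
# (the standard construction, shown here generically rather than as
# any one surveyed paper's notation).

def local_event(vc, i):
    """Process i performs an internal or send event: tick own component."""
    vc = list(vc)
    vc[i] += 1
    return vc

def on_receive(vc_local, vc_msg, i):
    """Receive at process i: component-wise max with the message's clock,
    then tick own component."""
    vc = [max(a, b) for a, b in zip(vc_local, vc_msg)]
    vc[i] += 1
    return vc

def happened_before(a, b):
    """a -> b iff a <= b component-wise and a != b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

v0 = local_event([0, 0], 0)       # p0 sends; clock [1, 0]
v1 = on_receive([0, 0], v0, 1)    # p1 receives; clock [1, 1]
print(happened_before(v0, v1))    # -> True
print(happened_before(v1, v0))    # -> False
```

Two events whose clocks are incomparable under `happened_before` in both directions are concurrent, which is exactly the distinction that makes consistent observation and global-predicate detection hard.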
Data assimilation is a core component of numerical weather prediction systems. The large quantity of data processed during assimilation requires the computation to be distributed across increasingly many compute nodes...
ISBN: (Print) 9783642314995; 9783642315008
The paper presents a model describing the dynamics of the background load for the computers in the network. For that model, two task allocation problems in the network are formulated: a stochastic control problem based on the background load model, and a stochastic control problem based on Markov decision process theory. Open-loop and closed-loop control algorithms, based on the stochastic forecast and on Markov decision process theory respectively, are presented.
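A generic way to solve the MDP side of such an allocation problem is value iteration; the sketch below uses an invented two-state, two-action toy (lightly/heavily loaded machine, assign-here/assign-elsewhere), not the paper's actual background-load model.

```python
import numpy as np

# Generic value iteration sketch; the MDP below is an illustrative
# assumption (states = machine load levels, actions = where to place
# a task), not the paper's model.
def value_iteration(P, R, gamma=0.9, eps=1e-8):
    """P[a][s][s2]: transition probabilities; R[a][s]: immediate reward.
    Returns the optimal value function and a greedy policy per state."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * P @ V          # Q[a][s] for every action/state
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < eps:
            return V_new, Q.argmax(axis=0)
        V = V_new

# States: 0 = lightly loaded, 1 = heavily loaded.
P = np.array([[[0.9, 0.1], [0.4, 0.6]],   # action 0: assign task here
              [[0.8, 0.2], [0.1, 0.9]]])  # action 1: assign elsewhere
R = np.array([[1.0, -1.0],
              [0.2, 0.3]])
V, policy = value_iteration(P, R)
print(policy)  # -> [0 1]: assign locally when light, offload when heavy
```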
ISBN: (Print) 0769519407
The past few years have seen the development of distributed computing platforms designed to utilize the spare processor cycles of a large number of personal computers attached to the Internet in an effort to generate levels of computing power normally achieved only with expensive supercomputers. Such large-scale distributed computations running in untrusted environments raise a number of security concerns, including the potential for intentional or unintentional corruption of computations, and for participants to claim credit for computing that has not been completed. This paper presents two strategies for hardening selected applications that utilize such distributed computations. Specifically, we show that carefully seeding certain tasks with precomputed data can significantly increase resistance to cheating (claiming credit for work not computed) and incorrect results. Similar results are obtained for sequential tasks through a strategy of sharing the computation of N tasks among K &gt; N nodes. In each case, the associated cost is significantly less than the cost of assigning tasks redundantly.
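The seeding idea can be sketched as follows: the server mixes a few tasks whose answers it precomputed ("ringers", an illustrative name; the batching details here are assumptions, not the paper's scheme) into each batch, and checks the participant's answers on exactly those tasks.

```python
import random

# Sketch of seeding tasks with precomputed results. Task names and
# batch sizes are invented for illustration.

def make_batch(tasks, ringers, k=2, rng=random):
    """Mix k precomputed-answer tasks into a batch, then shuffle so the
    participant cannot tell which tasks are being checked."""
    batch = list(tasks) + rng.sample(list(ringers), k)
    rng.shuffle(batch)
    return batch

def verify(results, ringers):
    """results: {task: claimed_answer}. A participant passes only if
    every seeded task it answered matches the precomputed answer."""
    return all(results[t] == ans for t, ans in ringers.items() if t in results)

ringers = {"r1": 42, "r2": 7}          # answers computed in advance
honest = {"t1": 5, "r1": 42, "r2": 7}
cheater = {"t1": 5, "r1": 0, "r2": 7}  # faked one seeded result
print(verify(honest, ringers), verify(cheater, ringers))  # -> True False
```

Because the seeded tasks are indistinguishable from real ones, a participant that fabricates results is caught with probability growing in the fraction of seeded tasks.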
ISBN: (Print) 9780780394858
Volunteer distributed computations utilize spare processor cycles of personal computers that are connected to the Internet. The related computation integrity concerns are commonly addressed by assigning tasks redundantly. Aside from the additional computational costs, a significant disadvantage of redundancy is its vulnerability to colluding adversaries. This paper presents a tunable redundancy-based task distribution strategy that increases resistance to collusion while significantly decreasing the associated computational costs. Specifically, our strategy guarantees a desired cheating detection probability regardless of the number of copies of a specific task controlled by the adversary. Though not the first distribution scheme with these properties, the proposed method improves upon existing strategies in that it requires fewer computational resources. More importantly, the strategy provides a practical lower bound for the number of redundantly assigned tasks required to achieve a given detection probability.
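The trade-off between redundancy and detection probability can be made concrete with a back-of-the-envelope bound (our illustration, not the paper's exact formula): if each redundantly assigned copy independently exposes cheating with probability p, then k copies detect it with probability 1 - (1 - p)^k, which can be inverted to size the redundancy for a target detection probability.

```python
import math

# Illustrative redundancy sizing under an independence assumption
# (the paper's collusion-resistant bound is more refined than this).

def detection_probability(p, k):
    """Probability that at least one of k independent copies catches
    cheating, if each copy does so with probability p."""
    return 1 - (1 - p) ** k

def copies_needed(p, target):
    """Smallest k with detection_probability(p, k) >= target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

# To reach 99% detection when each copy catches cheating half the time:
print(copies_needed(0.5, 0.99))  # -> 7
```

The point of the paper is precisely that naive independence fails under collusion, which is why its lower bound on redundantly assigned tasks is stated regardless of how many copies the adversary controls.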
ISBN: (Print) 9781509032310
Computational Intelligence methods are widely used in a diverse range of scientific and industrial applications, including economic modeling, finance, networks and transportation, database design, design and control, scheduling, and others. On the other hand, Collective Intelligence methods aim at harvesting the power of people's mobile devices for gathering and sending useful information about the values and variations of environmental variables in a target area or the macroscopic behavior of a target population. In this paper we propose a computation paradigm which combines computational intelligence, collective intelligence, and privacy-respecting techniques, towards the creation of a flexible, low-cost, secure, massively parallel virtual computing platform based on people's own devices. Examples are presented of how such an infrastructure can serve as the platform for the deployment of innovative, computationally demanding applications and services that require peer-to-peer connections and collaboration among users' devices. The privacy-respecting techniques enhance people's trust in providing their data through their personal devices.