Current distributions in networks of overhead and buried conductors, energized by current injections at arbitrary locations, are computed by two approaches. The first approach solves electric field point-matching equations in a weighted least-squares formulation with linear constraints on the currents. The second approach, specialized to the low-frequency range and lossy media, employs a power minimization algorithm. Both approaches lead to geometrically invariant, linearly constrained, quadratic minimization problems. At low frequencies, induced loop currents are determined by explicitly imposing Faraday's law as a linear constraint. Computation results compare well with measurements and with results of other algorithms. At high frequencies, the computation results match those published in the antenna literature. At low frequencies, they are essentially identical to published measurements and to computations based on grounding and quasi-static techniques.
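Both formulations reduce to the same numerical core: minimize a quadratic (weighted least-squares) objective subject to linear equality constraints on the conductor currents. The following is a minimal sketch of that core only, assuming placeholder matrices A, W, C and vectors b, d rather than the paper's actual field-matching or Faraday-constraint matrices.

import numpy as np

def constrained_least_squares(A, b, C, d, w=None):
    """Minimize (A x - b)^T W (A x - b) subject to C x = d via one KKT system.

    A : (m, n) design matrix (e.g. field point-matching equations)
    b : (m,)   right-hand side
    C : (k, n) constraint matrix (e.g. Kirchhoff / Faraday loop constraints)
    d : (k,)   constraint values
    w : (m,)   optional per-equation weights
    For complex phasor currents, the plain transposes below should become
    conjugate transposes (.conj().T).
    """
    W = np.diag(w) if w is not None else np.eye(A.shape[0])
    H = A.T @ W @ A                      # quadratic term of the objective
    g = A.T @ W @ b                      # linear term of the objective
    k, n = C.shape
    # KKT system:  [H  C^T] [x     ]   [g]
    #              [C   0 ] [lambda] = [d]
    kkt = np.block([[H, C.T], [C, np.zeros((k, k))]])
    rhs = np.concatenate([g, d])
    return np.linalg.solve(kkt, rhs)[:n]  # constrained current vector x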
We present a novel way of considering in-network computing (INC), using ideas from statistical mechanics. We model the execution of a distributed computation with graphs called functional topologies, which allows us to provide a formal definition for degeneracy and redundancy in the context of INC. Degeneracy for INC is defined as the structural multiplicity of possible options available within the network to perform the same function with a given macroscopic property (e.g., delay). Two degenerate structures can partially overlap. Redundancy, on the other hand, does not allow overlapping between the functional graphs. We present an efficient algorithm to determine all these multiple options and compute both degeneracy and redundancy. Our results show that by exploiting the set of possible degenerate alternatives, we can significantly improve the successful computation rate of a symmetric function, while still being able to satisfy requirements such as delay or energy consumption.
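As an illustration of the distinction, the sketch below enumerates the delay-feasible simple paths between a source and a compute sink of a small graph: degeneracy counts all of them (overlaps allowed), while redundancy counts only a largest pairwise edge-disjoint subset. The graph, its "delay" edge attribute, and the path-based notion of an option are assumptions for illustration, not the paper's formal functional-topology definitions.

import itertools
import networkx as nx

def feasible_options(G, src, dst, max_delay):
    """All simple src->dst paths whose summed edge 'delay' meets the bound."""
    options = []
    for path in nx.all_simple_paths(G, src, dst):
        delay = sum(G[u][v]["delay"] for u, v in zip(path, path[1:]))
        if delay <= max_delay:
            options.append(path)
    return options

def degeneracy(options):
    """Degenerate options may overlap, so simply count all of them."""
    return len(options)

def redundancy(options):
    """Redundant options must not overlap: size of the largest set of
    pairwise edge-disjoint options (brute force, fine for small graphs)."""
    edge_sets = [{frozenset(e) for e in zip(p, p[1:])} for p in options]
    for r in range(len(options), 0, -1):
        for combo in itertools.combinations(range(len(options)), r):
            if all(edge_sets[i].isdisjoint(edge_sets[j])
                   for i, j in itertools.combinations(combo, 2)):
                return r
    return 0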
In studies of sequential detection of radar signals, the parameter of primary interest is the length of the sequential test, denoted by n. Since this test length is a random variable, moments and/or probability distribution functions of n are desirable. A procedure is described in this communication for obtaining exact probability distribution functions P(n) and exact average values of n, E(n), when the input to the sequential processor is discrete radar data (radar data in quantized form). This procedure is based upon the representation of the sequential test as a Markov process. The results are quite general in that they apply to multilevel quantization of the data. However, the procedure appears especially attractive when the number of levels is small, as is usually the case when dealing with discrete radar data. The procedure for determining exact distribution functions and average values of n presented herein is compared with the Wald-Girshick approach for obtaining P(n) and E(n), and the superiority of the former approach in computational convenience is indicated.
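The idea behind the Markov representation can be sketched as follows: with quantized data the test statistic takes finitely many values, the accept/reject decisions become absorbing states, and P(n) is the probability mass newly absorbed at step n. The transition matrix T, the starting state, and the truncation point below are placeholders, not the radar-specific quantities of the communication.

import numpy as np

def stopping_time_distribution(T, start, absorbing, n_max=500):
    """P(n) (exact up to the truncation n_max) and E(n) for a sequential
    test represented as a finite Markov chain over quantized statistic values.

    T         : (S, S) one-step transition matrix (rows sum to 1),
                with the accept/reject states made absorbing
    start     : index of the initial state of the test statistic
    absorbing : indices at which the test terminates
    """
    p = np.zeros(T.shape[0])
    p[start] = 1.0
    absorbing = list(absorbing)
    P_n, absorbed_so_far = [], 0.0
    for _ in range(n_max):
        p = p @ T
        cum = p[absorbing].sum()
        P_n.append(cum - absorbed_so_far)    # mass newly absorbed at this step
        absorbed_so_far = cum
    P_n = np.array(P_n)
    E_n = np.sum(np.arange(1, n_max + 1) * P_n)
    return P_n, E_n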
Cloud-computing technologies and their applications are becoming increasingly popular, improving the working efficiency of both enterprises and individuals while greatly reducing users' costs, and the scale of cloud platforms and their applications is expanding rapidly. Yet it remains challenging to utilize resources effectively while guaranteeing quality of service to users, and the quality of the cloud task-scheduling algorithm plays a key role in this. On one hand, traditional rule-based scheduling algorithms such as FCFS and priority-based scheduling focus on the algorithm itself rather than on the characteristics of the Virtual Machines (VMs) and tasks, which ultimately leads to poor performance. On the other hand, one can carefully select a set of features from sample data and employ machine-learning algorithms to train a scheduling policy. This approach has several deficiencies: the quality of the manually selected features directly affects the quality of the scheduling algorithm; many effective scheduling algorithms depend on a large number of labeled samples, which are very difficult to acquire in practice; and the trained scheduling algorithms are often applicable only to specific environments and degrade easily outside them. To address these deficiencies, this paper presents a model-free, end-to-end task-scheduling agent based on deep reinforcement learning (DRL) that interacts with the cloud environment, taking the platform's raw tasks as input and outputting the virtual machine that will execute each task. The agent learns scheduling knowledge through the execution of tasks and optimizes its scheduling policy accordingly. This algorithm addresses the low adaptability and flexibility of traditional scheduling algorithms, providing a new feasible approach to task scheduling in cloud environments.
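A purely illustrative, much-simplified stand-in for such an agent (tabular Q-learning instead of a deep network, with invented names such as ToySchedulerAgent and a reward equal to the negative task completion time) might look like this; it is not the paper's model.

import random
from collections import defaultdict

class ToySchedulerAgent:
    """Toy stand-in for a DRL scheduler: tabular Q-learning that maps a
    coarse task descriptor (the 'state') to a VM choice and learns from
    the observed completion time (reward = -completion_time)."""

    def __init__(self, n_vms, alpha=0.1, gamma=0.9, eps=0.1):
        self.n_vms = n_vms
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.Q = defaultdict(float)           # (state, vm) -> estimated value

    def select_vm(self, state):
        if random.random() < self.eps:        # explore occasionally
            return random.randrange(self.n_vms)
        return max(range(self.n_vms), key=lambda a: self.Q[(state, a)])

    def update(self, state, vm, reward, next_state):
        best_next = max(self.Q[(next_state, a)] for a in range(self.n_vms))
        td_target = reward + self.gamma * best_next
        self.Q[(state, vm)] += self.alpha * (td_target - self.Q[(state, vm)])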
The article focuses on the development of MonALISA, or Monitoring Agents using a Large Integrated Services Architecture, an application for controlling and monitoring large-scale distributed data-processing systems. The MonALISA framework, which was created by the California Institute of Technology's high-energy physics group and has four layers of network look-up services, is explained. The research strategy focuses on the synergy of computing, network infrastructure, applications, and storage facilities. The use of MonALISA by the European Organization for Nuclear Research is mentioned.
Peer-to-peer (P2P) networks are social networks for pooling network and information resources and are considered superior conduits for distributed computing and data management. In this paper, we utilize the theories of social networks and economic incentives to investigate the formation of P2P networks with rational participating agents (active peers). The paper proposes a framework for multilevel formation dynamics, including an individual level (content-sharing decision and group selection) and a group level (membership admission, splitting, and interconnection). It is found that if the network size (the number of peer nodes) is sufficiently large, the stable (self-selected equilibrium) free-riding ratio could be nonzero, contrary to the common belief that everybody should free ride. The efficient (welfare-maximizing) free-riding ratio is not necessarily zero; that is, a certain degree of free riding is beneficial and should be tolerated. The sharing level in a network increases (decreases) with the download (upload) capacities of its peer nodes. In addition, the heterogeneity of content availability and upload capacity discourages sharing activities. Although the sharing level of a stable group is typically lower than that of an efficient group, the self-formed network may have a larger or smaller group size than what is efficient, depending on the structure of the group admission decision process. It is also observed that self-organized interconnections among groups lead to network inefficiency because the network may be over- or underlinked. To recover the efficiency loss during the formation process, we propose internal transfer mechanisms to force stable networks to become efficient.
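A toy best-response simulation, under assumptions that are not the paper's game-theoretic model (heterogeneous upload costs and a sharing benefit that shrinks as more peers already share), illustrates how a stable free-riding ratio strictly between zero and one can arise.

import random

def free_riding_equilibrium(costs, base_benefit, rounds=5000, seed=0):
    """Toy best-response dynamics for the share / free-ride decision.

    A randomly chosen peer revises its choice each round: it shares only
    when its private benefit from sharing exceeds its upload cost, and the
    benefit diminishes as more peers already share (the content is then
    available anyway), so the dynamics hover around an interior point where
    some free riding persists.  Illustrative only.
    """
    rng = random.Random(seed)
    n = len(costs)
    sharing = [rng.random() < 0.5 for _ in range(n)]
    for _ in range(rounds):
        i = rng.randrange(n)
        others = sum(sharing) - sharing[i]
        benefit = base_benefit / (1 + others)   # diminishing marginal value
        sharing[i] = benefit > costs[i]
    return 1.0 - sum(sharing) / n               # resulting free-riding ratio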
A simple general method for performing Metropolis Monte Carlo condensed matter simulations on parallel processors is examined. The method is based on the cyclic generation of temporary discrete domains within the system, which are separated by distances greater than the inter-particle interaction range. Particle configurations within each domain are then sampled independently by an assigned processor, whilst particles outside these domains are held fixed. Results for a simulated Lennard-Jones fluid confirm that the method rigorously satisfies the detailed balance condition, and that the efficiency of configurational sampling scales almost linearly with the number of processors. Furthermore, the number of iterations performed on a given processor can be essentially arbitrary, with very low levels of inter-process communication. Provided the CPU time per step is not state-dependent, the method can then be used to perform large calculations as unsupervised background tasks on heterogeneous networks. (C) 2003 Elsevier B.V. All rights reserved.
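A minimal sketch of the cyclic-domain idea follows, with an illustrative Lennard-Jones energy and a checkerboard cell selection whose cell width must be at least the interaction range plus the maximum displacement; the helper names and details are assumptions for illustration, not the paper's implementation.

import numpy as np

def lj_energy(pos, i, box, rc):
    """Lennard-Jones energy of particle i with all others (minimum image)."""
    d = pos - pos[i]
    d -= box * np.round(d / box)
    r2 = np.sum(d * d, axis=1)
    r2[i] = np.inf                            # exclude self-interaction
    r2 = r2[r2 < rc * rc]
    inv6 = (1.0 / r2) ** 3
    return np.sum(4.0 * (inv6 * inv6 - inv6))

def active_domains(pos, box, cell, offset):
    """Checkerboard selection of temporary domains: keep particles whose
    (shifted) cell indices are all even.  If cell >= rc + dmax and box/cell
    is an even integer, particles in different kept cells never interact,
    so each domain can be sampled independently on its own processor."""
    idx = np.floor(((pos + offset) % box) / cell).astype(int)
    keep = np.all(idx % 2 == 0, axis=1)
    domains = {}
    for p in np.where(keep)[0]:
        domains.setdefault(tuple(idx[p]), []).append(p)
    return list(domains.values())

def sweep_domain(pos, members, box, rc, beta, dmax, rng):
    """Metropolis moves for one domain; every particle outside it is frozen,
    so acceptance depends only on locally available coordinates."""
    for i in members:
        old = pos[i].copy()
        e_old = lj_energy(pos, i, box, rc)
        pos[i] = (pos[i] + rng.uniform(-dmax, dmax, 3)) % box
        de = lj_energy(pos, i, box, rc) - e_old
        if de > 0 and rng.random() >= np.exp(-beta * de):
            pos[i] = old                      # reject the trial move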
Clusters of computers have emerged as mainstream parallel and distributed platforms for high-performance, high-throughput and high-availability computing. To enable effective resource management on clusters, numerous cluster management systems and schedulers have been designed. However, their focus has essentially been on maximizing CPU performance rather than on improving the value of utility delivered to the user and the quality of service. This paper presents a new computational-economy-driven scheduling system called Libra, which has been designed to support allocation of resources based on the users' quality-of-service requirements. It is intended to work as an add-on to the existing queuing and resource management system. The first version has been implemented as a plugin scheduler to the Portable Batch System. The scheduler offers a market-based, economy-driven service for managing batch jobs on clusters by scheduling CPU time according to user-perceived value (utility), determined by their budget and deadline rather than system performance considerations. The Libra scheduler has been simulated using the GridSim toolkit to carry out a detailed performance analysis. Results show that the deadline and budget based proportional resource allocation strategy improves the utility of the system and user satisfaction as compared with system-centric scheduling strategies. Copyright (C) 2004 John Wiley & Sons, Ltd.
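A minimal sketch of deadline- and budget-driven proportional allocation in the same spirit is given below; the admission rule and the Job fields are illustrative assumptions, not the Libra/PBS implementation.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    runtime: float      # estimated run time on a dedicated CPU (s)
    deadline: float     # seconds from now
    budget: float       # what the user is willing to pay

def required_share(job):
    """Minimum CPU fraction that still meets the job's deadline."""
    return job.runtime / job.deadline

def try_admit(node_jobs, new_job, capacity=1.0):
    """Admit the new job only if every admitted job can still receive its
    required CPU fraction on this node."""
    total = sum(required_share(j) for j in node_jobs) + required_share(new_job)
    return total <= capacity

def shares(node_jobs, capacity=1.0):
    """Give each job its required share; spread any leftover capacity in
    proportion to budget so higher-paying users finish earlier."""
    base = {j.name: required_share(j) for j in node_jobs}
    leftover = capacity - sum(base.values())
    total_budget = sum(j.budget for j in node_jobs) or 1.0
    return {j.name: base[j.name] + leftover * j.budget / total_budget
            for j in node_jobs}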
This note considers the problem of synchronizing a network of digital clocks: the clocks all run at the same rate; however, an initial state of the network may place the clocks in arbitrary phases. The problem is to devise a protocol to advance or retard clocks so that eventually all clocks are in phase. The solutions presented in this note are protocols in which all processes are identical and use a constant amount of space per process. One solution is a deterministic protocol for a tree network; another solution is a probabilistic protocol for a network of arbitrary topology.
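A toy illustration of the synchronization goal, using unbounded counters rather than the constant-space clocks of the note's protocols: every tick, each node advances to one past the largest reading in its closed neighborhood, so the largest initial phase spreads and all clocks agree within a number of steps bounded by the network diameter.

def synchronize(adj, initial, steps):
    """Synchronous rounds: each node reads its neighbors' clocks and sets
    its own to 1 + max over the closed neighborhood.  The global maximum
    propagates one hop per round, so all clocks agree after at most
    diameter-many rounds (illustrative, unbounded-counter version only)."""
    clocks = list(initial)
    for _ in range(steps):
        clocks = [1 + max([clocks[v]] + [clocks[u] for u in nbrs])
                  for v, nbrs in enumerate(adj)]
    return clocks

# Example: a 4-node ring starting in arbitrary phases.
ring = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(synchronize(ring, [5, 0, 3, 7], steps=3))   # all equal: [10, 10, 10, 10]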
In this visionary article, we present the concept of federated accountability, an innovative approach that distributes accountability-related computation and data across the compute continuum. To demonstrate the feasibility and versatility of our approach, we developed a prototype using blockchain technology that serves as a tangible illustration of how federated accountability can be applied across various domains.