Wireless Ad-hoc and Sensor Networks are the cornerstone of decentralised control and optimisation techniques in numerous sensor-rich application areas. Triggered by the necessity of autonomous operation within constantly changing environments, Wireless Ad-hoc and Sensor Networks are characterised by dynamic topologies, regardless of the mobility attributes of their operational nodes. As such, the relative awareness that each node can obtain of the entire network draws the roadmap for viable reconfiguration mechanisms, such as the establishment of bidirectional connectivity. This paper addresses the conditions for bidirectional connectivity over Wireless Ad-hoc and Sensor Networks. Based solely on the relative awareness that each node has of the entire network, sufficient end-to-end connectivity conditions are derived. These conditions, exploiting the notion of relative Delaunay neighbourhoods, form the basis of a transmission power adjustment scheme. Without any additional network overhead, the resulting Relative Delaunay Connectivity Algorithm is proven to yield an efficient solution to the connectivity problem. Extensive simulation results evaluate the performance of the network under the proposed transmission range adjustment, highlighting the benefits of the Relative Delaunay Connectivity Algorithm.
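The idea of adjusting each node's transmission power to reach its geometric proximity neighbours can be sketched as follows. The exact relative Delaunay neighbourhood of the paper is not reproduced here; as a stand-in, this sketch uses the Gabriel graph, a subgraph of the Delaunay triangulation that contains the Euclidean minimum spanning tree and is therefore always connected. All function names are illustrative, not the paper's.

```python
import math

def gabriel_neighbors(pts):
    """For each node, find its Gabriel-graph neighbours: (u, v) is an
    edge iff no third point lies strictly inside the circle whose
    diameter is the segment uv. The Gabriel graph is a subgraph of the
    Delaunay triangulation and contains the Euclidean MST, so it is
    connected for any point set."""
    n = len(pts)
    nbrs = {i: set() for i in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            cx = (pts[u][0] + pts[v][0]) / 2
            cy = (pts[u][1] + pts[v][1]) / 2
            d2 = (pts[u][0] - pts[v][0]) ** 2 + (pts[u][1] - pts[v][1]) ** 2
            # circle with diameter uv has squared radius d2 / 4
            if all(i in (u, v) or
                   (p[0] - cx) ** 2 + (p[1] - cy) ** 2 >= d2 / 4
                   for i, p in enumerate(pts)):
                nbrs[u].add(v)
                nbrs[v].add(u)
    return nbrs

def adjust_ranges(pts):
    """Set each node's transmission range just far enough to reach its
    farthest Gabriel neighbour; every Gabriel edge then supports
    bidirectional links, so the resulting network is connected."""
    nbrs = gabriel_neighbors(pts)
    return {u: max(math.dist(pts[u], pts[v]) for v in vs) if vs else 0.0
            for u, vs in nbrs.items()}
```

Because each Gabriel edge is shorter than both endpoints' assigned ranges, the symmetric (bidirectional) communication graph contains the Gabriel graph and is therefore connected, mirroring the kind of sufficient condition the abstract describes.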
This paper presents a complex-envelope (CE) alternating-direction-implicit (ADI) finite-difference time-domain (FDTD) method that treats light-matter interaction self-consistently with the electromagnetic field evolution for efficient simulation of active photonic devices. The active medium (AM) is modeled using an efficient multi-level system of carrier rate equations. To include the AM in the CE-ADI formulation, a first-order differential system composed of the CE fields in the AM is first set up. The system sub-matrices are then determined and used in an efficient ADI splitting formula. In microdisk laser simulations, the proposed method is shown to require only 22% of the CPU time of the explicit FDTD method.
Traditionally, the "best effort, cost free" model of Supercomputers/Grids does not consider pricing. Clouds have progressed towards a service-oriented paradigm that enables a new way of service provisioning based on the "pay-as-you-go" model. Large-scale many-task workflows (MTW) may be well suited for execution on Clouds due to their scale-* requirements (scale up, scale out, and scale down). In the context of scheduling, MTW execution cost must be considered subject to users' budget constraints. In this paper, we address the problem of scheduling MTW on Clouds and present a budget-conscious scheduling algorithm, referred to as ScaleStar (or Scale-*). ScaleStar assigns each selected task to the virtual machine with the higher comparative advantage, which effectively balances the execution-time and monetary-cost goals. In addition, reflecting the actual charging model, an adjustment policy, referred to as DeSlack, is proposed to remove part of the slack without adversely affecting the overall makespan or the total monetary cost. We evaluate ScaleStar with an extensive set of simulations, compare it with the popular HEFT-based LOSS3 algorithm, and demonstrate its superior performance.
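The time-versus-cost balancing behind a comparative-advantage assignment can be sketched as follows. The scoring rule below (normalised time and cost combined by a weight `alpha`) is a hypothetical stand-in for ScaleStar's actual comparative-advantage definition, which the abstract does not spell out.

```python
def pick_vm(task_len, vms, alpha=0.5):
    """Hypothetical comparative-advantage-style VM selection: normalise
    each VM's execution time and monetary cost for the task across the
    candidate set, then pick the VM with the best weighted trade-off.
    vms: list of (speed, price_per_hour) tuples; task_len is the task's
    work in speed-units * hours. Illustrative only, not the paper's
    exact ScaleStar formula."""
    times = [task_len / speed for speed, _ in vms]
    costs = [t * price for t, (_, price) in zip(times, vms)]
    t_max, c_max = max(times), max(costs)
    # lower score = better combined time/cost position
    scores = [alpha * t / t_max + (1 - alpha) * c / c_max
              for t, c in zip(times, costs)]
    return min(range(len(vms)), key=scores.__getitem__)
```

With `alpha=0.5` the cheapest-but-slowest and fastest-but-priciest VMs score identically on a symmetric candidate set, and a mid-range VM wins, which is the balancing effect the abstract attributes to the comparative advantage.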
This paper presents an efficient parallel algorithm for a new class of min-max problems based on the matrix multiplicative weights update method. Our algorithm can be used to find near-optimal strategies for competitive two-player classical or quantum games in which a referee exchanges any number of messages with one player followed by any number of additional messages with the other. This algorithm considerably extends the class of games which admit parallel solutions, demonstrating for the first time the existence of a parallel algorithm for a game in which one player reacts adaptively to the other. As a consequence, we prove that several competing-provers complexity classes collapse to PSPACE, such as QRG(2), SQG, and two new classes called DIP and DQIP. A special case of our result is a parallel approximation scheme for a new class of semidefinite programs whose feasible region consists of lists of semidefinite matrices that satisfy a "transcript-like" consistency condition. Applied to this special case, our algorithm yields a direct polynomial-space simulation of multi-message quantum interactive proofs, resulting in a first-principles proof of QIP = PSPACE.
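The multiplicative weights update at the heart of the method is easiest to see in its classical, scalar form. The sketch below finds a near-minimax row strategy for a zero-sum matrix game; the paper's algorithm runs the *matrix* MWU over density matrices, of which this is the diagonal special case.

```python
import math

def mwu_strategy(A, T=2000, eta=0.05):
    """Scalar multiplicative weights update for a zero-sum matrix game
    in which the row player minimises the payoff A[i][j]. Each round
    the column player best-responds to the current mixed strategy and
    the row player reweights its actions exponentially against that
    response. The time-averaged row strategy converges to a
    near-minimax strategy."""
    m, n = len(A), len(A[0])
    w = [1.0] * m          # unnormalised weights over row actions
    x_sum = [0.0] * m      # running sum of played strategies
    for _ in range(T):
        s = sum(w)
        x = [wi / s for wi in w]
        # column player picks the column maximising the row player's loss
        j = max(range(n),
                key=lambda c: sum(x[i] * A[i][c] for i in range(m)))
        # exponential reweighting against that best response
        for i in range(m):
            w[i] *= math.exp(-eta * A[i][j])
            x_sum[i] += x[i]
    return [v / T for v in x_sum]
```

On matching pennies the averaged iterates settle near the uniform strategy, the game's unique minimax point, illustrating the convergence behaviour the parallel algorithm exploits at matrix scale.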
ISBN (print): 9781467351461
One of the main goals in the smart grid development is the continuous evolution of the smart grid as a cyber-physical system to accommodate electricity storage, provide safe power delivery, accommodate the needs of the local communities, facilitate the integration of innovative technologies, and enable active participation of consumers. The main purpose of this paper is to analyze the underlying complexities of the relationship between the consumers and providers in the smart grid in order to facilitate its planning and evolution. We elicit and specify the main actors, processes, and sequences of actions that occur in the smart grid to help smart grid designers properly plan its infrastructure and evolution.
With the all-pervasive presence of computers in all aspects of life, software reliability assessment is assuming a position of utmost importance. Moreover, many commercial and governmental software systems have high mission-reliability requirements, demanding that both the hardware and the software be highly reliable. Software reliability is acknowledged to improve with the amount of testing effort invested, which in turn reduces the cost of software development and hence the system cost. The scale of redundancy employed affects reliability favorably, while increasing the cost of software design and development. This paper employs an ant colony meta-heuristic optimization method to solve the redundancy allocation problem (RAP) for software systems. Herein, an ant colony optimization algorithm for the software RAP (SRAP) is devised and tested on computer relay software employed for fault handling in power system transmission lines, and the results presented validate the efficacy of the approach.
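The interplay of redundancy, reliability, and cost in an ant-colony search can be sketched as follows. This is a simplified illustration, not the paper's exact SRAP formulation: ants sample a redundancy level per module under pheromone guidance, infeasible (over-budget) allocations are discarded, and pheromone is reinforced along the best allocation found.

```python
import random

def aco_srap(rel, cost, budget, n_ants=20, iters=50, rho=0.1, seed=0):
    """Toy ant-colony sketch for software redundancy allocation.
    For each module m, an ant picks a redundancy level k in 1..K with
    probability proportional to pheromone tau[m][k-1]; a module with k
    independent versions of per-version reliability rel[m] then has
    reliability 1 - (1 - rel[m])**k. Allocations exceeding the budget
    are discarded; pheromone evaporates and is reinforced along the
    best-so-far allocation."""
    rng = random.Random(seed)
    M, K = len(rel), 3
    tau = [[1.0] * K for _ in range(M)]
    best, best_r = None, -1.0
    for _ in range(iters):
        for _ in range(n_ants):
            alloc = [rng.choices(range(K), weights=tau[m])[0] + 1
                     for m in range(M)]
            if sum(cost[m] * alloc[m] for m in range(M)) > budget:
                continue  # infeasible: over budget
            r = 1.0
            for m in range(M):
                r *= 1 - (1 - rel[m]) ** alloc[m]
            if r > best_r:
                best, best_r = alloc, r
        # evaporate, then reinforce the best-so-far allocation
        for m in range(M):
            tau[m] = [(1 - rho) * t for t in tau[m]]
            if best:
                tau[m][best[m] - 1] += best_r
    return best, best_r
```

For two modules with per-version reliabilities 0.9 and 0.8, unit costs, and a budget of 4, the search converges on two versions of each module, the feasible allocation with the highest system reliability (0.99 × 0.96 = 0.9504).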
A novel fault-tolerant adaptive wormhole routing function for Networks-on-Chips (NoCs) is presented. The routing function guarantees absence of deadlocks and livelocks up to two faulty channels. The routing logic does not require reconfiguration when a fault occurs. The routes themselves are dynamic. Based on the faults in the network, alternative routes are used to reroute packets. Routing decisions are based only on local knowledge, which allows for fast switching. Our approach does not use any costly virtual channels. As we do not prohibit cyclic dependencies, the routing function provides minimal routing from source to destination even in the presence of faults. We have implemented the architecture design using synthesizable HDL. To ensure deadlock freedom, we have extended a formally verified deadlock detection algorithm to deal with fault tolerant designs. For a 20×20 mesh, we have formally proven deadlock freedom of our design in all of the 2,878,800 configurations in which two channels are faulty. We supply experimental results showing the performance of our architecture.
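The "local knowledge only, no reconfiguration" flavour of the routing decision can be sketched as follows. This is a minimal illustration of choosing among minimal next hops in a 2D mesh while skipping locally known faulty output channels; the paper's full routing function additionally guarantees deadlock and livelock freedom, which this sketch does not attempt.

```python
def next_hop(cur, dst, faulty):
    """Pick a minimal next hop in a 2D mesh using only local knowledge.
    cur, dst: (x, y) router coordinates; faulty: set of directed
    channels ((x1, y1), (x2, y2)) known faulty at this router. Returns
    the destination itself on arrival, a non-faulty minimal hop when
    one exists, or None when every minimal channel is faulty (a real
    design would then fall back to a detour rule)."""
    x, y = cur
    dx, dy = dst
    if cur == dst:
        return cur
    options = []
    if dx != x:  # one step toward the destination along x
        options.append((x + (1 if dx > x else -1), y))
    if dy != y:  # one step toward the destination along y
        options.append((x, y + (1 if dy > y else -1)))
    for nxt in options:
        if (cur, nxt) not in faulty:
            return nxt
    return None
```

Because the decision reads only the router's own fault flags, a fault elsewhere in the mesh requires no routing-table update, matching the abstract's claim that the routing logic needs no reconfiguration when a fault occurs.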
The AAAI-11 workshop program was held Sunday and Monday, August 7-8, 2011, at the Hyatt Regency San Francisco in San Francisco, California USA. The AAAI-11 workshop program included 15 workshops covering a wide range...
ISBN (print): 9781627480031
We consider the problem of cardinality-penalized optimization of a convex function over the probability simplex with additional convex constraints. The classical ℓ_1 regularizer fails to promote sparsity on the probability simplex since the ℓ_1 norm on the probability simplex is trivially constant. We propose a direct relaxation of the minimum cardinality problem and show that it can be efficiently solved using convex programming. As a first application we consider recovering a sparse probability measure given moment constraints, in which case our formulation becomes a linear program and hence can be solved very efficiently. A sufficient condition for exact recovery of the minimum cardinality solution is derived for arbitrary affine constraints. We then develop a penalized version for the noisy setting which can be solved using second-order cone programs. The proposed method outperforms known rescaling heuristics based on the ℓ_1 norm. As a second application we consider convex clustering using a sparse Gaussian mixture and compare our results with the well-known soft k-means algorithm.
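The observation that the ℓ_1 penalty is trivial on the simplex is easy to verify numerically: every point of the probability simplex has nonnegative entries summing to one, so its ℓ_1 norm is exactly one regardless of how many entries are nonzero. A minimal check:

```python
def l1(x):
    """The l1 norm: sum of absolute values."""
    return sum(abs(v) for v in x)

# Two points on the probability simplex (nonnegative, summing to 1)
# with very different cardinalities:
sparse = [1.0, 0.0, 0.0, 0.0]        # cardinality 1
dense = [0.25, 0.25, 0.25, 0.25]     # cardinality 4

# The l1 penalty cannot tell them apart, so it cannot promote sparsity
# on the simplex -- which is why the abstract proposes a direct
# relaxation of the cardinality penalty instead.
assert l1(sparse) == l1(dense) == 1.0
```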
Progressive refinement is a methodology that makes it possible to elegantly integrate scalable data compression, access, and presentation into one approach. Specifically, this paper concerns the effective use of progressive parallel coordinates (PPCs), utilized routinely for high-dimensional data visualization. It discusses how the power of the typical stages of progressive data visualization can also be utilized fully for PPCs. Further, different implementations of the underlying methods and potential application domains are described. The paper also presents empirical results concerning the benefits of PPC with regard to efficient data management and improved presentation, indicating that the proposed approach is able to close the gap between data handling and visualization.
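The progressive-refinement idea of redrawing a view as data arrives in chunks can be sketched as follows. The interface below is hypothetical, for illustration only: it consumes a row stream chunk by chunk and yields, after each chunk, the per-dimension axis ranges that a parallel-coordinates view could redraw incrementally.

```python
def progressive_axes(stream, chunk=2):
    """Progressive-refinement sketch: consume rows of equal-length
    numeric tuples and, after every `chunk` rows, yield the current
    per-dimension (min, max) axis ranges. Each yield is a refinement
    point at which a parallel-coordinates view could be redrawn with
    the data seen so far."""
    lo, hi, n = None, None, 0
    for row in stream:
        if lo is None:
            lo, hi = list(row), list(row)
        else:
            lo = [min(a, b) for a, b in zip(lo, row)]
            hi = [max(a, b) for a, b in zip(hi, row)]
        n += 1
        if n % chunk == 0:
            yield list(zip(lo, hi))
```

Each intermediate result is usable on its own, which is the key property that lets progressive pipelines integrate scalable data access and presentation: the view improves monotonically instead of blocking until the full dataset is loaded.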