Crossed-magnetic-field effects on bulk high-temperature superconductors have been studied both experimentally and numerically. The sample geometry investigated involves finite-size effects along both (crossed-)magnetic-field directions. The experiments were carried out on bulk melt-processed Y-Ba-Cu-O single domains that had been premagnetized with the applied field parallel to their shortest direction (i.e., the c axis) and then subjected to several cycles of the application of a transverse magnetic field parallel to the sample ab plane. The magnetic properties were measured using orthogonal pickup coils, a Hall probe placed against the sample surface, and magneto-optical imaging. We show that all principal features of the experimental data can be reproduced qualitatively using a two-dimensional finite-element numerical model based on an E−J power law and in which the current density flows perpendicularly to the plane within which the two components of magnetic field are varied. The results of this study suggest that the suppression of the magnetic moment under the action of a transverse field can be predicted successfully by ignoring the existence of flux-free configurations or flux-cutting effects. These investigations show that the observed decay in magnetization results from the intricate modification of current distribution within the sample cross section. The current amplitude is altered significantly only if a field-dependent critical current density Jc(B) is assumed. Our model is shown to be quite appropriate to describe the cross-flow effects in bulk superconductors. It is also shown that this model does not predict any saturation of the magnetic induction, even after a large number (∼100) of transverse field cycles. These features are shown to be consistent with the experimental data.
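The constitutive relations mentioned above can be sketched in their commonly used form; the Kim-type field dependence shown for Jc(B) is an illustrative choice and not necessarily the exact dependence adopted in this work:

```latex
E = E_c \left( \frac{J}{J_c(B)} \right)^n , \qquad
J_c(B) = \frac{J_{c0}}{1 + |B|/B_0}
```

Here E_c is the electric-field criterion, n the power-law exponent, and J_{c0}, B_0 are fitting parameters of the Kim model.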
ISBN (print): 1595933840
It has been considered bon ton to blame locks for their fragility, especially since researchers identified obstruction-freedom: a progress condition that precludes locking while being weak enough to raise the hope for good performance. This paper attenuates this hope by establishing lower bounds on the complexity of obstruction-free implementations in contention-free executions: those where obstruction-freedom was precisely claimed to be effective. Through our lower bounds, we argue for an inherent cost of concurrent computing without locks. We first prove that obstruction-free implementations of a large class of objects, using only overwriting or trivial primitives in contention-free executions, have Ω(n) space complexity and Ω(log₂ n) (obstruction-free) step complexity. These bounds apply to implementations of many popular objects, including variants of fetch&add, counter, compare&swap, and LL/SC. When arbitrary primitives can be applied in contention-free executions, we show that, in any implementation of binary consensus, or of any perturbable object, the number of distinct base objects accessed and memory stalls incurred by some process in a contention-free execution is Ω(√n). All these results hold regardless of the behavior of processes after they become aware of contention. We also prove that, in any obstruction-free implementation of a perturbable object in which processes are not allowed to fail their operations, the number of memory stalls incurred by some process that is unaware of contention is Ω(n).
Restoration or protection switching mechanisms protect traffic in packet-switched networks against local outages by deviating it around the failure location. This assures connectivity, but sufficient backup capacity is also needed to maintain quality of service (QoS) for the duration of the outage. To that end, sufficient capacity must be provided on the links so that the network can survive a set of protected failure scenarios without congestion due to redirected traffic. The self-protecting multipath (SPM) is a protection switching mechanism for which the required backup capacity can be minimized by linear optimization methods. However, unprotected multi-failures may lead to congestion in such "resilient networks" since backup capacity may be missing. In this paper, we quantify and compare the impact of unprotected double failures on the QoS for the optimized SPM, single shortest path routing (SSP), and equal-cost multipath (ECMP) routing.
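The congestion mechanism described above can be illustrated with a toy calculation (this is an illustrative sketch, not the paper's optimization model or any real SPM implementation): when one path of an SPM fails, its share of the demand is renormalized over the surviving paths, and a link provisioned only for the protected failure scenarios may then be overloaded. All capacities and split fractions below are hypothetical.

```python
# Toy SPM redistribution: renormalize a load-balancing vector after a
# path failure and check which surviving paths exceed their capacity.

def redistribute(split, failed):
    """Renormalize an SPM split vector after the paths in `failed` go down."""
    ok = [f if i not in failed else 0.0 for i, f in enumerate(split)]
    total = sum(ok)
    if total == 0.0:
        raise RuntimeError("all paths failed: traffic is lost")
    return [f / total for f in ok]

# A demand of 10 units over three disjoint paths, optimized split
# 0.5/0.3/0.2, with (hypothetical) provisioned capacities 7/5/4.
demand, caps = 10.0, [7.0, 5.0, 4.0]
split = [0.5, 0.3, 0.2]

new_split = redistribute(split, failed={0})   # path 0 fails
loads = [demand * f for f in new_split]       # traffic per surviving path
congested = [i for i, (l, c) in enumerate(zip(loads, caps)) if l > c]
print(loads, congested)  # path 1 now carries 6.0 > 5.0: congestion
```

The same renormalization applied to a double failure shows why unprotected multi-failures are the interesting case: the fewer paths survive, the larger the renormalized shares become.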
In this paper we present a novel framework supporting distributed network management using a self-organizing peer-to-peer overlay network. The overlay consists of several distributed Network Agents which can perform ...
ISBN (print): 9781622766468
In daily life, humans must compensate for the resultant forces arising from interaction with the physical environment. Recent studies have shown that humans can acquire a neural representation of the relation between motor command and movement, i.e., learn an internal model of the environment dynamics. The present paper discusses whether humans can identify one side of the dynamics from the mixed environment dynamics in the case where they have experienced either of them.
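Internal-model acquisition of the kind described above is often modeled as error-driven parameter estimation. The following is a minimal sketch under that assumption (the force-field form F = b·v, the delta-rule update, and all numbers are hypothetical illustrations, not the paper's experimental protocol):

```python
# Toy internal-model learning: estimate the viscosity b of a force
# field F = b * v from trial-by-trial prediction errors (delta rule).
import random

random.seed(0)
b_true = 2.0   # true viscosity of the (simulated) environment
b_hat = 0.0    # learner's internal-model estimate
lr = 0.05      # learning rate

for _ in range(500):
    v = random.uniform(-1.0, 1.0)   # hand velocity on this trial
    f_obs = b_true * v              # force exerted by the field
    err = f_obs - b_hat * v         # prediction error of the internal model
    b_hat += lr * err * v           # delta-rule update

print(round(b_hat, 2))  # prints 2.0: the estimate converges to b_true
```

The update shrinks the estimation error by a factor (1 − lr·v²) per trial, so over many trials the internal model converges to the environment's dynamics.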
ISBN (digital): 9783540684237
ISBN (print): 9783540490241
Whiplash PCR-based methods of biomolecular computation (BMC), while highly versatile in principle, are well known to suffer from a simple but serious form of self-poisoning known as back-hybridization. In this work, an optimally re-engineered WPCR-based architecture, Displacement Whiplash PCR (DWPCR), is proposed and experimentally validated. DWPCR's new rule-protect biostep, which is based on the primer-targeted strand displacement of back-hybridized hairpins, renders the most recently implemented rule-block of each strand unavailable, abolishing back-hybridization after each round of extension. In addition to attaining a near-ideal efficiency, DWPCR's ability to support isothermal operation at physiological temperatures eliminates the need for thermal cycling, and opens the door for potential biological applications. DWPCR should also be capable of supporting programmable exon shuffling, allowing XWPCR, a proposed method for programmable protein evolution, to more closely imitate natural evolving systems. DWPCR is expected to realize a highly efficient, versatile platform for routine and efficient massively parallel BMC.
The abundant computing resources available on the Internet have made grid computing over the Internet a viable solution to scientific problems. The dynamic nature of the Internet necessitates dynamic reconfigurability of applications to handle failures and varying loads. Most existing grid solutions handle reconfigurability only to a limited extent. These systems lack appropriate support to handle the failure of key components, such as coordinators, that are essential for the computational model. We propose a two-layered peer-to-peer middleware, Vishwa, to handle reconfiguration of the application in the face of failure and system load. The two layers, the task management layer and the reconfiguration layer, are used in conjunction by applications to adapt to and mask node failures. We show that our system is able to handle the failures of the key components of a computational model. This is demonstrated in case studies of two computational models, namely bag of tasks and connected problems, with an appropriate example for each.
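The failure-masking idea behind the bag-of-tasks model mentioned above can be sketched as follows (an illustrative toy, not Vishwa's actual middleware API: the dispatch loop, the failure probability, and the squaring "computation" are all hypothetical):

```python
# Toy bag-of-tasks dispatcher: when a node fails mid-task, the task is
# put back into the bag, so the application sees every task complete.
from collections import deque
import random

def run_bag_of_tasks(tasks, node_fail_prob=0.3, rng=None):
    """Dispatch tasks from a shared bag; re-queue a task on node failure."""
    rng = rng or random.Random(42)      # seeded for reproducibility
    bag, results = deque(tasks), {}
    while bag:
        task = bag.popleft()
        if rng.random() < node_fail_prob:
            bag.append(task)            # node crashed: reconfigure, retry
            continue
        results[task] = task * task     # the stand-in "computation"
    return results

res = run_bag_of_tasks(range(5))
print(res)  # all five tasks complete despite simulated node failures
```

Real reconfiguration also has to replace failed coordinators, which this sketch omits; the point is only that re-queuing masks worker failures from the application.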
The IEEE 802.11 MAC-protocol, often used in ad-hoc networks, has the tendency to share the capacity equally amongst the active nodes, irrespective of their loads. An inherent drawback of this fair-sharing policy is th...
In all short-medium term business/market forecasts, the efficient provision of digital multimedia broadcasting (DMB) services in mobile systems appears to be of key importance for mobile operators and broadcasters. Se...