The Wang-Landau algorithm is a flat-histogram Monte Carlo method that performs random walks in the configuration space of a system to iteratively obtain a close estimate of the density of states. It has been applied successfully in many research fields. In this paper, we propose a parallel implementation of the Wang-Landau algorithm on shared-memory computers using the OpenMP API. This implementation is applied to Ising model systems with promising speedups. We also examine the effects on running speed of different strategies for accessing the shared memory space during the updating procedure. Allowing data races is recommended for the sake of simulation efficiency; this treatment does not affect the accuracy of the final density of states obtained. (C) 2008 Elsevier B.V. All rights reserved.
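To make the shared-memory update strategy concrete, the following is a minimal OpenMP sketch (not the authors' code) of several Wang-Landau walkers updating one shared density-of-states estimate without synchronization; the 2D Ising lattice layout, bin indexing, loop counts, and names such as `ln_g`, `hist`, and `ln_f` are illustrative assumptions.

```cpp
// Minimal OpenMP sketch: several Wang-Landau walkers share one ln g(E)
// estimate and one histogram, and update them without locks or atomics,
// mirroring the "allow the data race" strategy recommended above.
#include <cmath>
#include <random>
#include <vector>
#include <omp.h>

int main() {
    const int L = 16, N = L * L;           // 2D Ising lattice, periodic boundaries
    const int nbins = N + 1;               // E runs from -2N to +2N in steps of 4
    std::vector<double> ln_g(nbins, 0.0);  // shared estimate of ln g(E)
    std::vector<long>   hist(nbins, 0);    // shared visit histogram
    const double ln_f = 1.0;               // initial modification factor ln(f)

    #pragma omp parallel
    {
        std::mt19937 rng(1234 + omp_get_thread_num());
        std::uniform_int_distribution<int> site(0, N - 1);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        std::vector<int> spin(N, 1);       // each thread walks its own configuration
        int E = -2 * N;                    // energy of the all-up state

        for (long step = 0; step < 1000000; ++step) {
            int i = site(rng), x = i % L, y = i / L;
            int nb = spin[(x + 1) % L + y * L] + spin[(x + L - 1) % L + y * L]
                   + spin[x + ((y + 1) % L) * L] + spin[x + ((y + L - 1) % L) * L];
            int dE = 2 * spin[i] * nb;                     // energy change of the flip
            int bo = (E + 2 * N) / 4, bn = (E + dE + 2 * N) / 4;
            if (u(rng) < std::exp(ln_g[bo] - ln_g[bn])) {  // Wang-Landau acceptance rule
                spin[i] = -spin[i];
                E += dE;
                bo = bn;
            }
            ln_g[bo] += ln_f;              // unsynchronized update of shared data
            hist[bo] += 1;                 //   (the benign race discussed above)
        }
    }
    // A flatness check on hist and the reduction ln_f -> ln_f / 2 would follow here.
}
```

Guarded alternatives, e.g. an `#pragma omp atomic` on the two shared updates, are the kind of access strategy one would compare this against.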
This paper describes a security model developed from empirical data collected in a realistic intrusion experiment in which a number of undergraduate students were invited to attack a distributed computer system. Data relevant to their intrusion activities were recorded continuously. Based on experience from this and other similar experiments, we have formulated a hypothesis about typical attacker behavior. The hypothesis suggests that the attacking process can be split into three phases: the learning phase, the standard attack phase, and the innovative attack phase. The probability of a successful attack during the learning phase is expected to be small and, if a breach occurs, it is the result of pure luck rather than deliberate action. During the standard attack phase this probability is considerably higher, whereas it decreases again in the innovative attack phase. The collected data indicate that the breaches during the standard attack phase are statistically equivalent. Furthermore, the times between breaches appear to be exponentially distributed, which means that traditional methods for reliability modelling of component failures may be applicable.
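That last observation lends itself to standard reliability arithmetic. The sketch below (with purely invented numbers, not the experiment's data) shows how an exponential model of the times between breaches yields a breach rate and a survival probability.

```cpp
// If inter-breach times are exponential, the maximum-likelihood breach rate
// is 1/mean, and the probability of surviving t hours without a breach is
// exp(-t/mean). The sample values are purely illustrative.
#include <cmath>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> hours_between_breaches = {5.2, 1.7, 9.4, 3.1, 6.8};  // hypothetical data
    double mean = std::accumulate(hours_between_breaches.begin(),
                                  hours_between_breaches.end(), 0.0)
                  / hours_between_breaches.size();
    double lambda = 1.0 / mean;               // MLE of the breach rate (per hour)
    double t = 8.0;                           // horizon of interest
    double survival = std::exp(-lambda * t);  // P(no breach within t hours)
    std::printf("rate = %.3f per hour, P(no breach in %.0f h) = %.3f\n", lambda, t, survival);
}
```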
This paper deals with the problem of store-and-forward deadlock prevention in store-and-forward networks. The presented solution uses time stamping of all messages in the network and a nonpreemptable message exchange mechanism. By combining these ideas, a new distributed flow control procedure is derived which guarantees that all messages are delivered to their destinations, thus avoiding both deadlock and livelock without any message loss. It is shown that some properties of this procedure depend on the policy for allocating exchange buffers to nodes. On the one hand, an optimal allocation strategy is presented which results in an optimal deadlock prevention procedure. The procedure is independent of network size and topology and allows unrestricted packet routing. On the other hand, the allocation of one exchange buffer per node is discussed, which, even if not optimal, makes the derived deadlock prevention procedure completely independent of network reconfigurations. This last feature is extremely important from a practical point of view, and such a solution is therefore strongly recommended. Compared to previously described store-and-forward deadlock prevention procedures, which lack some or all of these desirable properties, the procedure presented here behaves favorably. However, it introduces a drawback of its own, namely the possibility of extra hops resulting from exchange operations. It is argued that this drawback rarely appears in practice, and strategies aimed at reducing it are proposed.
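As a toy illustration of the two ingredients named above, message time stamps and a nonpreemptable exchange through dedicated buffers, the sketch below swaps the oldest messages of two neighboring nodes. The data structures and the "swap the oldest" rule are assumptions made for illustration, not the paper's exact procedure.

```cpp
// Toy illustration: globally timestamped messages and a pairwise exchange
// staged through dedicated exchange buffers, so that no message is dropped
// and the oldest traffic keeps moving even when ordinary buffers are full.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Message { std::uint64_t timestamp; int destination; };

struct Node {
    std::vector<Message> store;      // ordinary store-and-forward buffers
    Message exchange_buffer{};       // one dedicated exchange buffer per node
    std::size_t capacity = 4;
    bool full() const { return store.size() >= capacity; }
};

// Trade the oldest messages of two adjacent nodes via the exchange buffers.
void exchange(Node& a, Node& b) {
    auto oldest = [](Node& n) {
        return std::min_element(n.store.begin(), n.store.end(),
                                [](const Message& x, const Message& y) {
                                    return x.timestamp < y.timestamp;
                                });
    };
    auto ia = oldest(a), ib = oldest(b);
    a.exchange_buffer = *ia;         // stage both messages first, so the swap
    b.exchange_buffer = *ib;         //   cannot be preempted halfway through
    *ia = b.exchange_buffer;
    *ib = a.exchange_buffer;
}

int main() {
    Node a, b;
    a.store = {{10, 3}, {4, 7}};     // node a holds messages stamped 10 and 4
    b.store = {{9, 1}, {2, 5}};      // node b holds messages stamped 9 and 2
    exchange(a, b);
    std::printf("node a now holds timestamp %llu from node b\n",
                (unsigned long long)a.store[1].timestamp);   // prints 2
}
```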
In the deregulated environment, information is the key to secure operation, profitability, customer retention, market advantage, and growth for the power industry. The rapid development of the Internet and distributed computing has opened the door for feasible and cost-effective solutions. This article describes and demonstrates a unique Internet-based application in a substation automation system that is implemented on top of the existing supervisory control and data acquisition (SCADA) system and very large-scale integration (VLSI) information technologies (IT). The user can view real-time data superimposed on one-line diagrams generated automatically using VLSI placement and routing techniques, and can also control the operation of the substation at the server site. The choice of Java technologies, such as the Java Native Interface (JNI), Java Remote Method Invocation (RMI), and Enterprise JavaBeans (EJB), offers unique and powerful features, such as zero client installation, on-demand access, platform independence, and transaction management, for the design of the online SCADA display system.
Synchrotron X-ray microdiffraction (muXRD) services are conducted for industrial minerals to characterize their crystals in terms of crystallinity and potential impurities. muXRD services generate large volumes of images that have to be screened before further processing and storage. However, there are too few effectively labeled samples to train a screening model, since service consumers are unwilling to share their original experimental images. In this article, we propose a physics-law-informed federated learning (FL) based muXRD image screening method that improves screening while protecting data privacy. In our method, we handle the unbalanced data distribution caused by service consumers holding different categories and amounts of samples with novel client sampling algorithms. We also propose hybrid training schemes to handle asynchronous data communication between FL clients and servers. The experiments show that our method ensures effective screening for industrial users conducting material testing while keeping commercially confidential information private.
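A rough sketch of the aggregation side of such an FL setup is given below; the score-based client sampling and the field names (`weights`, `n_samples`) are illustrative assumptions rather than the paper's algorithm.

```cpp
// Sketch: pick FL clients with probabilities tied to caller-supplied scores
// (e.g. to favour under-represented classes), then aggregate their local
// models by sample-size-weighted averaging (FedAvg-style).
#include <cstdio>
#include <random>
#include <vector>

struct Client {
    std::vector<double> weights;  // local model parameters after local training
    std::size_t n_samples;        // number of labelled muXRD images held locally
};

// Sample k clients with replacement, weighted by the given scores.
std::vector<std::size_t> sampleClients(const std::vector<double>& scores,
                                       std::size_t k, std::mt19937& rng) {
    std::discrete_distribution<std::size_t> pick(scores.begin(), scores.end());
    std::vector<std::size_t> chosen;
    for (std::size_t i = 0; i < k; ++i) chosen.push_back(pick(rng));
    return chosen;
}

// Sample-size-weighted average of the selected clients' model parameters.
std::vector<double> aggregate(const std::vector<Client>& clients,
                              const std::vector<std::size_t>& chosen) {
    std::vector<double> global(clients[chosen[0]].weights.size(), 0.0);
    double total = 0.0;
    for (std::size_t c : chosen) total += clients[c].n_samples;
    for (std::size_t c : chosen)
        for (std::size_t j = 0; j < global.size(); ++j)
            global[j] += clients[c].weights[j] * clients[c].n_samples / total;
    return global;
}

int main() {
    std::mt19937 rng(7);
    std::vector<Client> clients = {{{0.1, 0.2}, 50}, {{0.3, 0.4}, 200}};
    std::vector<double> scores = {2.0, 1.0};          // favour the smaller client
    auto chosen = sampleClients(scores, 2, rng);
    auto global = aggregate(clients, chosen);
    std::printf("first global weight = %.3f\n", global[0]);
}
```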
On-line monitoring can complement formal techniques to increase application dependability. This tutorial outlines the concepts and identifies the activities that comprise event-based monitoring, describing several representative monitoring systems.
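As a small illustration of event-based monitoring in general (not a system from the tutorial), the sketch below has instrumented code report events to an on-line monitor that evaluates a simple check; the event fields and the heartbeat property are invented for the example.

```cpp
// Minimal event-based monitoring sketch: instrumented application code
// reports events, and an on-line monitor runs registered checks on them.
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct Event {
    std::string name;     // e.g. "heartbeat", "request_served"
    double timestamp;     // seconds since start
};

class Monitor {
    std::vector<std::function<void(const Event&)>> checks;
public:
    void addCheck(std::function<void(const Event&)> c) { checks.push_back(std::move(c)); }
    void report(const Event& e) {            // called from instrumented code
        for (auto& c : checks) c(e);         // evaluate each on-line check
    }
};

int main() {
    Monitor m;
    double last_heartbeat = 0.0;
    // Example check: flag a gap between heartbeats longer than 5 seconds.
    m.addCheck([&](const Event& e) {
        if (e.name == "heartbeat") {
            if (e.timestamp - last_heartbeat > 5.0)
                std::printf("warning: heartbeat gap of %.1f s\n", e.timestamp - last_heartbeat);
            last_heartbeat = e.timestamp;
        }
    });
    m.report({"heartbeat", 1.0});
    m.report({"heartbeat", 9.5});   // triggers the warning
}
```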
Mutual exclusion and concurrency are two fundamental issues in distributed systems. Recently Yuh-Jzer Joung introduced the asynchronous group mutual exclusion problem, also called the congenial talking philosophers problem, which aims at mutual exclusion while exploring concurrency: "The problem concerns a set of n philosophers who spend their time thinking alone and talking in a forum. Given that there is only one meeting room, a philosopher attempting to attend a forum can succeed only if the meeting room is empty (and in this case the philosopher starts the forum), or some philosopher interested in the same forum is already in the meeting room (and in this case the philosopher joins this ongoing forum). The problem is to design an algorithm for the philosophers to ensure that a philosopher attempting to attend a forum will eventually succeed, while at the same time encourage philosophers interested in the same forum to be in the meeting room simultaneously".
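The specification can be captured compactly with a shared monitor, as in the sketch below. This is a centralized illustration of the mutual exclusion and concurrency requirements, not Joung's distributed algorithm, and it does not by itself guarantee the eventual-entry (fairness) property the problem demands.

```cpp
// Group mutual exclusion as a shared monitor: a philosopher may enter the
// meeting room only if it is empty or already hosting the same forum.
#include <condition_variable>
#include <mutex>
#include <thread>

class MeetingRoom {
    std::mutex m;
    std::condition_variable cv;
    int current_forum = -1;   // -1 means the room is empty
    int occupants = 0;

public:
    void attend(int forum) {
        std::unique_lock<std::mutex> lock(m);
        // Wait until the room is empty or the same forum is already running.
        cv.wait(lock, [&] { return occupants == 0 || current_forum == forum; });
        current_forum = forum;   // first entrant starts the forum, others join it
        ++occupants;
    }

    void leave() {
        std::unique_lock<std::mutex> lock(m);
        if (--occupants == 0) {  // last philosopher out frees the room
            current_forum = -1;
            cv.notify_all();     // let philosophers of any forum compete again
        }
    }
};

int main() {
    MeetingRoom room;
    std::thread p1([&] { room.attend(0); /* talk in forum 0 */ room.leave(); });
    std::thread p2([&] { room.attend(0); /* joins the same forum */ room.leave(); });
    p1.join(); p2.join();
}
```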
Compute-intensive applications have gradually changed focus from massively parallel supercomputers to capacity as a resource obtained on demand. This is particularly true for the large-scale adoption of cloud computing and MapReduce in industry, while it has been difficult for traditional high-performance computing (HPC) usage in scientific and engineering computing to exploit this type of resource. However, with the strong trend of increasing parallelism rather than faster processors, a growing number of applications target parallelism already at the algorithm level with loosely coupled approaches based on sampling and ensembles. While these cannot trivially be formulated as MapReduce, they are highly amenable to throughput computing. There are many general and powerful frameworks, but for sampling-based algorithms in scientific computing in particular there are clear advantages to having a platform and scheduler that are highly aware of the underlying physical problem. Here, we present how these challenges are addressed in the Copernicus platform with combinations of dataflow programming and peer-to-peer techniques and networks. This allows automation of sampling-focused workflows, task generation, dependency tracking, and, not least, distribution of these tasks to a diverse set of compute resources ranging from supercomputers to clouds and distributed computing (across firewalls and fragile networks). Workflows are defined from modules using existing programs, which makes them reusable without programming requirements. The system achieves resiliency by handling node failures transparently with minimal loss of computing time due to checkpointing, and a single server can manage hundreds of thousands of cores, e.g., for computational chemistry applications. (C) 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license.
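The loosely coupled, ensemble-style execution model described above can be caricatured in a few lines; the task structure and failure handling below are illustrative assumptions, not Copernicus APIs.

```cpp
// Sketch of throughput-style ensemble execution: independent sampling tasks
// are dispatched to whatever workers are available and simply re-queued when
// a worker fails, so no task is lost.
#include <cstdio>
#include <deque>
#include <random>

struct Task { int id; int attempts = 0; };

int main() {
    std::deque<Task> ready;
    for (int i = 0; i < 8; ++i) ready.push_back({i});   // an ensemble of independent samples
    std::mt19937 rng(42);
    std::bernoulli_distribution worker_fails(0.2);      // fragile, heterogeneous resources

    while (!ready.empty()) {
        Task t = ready.front();
        ready.pop_front();
        ++t.attempts;
        if (worker_fails(rng)) {
            ready.push_back(t);              // resiliency: the failed task is re-queued
        } else {
            std::printf("task %d finished after %d attempt(s)\n", t.id, t.attempts);
        }
    }
}
```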
As the modern computing market experiences a surge in demand for efficient data-management solutions, the challenges posed by centralized storage systems become more pronounced, especially with the proliferation of Internet of Things devices. Centralized storage, although cost-effective, faces issues of scalability, performance bottlenecks, and security vulnerabilities. With decentralized storage, data are distributed across nodes, offering redundancy, data availability, and enhanced security. Unfortunately, decentralized storage introduces its own challenges, such as complex data retrieval processes, potential inconsistencies in data versions, and difficulties in ensuring data privacy and integrity in a distributed setup. Effectively managing these challenges calls for innovative techniques. In response, this paper introduces a decentralized storage system that melds cloud-native concepts with blockchain technology. The proposed design delivers enhanced scalability, data security, and privacy. When operating on a containerized edge infrastructure, this storage system provides higher data-transfer speeds than the InterPlanetary File System (IPFS). This research thus blends the advantages of cloud-native frameworks with the security mechanisms of blockchain, crafting a storage system that addresses the present-day challenges of data management in decentralized settings.
This paper describes a self-stabilizing version of an algorithm presented by A. Mazurkiewicz [Inform. Process. Lett. 61 (1997) 233-239] for enumerating nodes by local rules on an anonymous network. The result improves the reliability aspects of the original algorithm and underlines the importance of an unambiguous topology for a network. (C) 2001 Elsevier Science B.V. All rights reserved.