ISBN (Print): 9781479952007
Motor current signature analysis (MCSA) is a modern approach to fault diagnosis and classification for induction motors. Many studies have reported successful implementations of MCSA in laboratory settings, whereas the method has been less successful in real industrial situations due to the propagation of neighboring faults and unwanted noise signals. This paper investigates the correlation between different observations of events in order to provide a more accurate estimate of the behavior of electrical motors at a given site. An analytical framework has been implemented to correlate and classify independent fault observations, diagnose the fault type, and identify the origin of fault symptoms. The fault diagnosis algorithm has two layers. First, the outputs of all sensors are processed to generate fault indicators. These fault indicators are then classified using an artificial neural network. A typical industrial site is taken as a case study and simulated to evaluate the concept of distributed fault analysis.
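The two-layer structure described above (sensor outputs → fault indicators → classifier) can be sketched minimally. Everything here is an illustrative assumption — the indicator normalization, the thresholds, and the fault labels are invented for the example, and a simple dominant-indicator rule stands in for the paper's artificial neural network:

```python
# Hedged sketch of a two-layer fault-diagnosis pipeline.
# Layer 1: raw sideband amplitudes (dB) -> normalized fault indicators.
# Layer 2: indicator vector -> fault class (a toy rule standing in for the ANN).
# Thresholds, noise floor, and class names are illustrative assumptions.

def fault_indicators(sideband_db, noise_floor_db=-60.0):
    """Layer 1: map sideband amplitudes (dB) into indicators in [0, 1]."""
    return [max(0.0, min(1.0, (a - noise_floor_db) / abs(noise_floor_db)))
            for a in sideband_db]

def classify(indicators, threshold=0.5):
    """Layer 2 stand-in for the ANN: pick the dominant indicator if strong enough."""
    labels = ["broken_rotor_bar", "bearing_fault", "eccentricity"]
    best = max(range(len(indicators)), key=lambda i: indicators[i])
    return labels[best] if indicators[best] >= threshold else "healthy"

print(classify(fault_indicators([-20.0, -55.0, -58.0])))  # → broken_rotor_bar
```

In the paper's scheme a trained network replaces `classify`, letting the site-level framework correlate indicators from many distributed sensors rather than one.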
ISBN (Print): 0769523129
Network processors (NPs) are usually designed to exploit packet-level parallelism, aggregating system throughput across multiple CPU cores. A serious problem in such systems is packet disordering (PD), which may greatly degrade network performance. To address the PD problem, this paper puts forward a practical reordering mechanism. Unlike traditional reordering methods such as those in ATM networks, it uses a centrally controlled, chain-based mechanism to implement packet reordering at flow granularity. We verify the design in FPGA and carry out a series of experiments, the results of which show that system throughput is strongly influenced by traffic patterns. We also demonstrate that reordering at flow granularity is essential in NPs, increasing system throughput substantially compared to the conventional method with global granularity.
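The key idea — releasing packets in order within a flow while letting different flows proceed independently — can be modeled in a few lines. This is a software sketch of the concept only, not the paper's central-controlled chain-based FPGA design; the class and method names are assumptions:

```python
from collections import defaultdict
import heapq

class FlowReorderer:
    """Sketch of reordering at flow granularity: packets within one flow leave
    in sequence order, but an out-of-order packet in flow A never blocks flow B
    (which is exactly what global-granularity reordering would do)."""
    def __init__(self):
        self.next_seq = defaultdict(int)   # next expected sequence number per flow
        self.pending = defaultdict(list)   # per-flow min-heap of held packets

    def receive(self, flow, seq, payload):
        """Accept one packet; return the payloads released in order by this arrival."""
        heapq.heappush(self.pending[flow], (seq, payload))
        released = []
        while self.pending[flow] and self.pending[flow][0][0] == self.next_seq[flow]:
            _, p = heapq.heappop(self.pending[flow])
            released.append(p)
            self.next_seq[flow] += 1
        return released

r = FlowReorderer()
print(r.receive("A", 1, "a1"))  # out of order: held → []
print(r.receive("B", 0, "b0"))  # flow B is not blocked by flow A → ['b0']
print(r.receive("A", 0, "a0"))  # gap filled: releases both → ['a0', 'a1']
```

The second call is the point of flow granularity: under global granularity, `b0` would have waited behind the missing `a0`.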
This paper presents the design and implementation of a grid-computing environment for mining sequential patterns. An Apriori-like algorithm for mining sequential patterns is deployed in the proposed environment. Although the Apriori-like algorithm does not offer the highest performance compared with alternatives, its loosely coupled processes make it easier to realize as distributed processing in a grid-computing environment. Two types of grids are designed in the proposed environment: the computing grid and the data grid. All grids are installed with full functionality, each wrapped by the Globus toolkit. Grid services are invoked by users or other grids and respond to the invoking side. Ten computers serve as grid nodes, each equipped with different hardware components and distributed across two campuses. The experimental results show that the proposed grid-computing environment provides a flexible and efficient platform for mining sequential patterns from large datasets.
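The level-wise, loosely coupled structure that makes an Apriori-like algorithm convenient to distribute can be sketched as follows. This is a generic GSP-style miner written for illustration, not the paper's implementation; in the grid setting, each node would take a share of the per-level database scan:

```python
def is_subseq(pattern, seq):
    """True if pattern occurs as an order-preserving subsequence of seq."""
    it = iter(seq)
    return all(item in it for item in pattern)

def apriori_sequences(db, min_support):
    """Level-wise sketch: extend frequent patterns one item at a time, then
    count support with a full database scan per level. The independent
    support counts are the loosely coupled work units a grid can farm out."""
    items = sorted({x for s in db for x in s})
    frequent, level = [], [(i,) for i in items]
    while level:
        counted = [(p, sum(is_subseq(p, s) for s in db)) for p in level]
        survivors = [p for p, c in counted if c >= min_support]
        frequent += survivors
        level = [p + (i,) for p in survivors for i in items]  # candidate generation
    return frequent

db = [("a", "b", "c"), ("a", "c"), ("a", "b")]
print(apriori_sequences(db, 2))  # → [('a',), ('b',), ('c',), ('a', 'b'), ('a', 'c')]
```

Because each candidate's support count only reads the (replicated or partitioned) database, the counting step parallelizes with no coordination beyond merging per-node counts.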
ISBN (Digital): 9798350387117
ISBN (Print): 9798350387124
Benefiting from the cutting-edge supercomputers that support extremely large-scale scientific simulations, climate research has advanced significantly over the past decades. However, new critical challenges have arisen regarding efficiently storing and transferring large-scale climate data among distributed repositories and databases for post hoc analysis. In this paper, we develop CliZ, an efficient online error-controlled lossy compression method with optimized data prediction and encoding for climate datasets across various climate models. On the one hand, we explore how to take advantage of particular properties of climate datasets (such as mask-map information, dimension permutation/fusion, and data periodicity patterns) to improve data prediction accuracy. On the other hand, CliZ features a novel multi-Huffman encoding method, which significantly improves encoding efficiency and therefore compression ratios. We evaluated CliZ against many other state-of-the-art error-controlled lossy compressors (including SZ3, ZFP, SPERR, and QoZ) on multiple real-world climate datasets from different models. Experiments show that CliZ outperforms the second-best compressor (SZ3, SPERR, or QoZ1.1) on climate datasets by 20%-200% in compression ratio. CliZ can also reduce the data transfer cost between two remote Globus endpoints by 32%-38%.
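The prediction-plus-quantization core that error-controlled compressors of this family build on can be sketched briefly. The 1-D previous-value predictor and single code table below are simplifying assumptions for illustration — CliZ's actual predictor exploits climate-specific properties (masks, periodicity), and its multi-Huffman stage encodes the quantization codes with several tables rather than one:

```python
def compress(data, eb):
    """Error-bounded sketch: previous-value prediction + uniform quantization.
    Guarantees each decoded value differs from the original by at most eb.
    The integer codes produced here are what a (multi-)Huffman stage encodes."""
    codes, prev = [], 0.0
    for x in data:
        q = round((x - prev) / (2 * eb))   # quantized prediction error
        codes.append(q)
        prev = prev + q * 2 * eb           # mirror the decoder's reconstruction
    return codes

def decompress(codes, eb):
    out, prev = [], 0.0
    for q in codes:
        prev = prev + q * 2 * eb
        out.append(prev)
    return out

data = [0.1, 0.15, 0.3, 0.28]
eb = 0.05
rec = decompress(compress(data, eb), eb)
assert all(abs(a - b) <= eb for a, b in zip(data, rec))  # error bound holds
```

A better predictor makes the code distribution more sharply peaked around zero, which is precisely what makes entropy coding (Huffman) effective; splitting codes across multiple tables by region sharpens those distributions further.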
We present a simple but general model for feature-based information processing with selective attention. We model feature extraction as projections onto frames of subspaces, which accounts for redundancies in the representations of individual features as well as between features. To manage limited resources, we use feedback attentional signals to dynamically allocate system resources according to the observed events. In our model, attention maximizes the average information retained about all events, weighted by their relative priorities. We illustrate the model with a simple system under a total bit constraint and discuss how the organization of the feature extraction affects the optimal bit allocation.
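The total-bit-constraint illustration can be made concrete with a toy greedy allocator. The diminishing-returns curve p_i · (1 − 2^(−b_i)) for information retained about feature i is an assumed model for this sketch, not the paper's; under any such concave curve, repeatedly giving the next bit to the largest marginal gain yields the optimal integer allocation:

```python
import heapq

def allocate_bits(priorities, total_bits):
    """Greedy bit allocation sketch: retained information about feature i is
    modeled as p_i * (1 - 2**-b_i) (assumed toy curve). Each step awards the
    next bit to the feature with the largest marginal gain p_i * 2**-(b_i+1)."""
    bits = [0] * len(priorities)
    # max-heap (negated) over the marginal gain of each feature's next bit
    heap = [(-p / 2.0, i) for i, p in enumerate(priorities)]
    heapq.heapify(heap)
    for _ in range(total_bits):
        gain, i = heapq.heappop(heap)
        bits[i] += 1
        heapq.heappush(heap, (gain / 2.0, i))  # next bit gains half as much
    return bits

print(allocate_bits([0.6, 0.3, 0.1], 4))  # → [3, 1, 0]
```

Note how the low-priority feature receives nothing at this budget — attention, in this toy form, is just priority-weighted marginal analysis.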
The need for open computer-supported cooperative work (CSCW) systems is examined. Their requirements and impact on distributed systems are discussed. The Mocca project for developing an environment which will support open CSCW systems is described. It is shown that the open distributed processing (ODP) experience and approach can be a valuable tool in realizing open CSCW systems. Similarly, the ODP standardization effort can benefit both from the experience of CSCW application developers and from the requirements which CSCW systems place on their distributed platform.
Calculation of many-body correlation functions is one of the critical kernels in many scientific computing areas, especially in Lattice Quantum Chromodynamics (Lattice QCD). It is formalized as a sum of a large number of contraction terms, each of which can be represented by a graph consisting of vertices describing quarks inside a hadron node and edges designating quark propagations at specific time intervals. Due to its computation- and memory-intensive nature, real-world physics systems (e.g., multi-meson or multi-baryon systems) explored by Lattice QCD prefer to leverage multiple GPUs. Unlike general graph processing, many-body correlation function calculations show two specific features: a large number of computation-/data-intensive kernels, and frequently repeated appearances of original and intermediate data. The former results in expensive memory operations such as tensor movements and evictions. The latter offers data reuse opportunities that mitigate the data-intensive nature of the calculations. However, existing graph-based multi-GPU schedulers cannot capture these data-centric features, resulting in sub-optimal performance for many-body correlation function calculations. To address this issue, this paper presents a multi-GPU scheduling framework, MICCO, to accelerate contractions for correlation functions, particularly by taking the data dimension (e.g., data reuse and data eviction) into account. This work first performs a comprehensive study on the interplay of data reuse and load balance, and designs two new concepts, the local reuse pattern and the reuse bound, to study the opportunity of achieving the optimal trade-off between them. Based on this study, MICCO proposes a heuristic scheduling algorithm and a machine-learning-based regression model to generate the optimal setting of reuse bounds. MICCO is integrated into a real-world Lattice QCD system, Redstar, running for the first time on multiple GPUs.
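The reuse-versus-balance trade-off that the reuse bound controls can be sketched with a toy greedy placer. This is an illustrative model only — the function, task format, and tie-breaking are assumptions, and MICCO's actual heuristic and its regression-tuned reuse bounds are considerably more involved:

```python
def schedule(tasks, n_gpus, reuse_bound):
    """Greedy sketch of reuse-aware placement: send each contraction to the GPU
    already holding the most of its input tensors (maximizing reuse), unless
    that GPU is ahead of the least-loaded one by more than `reuse_bound` tasks,
    in which case fall back to pure load balancing. `reuse_bound` is the knob
    trading data reuse against load balance (MICCO tunes it with a model)."""
    load = [0] * n_gpus
    resident = [set() for _ in range(n_gpus)]   # tensors cached on each GPU
    placement = {}
    for name, inputs in tasks:
        best = max(range(n_gpus),
                   key=lambda g: (len(resident[g] & inputs), -load[g]))
        if load[best] - min(load) > reuse_bound:   # too unbalanced: rebalance
            best = min(range(n_gpus), key=lambda g: load[g])
        placement[name] = best
        load[best] += 1
        resident[best] |= inputs
    return placement

tasks = [("t1", {"q1", "q2"}), ("t2", {"q1", "q3"}), ("t3", {"q4", "q5"})]
print(schedule(tasks, 2, reuse_bound=1))  # t2 follows t1 to reuse q1
```

With `reuse_bound=0` the same input degrades to round-robin-like balancing, losing the reuse of tensor `q1`; raising the bound trades balance for fewer tensor movements.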
One type of neurocomputer recently proposed, the folded-array digital neural emulator using tree accumulation and communication structures, incorporates a new concept in representing an artificial digital neuron. Beginning from the parallel distributed processing (PDP) neuron model, the folded-array digital neural emulator is briefly described. Then, by applying the folded-array concepts to the PDP model, the folded axon/dendrite tree neuron is created, which, in general form, represents a new model for the neural paradigm.
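The tree-accumulation idea underlying the emulator can be illustrated in software. This is a toy functional model only — the function names and threshold activation are assumptions, and it says nothing about the hardware folding itself:

```python
def tree_sum(vals):
    """Pairwise (tree) accumulation: log-depth adds instead of a sequential
    chain, the structure the emulator folds onto its array."""
    while len(vals) > 1:
        vals = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)] + \
               ([vals[-1]] if len(vals) % 2 else [])
    return vals[0]

def pdp_neuron(weights, inputs, theta=0.0):
    """PDP neuron sketch: threshold unit over a tree-accumulated weighted sum."""
    s = tree_sum([w * x for w, x in zip(weights, inputs)])
    return 1 if s >= theta else 0

print(pdp_neuron([0.5, -0.2, 0.8], [1, 1, 1]))  # 1.1 ≥ 0 → 1
```

In hardware, the dendrite tree performs `tree_sum` over incoming synaptic products, and folding maps that tree onto a compact physical array.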
Complex applications today involve multiple processes, multiple threads of control, distributed processing, thread pools, event handling, and messages. The behaviors and misbehaviors of these nondeterministic, message-based systems are difficult to capture and understand. The typical approach is to trace the behavior of the systems and track how the different incoming messages are processed throughout the system. While messages between processes can be captured automatically at the network or library level, tracing the message processing within a system, which is often more complex and error-prone, requires the programmer to manually instrument the code by identifying the different message handlers, thread states, processing stages, and shared queues accurately and completely. In this paper we show how dynamic analysis can be used to automatically identify the transactions, stages, and shared queues in Java programs as a prelude to trace-based comprehension.
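One kind of heuristic such a dynamic analysis might apply can be sketched from a trace. The event format and the specific rule below are assumptions for illustration, not the paper's algorithm: an object looks like a shared queue when some thread puts into it and a different thread takes from it — the hand-off that separates processing stages:

```python
def find_shared_queues(trace):
    """Sketch of a trace-based heuristic: flag objects where the set of taking
    threads is not covered by the putting threads, i.e., a cross-thread
    hand-off occurred. Trace events are assumed (thread, op, object) triples."""
    puts, takes = {}, {}
    for thread, op, obj in trace:
        (puts if op == "put" else takes).setdefault(obj, set()).add(thread)
    return sorted(obj for obj in puts
                  if takes.get(obj, set()) - puts[obj])

trace = [("net", "put", "inbox"), ("worker", "take", "inbox"),
         ("worker", "put", "log"), ("worker", "take", "log")]
print(find_shared_queues(trace))  # → ['inbox']
```

Here `log` is used by only one thread and is correctly excluded, while `inbox` carries the net-to-worker hand-off that marks a stage boundary.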