ISBN:
(Print) 9783037856949
Convolutional encoders call for higher computing speed and lower power dissipation. In this paper, we present a parallel method for convolutional encoding in SMIC 0.35 μm CMOS technology; the hardware design and VLSI implementation of this algorithm are also presented. With this method, a parallel circuit structure can be designed easily, exhibiting higher computing speed and lower power dissipation than the traditional serial shift-register structure for convolutional encoding.
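The serial/parallel trade-off described above can be illustrated with a small sketch. The generator polynomials below (the classic (7, 5) octal pair, constraint length 3) are illustrative choices, not taken from the paper; the point is that each output bit is an independent XOR of input bits, so a block of outputs can be computed concurrently in hardware instead of one bit per shift-register step.

```python
# Hedged sketch: a rate-1/2 convolutional encoder. Generators (7, 5) octal
# are an assumption for illustration, not the paper's design.

def conv_encode_serial(bits):
    """Traditional serial shift-register encoding: one input bit per step."""
    s1 = s2 = 0  # shift-register state, zero-initialized
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # generator G1 = 111 (octal 7)
        out.append(b ^ s2)       # generator G2 = 101 (octal 5)
        s1, s2 = b, s1
    return out

def conv_encode_parallel(bits):
    """Block-parallel form: every output bit depends only on a fixed window
    of input bits, so all outputs can be computed concurrently."""
    padded = [0, 0] + list(bits)  # implicit zero initial state
    out = []
    for i in range(len(bits)):
        b, s1, s2 = padded[i + 2], padded[i + 1], padded[i]
        out.append(b ^ s1 ^ s2)
        out.append(b ^ s2)
    return out

msg = [1, 0, 1, 1]
assert conv_encode_serial(msg) == conv_encode_parallel(msg)
```

In hardware, the parallel form unrolls into independent XOR trees, which is what removes the serial dependency of the shift-register structure.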
ISBN:
(Print) 9781479913497; 9781479913503
We design a new interconnection network topology and a custom routing algorithm that target new challenges posed by recent advances in massively parallel computing and large-scale data centers. We follow the design principles of Distributed Shortcut Networks (DSN) [1], which construct non-random topologies by creating long-range shortcuts inspired by observations of small-world networks. As a result, our new DSN-alpha networks perform significantly better than the basic DSN in terms of communication latency, while providing surprisingly good load balance that keeps the network robust against bursts of traffic demand, a situation in which topology-agnostic deadlock-free routing (e.g., up*/down*) suffers considerably.
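The effect of deterministic long-range shortcuts can be sketched with a toy construction. The power-of-two shortcut rule below is my own illustrative stand-in, not the actual DSN or DSN-alpha construction; it only shows how non-random shortcuts shrink path lengths relative to a plain ring.

```python
# Hedged sketch: a ring augmented with deterministic power-of-two shortcuts,
# illustrating (not reproducing) the distributed-shortcut idea.

from collections import deque

def build_shortcut_ring(n):
    adj = {i: set() for i in range(n)}
    for i in range(n):
        adj[i].add((i + 1) % n)          # basic ring links
        adj[(i + 1) % n].add(i)
        d = 2
        while d < n // 2:
            adj[i].add((i + d) % n)      # deterministic long-range shortcuts
            adj[(i + d) % n].add(i)
            d *= 2
    return adj

def hops(adj, src, dst):
    """Breadth-first search gives the shortest path length in hops."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)

plain_ring = {i: {(i - 1) % 64, (i + 1) % 64} for i in range(64)}
sc = build_shortcut_ring(64)
print(hops(plain_ring, 0, 32), hops(sc, 0, 32))  # shortcuts cut the diameter
```

Because the shortcut pattern is deterministic rather than random, every node can compute a greedy next hop locally, which is the property the DSN family exploits for routing.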
ISBN:
(Print) 9781665466431
Cloud computing is an Internet-based network technology that has driven rapid advances in communication technology by serving clients with varied requirements through online computing resources, offered as hardware and software applications alongside software development, testing, and platform tools. Large-scale heterogeneous distributed computing environments promise access to a huge quantity of computing resources at comparatively low cost. To ease software development and deployment on such complex environments, high-level parallel programming languages exist that must be supported by sophisticated operating systems. The anticipated uptake of cloud computing brings numerous advantages for consumers in terms of cost and flexibility. Building on well-established research in Internet services, networks, utility computing, virtualization, and related areas, Service-Oriented Architectures and the Internet of Services (IoS) raise a wide range of technological issues, including parallel computing, load balancing, high availability, and scalability. Effective load-balancing methods are essential to solving these issues. Since the size and complexity of such systems make it impossible to concentrate job execution on a few select servers, a parallel distributed solution is required. We propose the Adaptive Task Load Model (ATLM) for balancing the workload, and on top of it we develop an adaptive parallel distributed computing model (ADPM). ADPM employs a more flexible synchronization approach to reduce the time consumed by synchronous operations while still maintaining the model's integrity, and it applies the ATLM load-balancing technique, which solves the straggler problem caused by the performance disparity between nodes, to ensure model correctness. The results indicate that
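The straggler mitigation described above rests on a simple principle: assign work in proportion to each node's measured speed. The abstract does not specify ATLM's actual algorithm, so the sketch below is only a generic speed-weighted assignment illustrating that principle; the function name and constants are mine.

```python
# Hedged sketch of speed-weighted task assignment, the generic idea behind
# straggler mitigation: faster nodes receive proportionally more work.
# This is NOT the paper's ATLM algorithm, whose details are not given.

def balance_load(n_tasks, speeds):
    """Split n_tasks across workers in proportion to measured speed."""
    total = sum(speeds)
    shares = [n_tasks * s / total for s in speeds]
    counts = [int(x) for x in shares]
    # hand out the rounding remainder to workers with the largest fractions
    leftover = n_tasks - sum(counts)
    order = sorted(range(len(speeds)),
                   key=lambda i: shares[i] - counts[i], reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    return counts

# a node measured 2x slower receives roughly half the tasks
print(balance_load(100, [1.0, 2.0, 2.0]))  # -> [20, 40, 40]
```

An adaptive scheme would re-measure speeds between synchronization rounds and recompute the split, so a node that slows down sheds work before it becomes the straggler.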
The proteomics data analysis pipeline based on the shotgun method requires efficient data processing methods. The parallel algorithm of mass spectrometry database search faces the problems of rapidly expanding databas...
ISBN:
(Print) 0769517307; 0769517315
DIMMnet-1 is a high-performance network interface for PC clusters that can be plugged directly into the DIMM slot of a PC. By using both low-latency AOTF (Atomic On-The-Fly) sending and high-bandwidth BOTF (Block On-The-Fly) sending, it can overcome the overhead caused by standard I/O such as the PCI bus. Two types of DIMMnet-1 prototype boards (providing optical and electrical network interfaces), each containing a Martini network interface controller chip, are currently available. They can be plugged into a 100 MHz DIMM slot of a PC with a Pentium III, Pentium 4, or Athlon processor. The round-trip time for AOTF on this incompletely tuned DIMMnet-1 is 7.5 times faster than Myrinet 2000. The barrier synchronization time for AOTF is 4 times faster than that of an SR8000 supercomputer. The inter-two-node floating-point sum operation time is 1903 ns. This shows that DIMMnet-1 holds promise for applications in which scalable performance is difficult to achieve with traditional approaches because of frequent data exchange.
ISBN:
(Print) 9781479930807
Cloud computing is a novel paradigm for large-scale distributed computing and parallel processing. It provides computing as a utility service on a pay-per-use basis. The performance and efficiency of cloud computing services always depend on the performance of the user tasks submitted to the cloud system. Scheduling of user tasks plays a significant role in improving the performance of cloud services, and task scheduling is one of the main types of scheduling performed. This paper presents a detailed study of the various task scheduling methods that exist for the cloud environment, along with a brief analysis of the scheduling parameters considered in these methods.
ISBN:
(Print) 9781509061822
Spiking Neural Network based brain-inspired computing paradigms are becoming increasingly popular tools for various cognitive tasks. The sparse, event-driven processing capability enabled by such networks is potentially appealing for implementing low-power neural computing platforms. However, the parallel and memory-intensive computations involved in such algorithms are in complete contrast to the sequential fetch, decode, execute cycles of conventional von Neumann processors. Recent proposals have investigated the design of spintronic "in-memory" crossbar-based computing architectures driving "spin neurons" that can potentially alleviate the memory-access bottleneck of CMOS-based systems and simultaneously offer the prospect of low-power inner-product computations. In this article, we perform a rigorous system-level simulation study of such All-Spin Spiking Neural Networks on a benchmark suite of six recognition problems ranging in network complexity from 10k to 7.4M synapses and 195 to 9.2k neurons. System-level simulations indicate that the proposed spintronic architecture can potentially achieve ~1292x energy efficiency and ~235x speedup on average over the benchmark suite compared with an optimized CMOS implementation at the 45 nm technology node.
Traditional TCP/IP network model needs an end-to-end connection for data transmission. With this traditional network, it is difficult to communicate in those areas where intermittent connectivity exists. Delay Toleran...
ISBN:
(Print) 9783642032745
The Parallel Hybrid Inverse Neural Network Coordinate Approximations algorithm (PHINNCA) for the solution of large-scale global optimization problems is proposed in this work. The algorithm maps a trial value of an objective function into values of the objective function's arguments, and decreases the trial value step by step to find a global minimum. Dual generalized regression neural networks are used to perform the mapping. The algorithm is intended for cluster systems, and the search is carried out concurrently: multiple processes share information about their progress and apply a simulated annealing procedure to it.
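The inverse-mapping idea can be sketched on a toy objective. Below, a simple kernel-weighted regression stands in for the paper's generalized regression neural networks, and the fixed 0.8 decay stands in for its trial-value schedule; the objective, constants, and function names are all illustrative assumptions, and the parallel and annealing components are omitted.

```python
# Hedged sketch of the inverse-mapping principle: learn x = g(f_trial) from
# sampled (x, f(x)) pairs, then lower the trial value step by step and query
# the inverse model for candidate arguments. Not the actual PHINNCA.

import math
import random

def f(x):  # toy objective with its global minimum at x = 2
    return (x - 2.0) ** 2

def inverse_estimate(samples, f_trial, sigma=0.5):
    """GRNN-style kernel regression from objective value back to argument."""
    num = den = 0.0
    for x, fx in samples:
        w = math.exp(-((fx - f_trial) ** 2) / (2 * sigma ** 2))
        num += w * x
        den += w
    return num / den if den else 0.0

random.seed(0)
samples = [(x, f(x)) for x in (random.uniform(-5, 5) for _ in range(200))]
trial = min(fx for _, fx in samples)
best_x = min(samples, key=lambda p: p[1])[0]
for _ in range(50):
    trial *= 0.8                       # decrease the trial value step by step
    x = inverse_estimate(samples, trial)
    samples.append((x, f(x)))          # refine the model with the new point
    if f(x) < f(best_x):
        best_x = x
print(best_x)
```

In the cluster setting the abstract describes, each process would run such a loop on its own sample set and periodically exchange the best points found, with simulated annealing deciding which shared points to accept.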
Efficient synchronization is one of the basic requirements of effective parallel computing. A key operation of the POSIX Thread standard (PThread) is barrier synchronization, where multiple threads block on a user-spe...