ISBN: 1932415262 (print)
Distributed processing techniques are presented, which distribute a compute-intensive neural network application on two computer systems: one, a heterogeneous network of workstations, and the second, a homogeneous computer cluster. The results obtained from executing the software on both systems showed a considerable reduction in execution time compared with using one computer. For the heterogeneous network with 18 computers, the execution time was reduced 17.7 times, from 53.75 minutes to 3.03 minutes, compared to the slowest machine. For the homogeneous cluster with eight compute nodes, the time reduction was 7.74 times, from 26.3 minutes with one CPU to 3.4 minutes with eight. The results show that for this application distributed processing provides excellent speedup in comparison to a single computer.
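The reported speedups are simply the ratio of single-machine to parallel execution time; the short Python check below reproduces the two figures quoted above (a worked illustration of the arithmetic, not code from the paper).

    # Speedup = serial time / parallel time, using the timings quoted in the abstract.
    def speedup(t_serial_min, t_parallel_min):
        return t_serial_min / t_parallel_min

    # Heterogeneous network of 18 workstations, relative to the slowest machine.
    print(round(speedup(53.75, 3.03), 2))   # ~17.74x
    # Homogeneous cluster: one CPU vs. eight compute nodes.
    print(round(speedup(26.3, 3.4), 2))     # ~7.74x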
ISBN: 9781467387767 (print)
Accumulated cost surfaces (ACSs) are a tool for spatial modelling used in a number of fields. Some relevant applications, especially in the areas of multi-criteria evaluation and spatial optimization, require the availability of several ACSs on the same raster, which may entail a significant computational cost. In this paper, we discuss some techniques available in the literature for accelerating ACS computation using graphics processing units (GPUs) and CUDA. We also illustrate in detail a new CUDA algorithm suitable for the computation of multiple ACSs. Moreover, we present some preliminary results on a test case, including an experimental comparison against a fast sequential implementation running on a CPU.
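As a point of reference for what an ACS computation involves, the following is a minimal sequential Python sketch that accumulates cost from a set of source cells with a Dijkstra-style relaxation over an 8-connected raster, taking the cost of a move as the mean of the two cell costs times the move length. This is only an assumed CPU baseline for illustration; it is not the paper's CUDA algorithm.

    import heapq
    import math

    def accumulated_cost_surface(cost, sources):
        # cost: 2D list of per-cell traversal costs; sources: list of (row, col) cells.
        rows, cols = len(cost), len(cost[0])
        acs = [[math.inf] * cols for _ in range(rows)]
        heap = []
        for r, c in sources:
            acs[r][c] = 0.0
            heapq.heappush(heap, (0.0, r, c))
        moves = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
        while heap:
            d, r, c = heapq.heappop(heap)
            if d > acs[r][c]:
                continue  # stale queue entry
            for dr, dc in moves:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    # Move cost: mean of the two cell costs times the move length.
                    step = math.hypot(dr, dc) * (cost[r][c] + cost[nr][nc]) / 2.0
                    nd = d + step
                    if nd < acs[nr][nc]:
                        acs[nr][nc] = nd
                        heapq.heappush(heap, (nd, nr, nc))
        return acs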
ISBN: 9781728165820 (digital); 9781728165820 (print)
The main data-driven techniques for detecting cybersecurity attacks are based on the analysis of network traffic data and/or of application/system logs (stored in a host or in some other kind of device). A wide range of machine-learning techniques (and possible alternative configurations of them) have been proposed in the literature for this purpose, but none of them has been shown to consistently outperform the others across different datasets. In order to ensure better accuracy and stability, the ensemble paradigm can be exploited as an effective solution for combining such techniques. However, as attack detection problems are hard to cope with and usually entail the analysis of large and fast streams of data, different types of ensembles (and of base algorithms composing the ensemble) should be experimented with, exploiting distributed architectures to reduce the high execution times needed to run them. In order to handle all these issues, a P2P environment for validating ensemble-based approaches in the cybersecurity domain is proposed in this paper. Two case studies are analyzed using this framework, concerning the detection of intrusions in network-traffic data and of deviant process instances. Preliminary scalability results demonstrate that the framework is a viable solution for these challenging kinds of problems.
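To make the ensemble idea concrete, here is a hedged Python sketch in which several base detectors score the same traffic record and their binary votes are combined by simple majority. The detector heuristics, field names, and the voting rule are illustrative assumptions, not components of the proposed framework.

    from collections import Counter

    def ensemble_predict(record, detectors, threshold=0.5):
        # Each detector returns a score in [0, 1]; scores are thresholded into votes.
        votes = [1 if d(record) >= threshold else 0 for d in detectors]
        return Counter(votes).most_common(1)[0][0]  # majority label: 1 = attack

    # Toy scoring functions standing in for trained base models.
    detectors = [
        lambda r: r["bytes"] / 1e6,                   # volume-based heuristic
        lambda r: 1.0 if r["port"] == 23 else 0.0,    # suspicious-port heuristic
        lambda r: r["failed_logins"] / 10.0,          # brute-force heuristic
    ]
    print(ensemble_predict({"bytes": 2e6, "port": 23, "failed_logins": 1}, detectors))  # 1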
ISBN: 1892512416 (print)
A Dedicated Distributed Memory Server, or DDMS, is a distributed global memory architecture that inherently supports distributed shared memory, remote memory paging, and temporary file systems. It also provides a solution to the problem of node memory insufficiency in networks of workstations. This paper investigates the effects of attaching stand-alone memory servers directly to the network. An implementation of a DDMS is used to analyze server performance for distributed shared memory applications and applications with exceptionally large memory requirements. We examine application performance behavior before and after the network reaches bandwidth saturation.
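As a rough illustration of the remote-memory-paging idea, the sketch below lets a node with a small local page budget evict least-recently-used pages to a stand-alone memory server and fetch them back on demand. The classes, page size, and budget are assumptions for illustration, not the DDMS implementation described in the paper.

    from collections import OrderedDict

    class MemoryServer:
        """Stand-in for a dedicated memory server reachable over the network."""
        def __init__(self):
            self.store = {}
        def put(self, page_id, data):
            self.store[page_id] = data
        def get(self, page_id):
            return self.store.pop(page_id)

    class PagingClient:
        def __init__(self, server, max_resident_pages=4):
            self.server = server
            self.resident = OrderedDict()        # page_id -> data, in LRU order
            self.max_resident = max_resident_pages
        def access(self, page_id):
            if page_id in self.resident:
                self.resident.move_to_end(page_id)          # hit: refresh LRU position
            else:
                data = (self.server.get(page_id)
                        if page_id in self.server.store else bytes(4096))
                self.resident[page_id] = data               # miss: fetch from server
                if len(self.resident) > self.max_resident:
                    victim, victim_data = self.resident.popitem(last=False)
                    self.server.put(victim, victim_data)    # evict to remote memory
            return self.resident[page_id]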
ISBN: 1892512416 (print)
A new kind of LR parallel languages/grammars, suitable for parallel parsing, is presented. Such grammars utilize lookback information from a lookback string related to the limited parsing history, and they simultaneously use a lookahead string. The basic properties of LRP(q, k) grammars, namely their LR nature, their suitability for deterministic parallel parsing, and the lengths q of the lookback string and k of the lookahead string, are reflected in the name of this new subclass of LR grammars. A parallel parsing simulation allows new ideas to be tested on today's personal computers. Since the implementation is written in the Java programming language, it runs on various operating systems. The user interface and output formats are user friendly and suitable for publishing.
A new computational model, called a linear array with a reconfigurable pipelined bus system (LARPBS), has been proposed as a feasible and efficient parallel computational model based on current optical technologies. In this paper, we further study this model by proposing several basic data movement operations on it. These operations include broadcast, multicast, compression, split, binary prefix sum, and maximum finding. Using these basic operations, several image processing algorithms are also presented for the model. We show that all the algorithms can be executed efficiently on the LARPBS model. It is our hope that the LARPBS model can be used as a new and practical parallel computational model for designing parallel algorithms. (C) 1998 Elsevier Science Inc. All rights reserved.
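For one of the listed operations, binary prefix sum, a sequential Python simulation of a logarithmic-step doubling scheme is sketched below; each step stands in for one round of communication on the linear array. It is an illustration of the operation, not the LARPBS algorithm itself.

    def binary_prefix_sum(bits):
        # Inclusive prefix sum of a 0/1 sequence via Hillis-Steele doubling:
        # in step s, processor i reads the value held by processor i - s.
        n = len(bits)
        prefix = list(bits)
        step = 1
        while step < n:
            prefix = [prefix[i] + (prefix[i - step] if i >= step else 0)
                      for i in range(n)]
            step *= 2
        return prefix  # prefix[i] = number of 1s among bits[0..i]

    print(binary_prefix_sum([1, 0, 1, 1, 0, 1]))  # [1, 1, 2, 3, 3, 4]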
ISBN: 1892512416 (print)
A variety of bounds on running time are proven in this paper for the problem of solving triangular linear systems on a k-dimensional torus. The bounds are applicable to solvers that use the substitution method. Both upper and lower bounds are provided in order to determine the overall parallel complexity of the problem.
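For reference, the substitution method on which the bounds rest is shown below as a plain sequential forward substitution for a lower triangular system Lx = b (a minimal sketch; the mapping onto a k-dimensional torus is not modelled here).

    def forward_substitution(L, b):
        # Solve L x = b where L is lower triangular with nonzero diagonal.
        n = len(b)
        x = [0.0] * n
        for i in range(n):
            s = sum(L[i][j] * x[j] for j in range(i))
            x[i] = (b[i] - s) / L[i][i]
        return x

    # Example: L = [[2, 0], [1, 3]], b = [4, 7]  ->  x = [2.0, 5/3]
    print(forward_substitution([[2.0, 0.0], [1.0, 3.0]], [4.0, 7.0]))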
ISBN: 1892512459 (print)
In this paper, we present a novel buffer management algorithm for multimedia streaming workloads. We carefully examine workload traces obtained from several streaming servers in service. The analysis results show that most users exhibit non-sequential access patterns through VCR-like operations such as jump backward and jump forward. Moreover, short jumps are shown to be common. We exploit these workload characteristics of VCR-like operation and develop a buffer caching algorithm called the Virtual Interval Caching scheme. Experimental results show that the proposed buffer management scheme yields better performance than legacy schemes.
Author: Kim, K., Ajou Univ, Grad Sch Lib & Informat Sci, Suwon 442749, South Korea
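As background for the Virtual Interval Caching scheme described above, the sketch below illustrates the classical interval-caching policy it builds on: consecutive readers of the same file form intervals, and the smallest intervals are cached first so that each trailing reader is served from the buffer. The function, units, and greedy rule are illustrative assumptions, not the paper's algorithm.

    def choose_cached_intervals(stream_positions, buffer_blocks):
        # stream_positions: current block offsets of concurrent streams on one file.
        ordered = sorted(stream_positions)
        intervals = [(ordered[i + 1] - ordered[i], ordered[i], ordered[i + 1])
                     for i in range(len(ordered) - 1)]
        cached, used = [], 0
        for size, lead, trail in sorted(intervals):   # smallest intervals first
            if used + size <= buffer_blocks:
                cached.append((lead, trail))          # trailing stream reads from buffer
                used += size
        return cached

    print(choose_cached_intervals([10, 40, 55, 200], buffer_blocks=50))  # [(40, 55), (10, 40)]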
ISBN: 1932415262 (print)
In a packet switching network, congestion is unavoidable and affects QoS metrics of real-time traffic such as delay and packet loss. Fair queueing algorithms are well-known solutions for QoS guarantees through packet scheduling. These algorithms calculate a virtual finish time for each real-time packet and sort packets in increasing order of the virtual finish time stamp. By digitizing the virtual finish time, we devise a new efficient queueing discipline (DDQ) that eases the computational complexity of the traffic control algorithm with a relatively small error determined by the digitization resolution. Our proposed algorithm is also suitable for large-scale QoS routers where multiple processors cooperate for routing and scheduling. The DDQ algorithm is adapted to a distributed routing system. We present the router architecture, identify the busiest processor in the routing system, and use the distributed DDQ algorithm to lower the workload of that processor.
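A minimal sketch of the digitization idea, assuming a weighted-fair-queueing style finish time F = max(previous finish, virtual arrival) + length/weight that is quantised to a fixed resolution so packets can be ordered by discrete slots; the quantum value and names below are assumptions, not the DDQ specification.

    def digitized_finish_time(prev_finish, virtual_arrival, length, weight, quantum=0.5):
        # Exact virtual finish time, then rounded to the nearest quantum (digitization).
        finish = max(prev_finish, virtual_arrival) + length / weight
        return round(finish / quantum) * quantum

    # Two flows sharing a link; packets are served in increasing digitized stamp order.
    print(digitized_finish_time(prev_finish=0.0, virtual_arrival=0.0,
                                length=1500, weight=1000))   # 1.5
    print(digitized_finish_time(prev_finish=0.0, virtual_arrival=0.2,
                                length=500, weight=1000))    # 0.5 -> served first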
ISBN: 1932415262 (print)
In this study we have investigated the robustness of the memory and inter-processor communication models of VSTM. Our results indicate that the system performs well with a limited number of distributed clients using a shared memory model. However, as the number of clients increases, server tasks demand more resources while communication and DMA protocols begin to show delays and cause hang-ups. To avoid this problem, the system is being re-implemented in a distributed memory model using message passing. Although only partially implemented, the system has already demonstrated increased capacity for workload tolerance. In this paper we discuss the experimental results that point to these conclusions and discuss future implementation plans.