We study routing protocols for Delay Tolerant Networks (DTNs) devised to improve message delivery performance in natural disaster scenarios. In this paper we propose the Min-Visited protocol, which, along the transitive path to the message destination, selects the next node based on two features: (1) the most distant neighbor, and (2) the largest number of encounters with the destination node of the message. We compare our protocol with well-known protocols from the technical literature. The results show that the proposed protocol incurs a low workload overhead, with fewer than 2 hops per message, and on average 95% of the messages are successfully delivered.
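The selection rule lends itself to a simple ranking. Below is a minimal Python sketch of the next-hop choice described above; the function and variable names (choose_next_hop, distance, encounters) are illustrative assumptions, not identifiers from the paper.

    # Hypothetical sketch of the Min-Visited next-hop rule: prefer the most
    # distant neighbor, breaking ties by past encounters with the destination.
    def choose_next_hop(current, neighbors, dest, distance, encounters):
        # `distance(a, b)` and the `encounters` map keyed by (node, dest)
        # are stand-ins for whatever metrics the protocol maintains.
        return max(
            neighbors,
            key=lambda n: (distance(current, n), encounters.get((n, dest), 0)),
        )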
ISBN (digital): 9783319751788
ISBN (print): 9783319751788; 9783319751771
This paper presents NOA-AID, a network architecture targeting highly distributed systems composed of a large set of distributed stream processing devices, aimed at adaptive information indexing, aggregation and discovery in streams of data. The architecture is organized in two layers. The upper layer supports the information discovery process by providing a distributed index structure. The lower layer is mainly devoted to resource aggregation based on epidemic protocols, targeting highly distributed and dynamic scenarios and thus well suited to stream-oriented settings. We present a theoretical study of the costs of information management operations, together with an empirical validation of these findings. Finally, we present an experimental evaluation of the ability of our solution to retrieve meaningful information from streams effectively and efficiently in a highly dynamic and distributed scenario.
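The lower layer's aggregation is epidemic; purely as a generic illustration of that style of protocol (not the NOA-AID implementation itself), here is a push-pull gossip averaging round in Python, with all names assumed for the example.

    import random

    # One push-pull gossip round: every node averages its value with a
    # random neighbor, so all values converge toward the global mean.
    # `values` maps node -> float, `neighbors` maps node -> list of nodes.
    def gossip_round(values, neighbors):
        for node in list(values):
            peer = random.choice(neighbors[node])
            avg = (values[node] + values[peer]) / 2.0
            values[node] = values[peer] = avg
        return values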
ISBN (print): 9781509060580
Distributed publish/subscribe middleware ensures the necessary decoupling, expressiveness, and scalability for modern distributed applications. Unfortunately, the performance of this middleware usually degrades in highly mobile scenarios. In this paper, we tackle the problem of mobility in publish/subscribe by exploiting a predictive scheme. We investigate the adequacy of our approach using a prototype implementation evaluated across different scenarios. The experimental results show that our approach reduces the caching cost, the propagation cost and the network load, and also achieves better results in terms of overhead.
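One common way to realize such a predictive scheme is to learn broker-to-broker transition frequencies from a client's mobility history and pre-cache its subscriptions at the most likely next broker. The Python sketch below illustrates that idea under those assumptions; the class and method names are invented, and the paper's actual predictor may differ.

    from collections import Counter, defaultdict

    # Hypothetical mobility predictor: counts observed broker hand-offs and
    # predicts the most frequent successor of the current broker.
    class MobilityPredictor:
        def __init__(self):
            self.transitions = defaultdict(Counter)

        def observe(self, prev_broker, next_broker):
            self.transitions[prev_broker][next_broker] += 1

        def predict(self, current_broker):
            counts = self.transitions[current_broker]
            return counts.most_common(1)[0][0] if counts else None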
The goal of this paper is to ascertain with what accuracy the direction of the Bitcoin price in USD can be predicted. The price data is sourced from the Bitcoin Price Index. The task is achieved with varying degrees of success through the implementation of a Bayesian-optimised recurrent neural network (RNN) and a Long Short-Term Memory (LSTM) network. The LSTM achieves the highest classification accuracy of 52% and an RMSE of 8%. The popular ARIMA model for time series forecasting is implemented as a comparison to the deep learning models. As expected, the non-linear deep learning methods outperform the ARIMA forecast, which performs poorly. Finally, both deep learning models are benchmarked on a GPU and a CPU, with the training time on the GPU outperforming the CPU implementation by 67.7%.
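For concreteness, a minimal Keras version of an LSTM direction classifier of this kind might look as follows; the window length, layer sizes, and the synthetic placeholder data are assumptions, not the paper's configuration.

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    window = 30                              # days of history per sample (assumed)
    X = np.random.rand(1000, window, 1)      # placeholder price windows
    y = np.random.randint(0, 2, size=1000)   # 1 = price up next day, 0 = down

    model = Sequential([
        LSTM(32, input_shape=(window, 1)),
        Dense(1, activation="sigmoid"),      # binary up/down direction
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)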
ISBN (digital): 9781728126166
ISBN (print): 9781728126173
With the development of image acquisition and storage technology, the volume of image data has grown greatly. How to process this growing image data quickly has become a central problem in image processing. In this paper, image data are preprocessed with the K-means algorithm in a Python environment, and both the original images and the K-means-processed images are classified and trained in a convolutional neural network. The experimental results show that the K-means-processed images take 20 s to 30 s less time in the convolutional neural network than the original images, which can effectively improve the efficiency of image processing.
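A typical form of this preprocessing is K-means color quantization, sketched below with scikit-learn; the image is a synthetic placeholder and the cluster count is an assumption, since the abstract does not give the exact pipeline.

    import numpy as np
    from sklearn.cluster import KMeans

    image = np.random.rand(64, 64, 3)        # placeholder RGB image
    pixels = image.reshape(-1, 3)            # one row per pixel
    kmeans = KMeans(n_clusters=8, n_init=10).fit(pixels)
    # Replace each pixel by its cluster center before CNN training.
    quantized = kmeans.cluster_centers_[kmeans.labels_].reshape(image.shape)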
ISBN (print): 9781509060580
Asynchronous iterations can be used to implement fixed-point methods such as Jacobi and Gauss-Seidel on parallel computers with high synchronization costs. However, they are rarely considered in practice due to their low convergence rate. This paper describes a GPU implementation of a novel Power Flow analysis model using asynchronous iterations. We present our model for the solution of the Power Flow analysis problem, prove its convergence, and evaluate its performance on a GPU.
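For reference, the underlying fixed-point update is the classic Jacobi iteration, sketched below in NumPy; in the asynchronous variant studied in the paper, each component update proceeds without a barrier and may read stale values. The example system is illustrative.

    import numpy as np

    # Synchronous Jacobi sketch for Ax = b; the paper's GPU model updates
    # components asynchronously instead of in lockstep.
    def jacobi(A, b, iters=100):
        x = np.zeros_like(b)
        D = np.diag(A)               # diagonal of A
        R = A - np.diagflat(D)       # off-diagonal remainder
        for _ in range(iters):
            x = (b - R @ x) / D      # fixed-point update
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = jacobi(A, b)                 # converges: A is diagonally dominant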
This work describes an efficient implementation for solving tridiagonal systems on Graphics Processing Units (GPUs). The Wang and Mou algorithm has a computation and communication pattern that matches the GPU's features very well. Thus, an implementation of this algorithm is presented here for solving large problem sizes, i.e., larger than a CUDA GPU's shared memory capacity, in a multi-GPU environment. Small and medium problem sizes can also take advantage of this implementation. Finally, this proposal has been tuned to obtain maximum performance, resulting in a compact implementation that outperforms the cuSPARSE library (by 6.33x).
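As background on the problem being solved, the sequential Thomas algorithm below is a standard baseline for tridiagonal systems; it is not the Wang and Mou scheme used in the paper, which is a parallel divide-and-conquer method.

    import numpy as np

    # Thomas algorithm: solve a tridiagonal system with sub-diagonal a
    # (a[0] unused), diagonal b, super-diagonal c and right-hand side d.
    def thomas(a, b, c, d):
        n = len(d)
        b = b.astype(float).copy()
        c = c.astype(float).copy()
        d = d.astype(float).copy()
        for i in range(1, n):                 # forward elimination
            w = a[i] / b[i - 1]
            b[i] -= w * c[i - 1]
            d[i] -= w * d[i - 1]
        x = np.empty(n)
        x[-1] = d[-1] / b[-1]
        for i in range(n - 2, -1, -1):        # back substitution
            x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
        return x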
Benchmarking is a way to study the performance of new architectures and parallel programming frameworks. Well-established benchmark suites such as the NAS Parallel Benchmarks (NPB) comprise legacy codes that still lack a port to the C++ language. As a consequence, a set of high-level and easy-to-use C++ parallel programming frameworks cannot be tested with NPB. Our goal is to describe a C++ port of the NPB kernels and to analyze the performance achieved by different parallel implementations written using the Intel TBB, OpenMP and FastFlow frameworks for multi-cores. The experiments show an efficient code port from Fortran to C++ and, on average, an efficient parallelization.
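The kernels in question are largely data-parallel loops; purely as a structural analogy to what the C++ ports express with TBB, OpenMP or FastFlow, the Python sketch below splits an iteration space across workers and reduces the partial results. It is not part of the NPB suite.

    from concurrent.futures import ProcessPoolExecutor
    import random

    # Map a chunk of the iteration space to each worker, then reduce.
    def partial_sum(chunk):
        rng = random.Random(chunk)           # independent stream per chunk
        return sum(rng.random() for _ in range(100_000))

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            total = sum(pool.map(partial_sum, range(8)))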
ISBN (print): 9781728103396
One method of developing the architecture of a software and hardware complex for the parallel execution of complex queries, including the Join operation, over databases containing large and extra-large amounts of information is considered. The method is based on the principle of symmetric horizontal distribution of data and on two modern approaches to organizing parallel data processing on heterogeneous computing facilities combined into a local or global network. A solution to the problem is achieved through the use of modern grid technology and container technology.
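The key property of such a symmetric horizontal distribution is that rows of both relations are partitioned on the join key, so matching rows co-locate and each node joins only its own partition. A minimal Python sketch under those assumptions follows; the worker count and row format are invented for illustration.

    from collections import defaultdict

    # Hash-partition a relation on the join key so that matching rows of
    # both relations land on the same worker.
    def partition(rows, key, n_workers):
        parts = [[] for _ in range(n_workers)]
        for row in rows:
            parts[hash(row[key]) % n_workers].append(row)
        return parts

    # Each worker then joins only its local partitions (hash join).
    def local_join(r_part, s_part, key):
        index = defaultdict(list)
        for r in r_part:
            index[r[key]].append(r)
        return [(r, s) for s in s_part for r in index[s[key]]]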
ISBN (print): 9781538658895
Language Modeling (LM) is a subtask in Natural Language Processing (NLP), and the goal of LM is to build a statistical language model that can learn and estimate a probability distribution of natural language over sentences of terms. Recently, many recurrent neural network-based LMs, a type of deep neural network for dealing with sequential data, have been proposed and have achieved remarkable results. However, they rely only on analysis of the words occurring in the sentences, even though every sentence contains useful morphological information, such as Part-of-Speech (POS) tags, which is necessary for constituting a sentence and can be used as a feature in analysis. Although morphological information can be useful for LM, using that information as input to a neural network-based LM is not straightforward, because adding features between words in a one-dimensional array can cause the vanishing gradient problem by increasing the number of time steps of the recurrent neural network. To solve this problem, in this paper we propose a CNN-LSTM based language model that treats textual data as multi-dimensional input to the network. To train this multi-dimensional input with a Long Short-Term Memory (LSTM) network, we use a convolutional neural network (CNN) with a 1x1 filter for dimensionality reduction of the input data, which avoids the vanishing gradient problem by decreasing the number of time steps between input words. In addition, our approach of using multi-dimensional data reduced by the CNN can be used as a plug-in with many customized LSTM-based LMs. On the Penn Treebank corpus, our model improves perplexity over both vanilla LSTM and customized LSTM models.
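A minimal Keras rendering of this architecture is sketched below: each timestep carries a word vector concatenated with morphological features (e.g. a POS one-hot), a kernel-size-1 convolution compresses that per-timestep vector, and an LSTM consumes the reduced sequence. All sizes are assumptions, not the paper's hyperparameters.

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv1D, LSTM, Dense

    seq_len, feat_dim, vocab = 35, 300 + 45, 10000   # word dims + POS dims (assumed)

    model = Sequential([
        # 1x1 convolution: per-timestep dimensionality reduction, so the
        # extra features do not lengthen the recurrent time axis.
        Conv1D(128, kernel_size=1, activation="relu",
               input_shape=(seq_len, feat_dim)),
        LSTM(256, return_sequences=True),
        Dense(vocab, activation="softmax"),          # next-word distribution
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")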