ISBN (Print): 9781479984909
Approximate pattern discovery is one of the fundamental and challenging problems in computer science. Fast, high-performance algorithms are in strong demand in many applications in bioinformatics and computational molecular biology, the domains that benefit most directly from any advance in pattern matching theory and solutions. This paper proposes an efficient GPU implementation of a fuzzified Aho-Corasick algorithm that uses the Levenshtein method and an N-gram technique as a solution to the approximate pattern matching problem.
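The Levenshtein-based matching idea underlying this abstract can be sketched on the CPU with Sellers' classic dynamic-programming scan; this is a minimal illustration of approximate matching under an edit-distance threshold, not the paper's GPU Aho-Corasick/N-gram implementation:

```python
def approx_find(pattern, text, k):
    """Return end positions in `text` where `pattern` occurs
    with at most `k` edits (insert/delete/substitute).
    Sellers' DP: a match may start at any text position."""
    m = len(pattern)
    prev = list(range(m + 1))          # column for empty text prefix
    ends = []
    for j, ch in enumerate(text):
        cur = [0]                      # cost 0: match may start here
        for i in range(1, m + 1):
            cost = 0 if pattern[i - 1] == ch else 1
            cur.append(min(prev[i] + 1,          # delete pattern char
                           cur[i - 1] + 1,       # insert text char
                           prev[i - 1] + cost))  # match / substitute
        if cur[m] <= k:
            ends.append(j)             # a match ends at position j
        prev = cur
    return ends
```

For example, `approx_find("abc", "xadcx", 1)` reports a match ending at index 3 because "adc" is one substitution away from "abc".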
Sketch-based methods are widely used in high-speed network monitoring applications. In this paper, we present a parallel implementation of sketch computations using Open Computing Language (OpenCL) on Graphics Processi...
Many applications need transaction-consistent pictures of past states of the database. These applications use the commit time of the transaction to timestamp the data. When a transaction is distributed, the cohorts must vote on a commit time and the coordinator must choose a commit time based on the votes of the cohorts. This implies that timestamps are applied after commit. Until the timestamps are on all the records, one must keep a table of all the committed transaction identifiers and their commit times. The main problem solved is that of determining when all timestamps corresponding to a given transaction have been placed in the records so that a committed transaction entry can be erased from the table. This information must be stable. Logging and recovery details are included.
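The bookkeeping described here, erasing a committed transaction's table entry once every record carries its timestamp, can be sketched with a simple reference count per transaction; the class and method names below are illustrative, not the paper's protocol (which also covers voting, logging, and recovery):

```python
class CommitTimeTable:
    """Sketch: map committed transaction ids to their commit times,
    tracking how many record timestamps are still pending.  An entry
    is erased only after all its records have been stamped."""

    def __init__(self):
        self._entries = {}  # txn_id -> [commit_time, unstamped_count]

    def commit(self, txn_id, commit_time, record_count):
        # Called after the coordinator chooses the commit time.
        self._entries[txn_id] = [commit_time, record_count]

    def commit_time(self, txn_id):
        return self._entries[txn_id][0]

    def record_stamped(self, txn_id):
        # One more of the transaction's records now carries its timestamp.
        entry = self._entries[txn_id]
        entry[1] -= 1
        if entry[1] == 0:           # all timestamps placed: safe to erase
            del self._entries[txn_id]

    def __contains__(self, txn_id):
        return txn_id in self._entries
```

In the paper's setting this table must also survive crashes, so the commit and erasure steps would be logged; the sketch omits that.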
Parallel and distributed computing has become a necessity when it comes to high-performance computing, artificial intelligence, big data analytics, machine learning, deep learning, signal processing and bioengineering...
Distributed and parallel algorithms have attracted a vast amount of interest and research in recent decades, to handle large-scale data sets in real-world applications. In this paper, we focus on a parallel implementat...
The HLA-RTI provides a general-purpose network communications mechanism for distributed Virtual Environments (DVE) but does not provide a stream transmission mechanism directly. This paper introduces a solution for st...
DIME (distributed Irregular Mesh Environment) is a user environment written in C for manipulation of an unstructured triangular mesh in two dimensions. The mesh is distributed among the separate memories of the proces...
ISBN (Print): 9780769529097
Image denoising, a basic kind of preprocessing, can improve the visual quality of original images. Rough set theory is a relatively new mathematical tool for dealing with vagueness and uncertainty, and is regarded as a soft computing method. Like fuzzy methods, genetic algorithms, and neural networks, it is an intelligent information processing method. Applying rough sets to gray-scale images, this paper proposes a new rough-set-based median denoising algorithm (RSMD). Comparative experiments are conducted with classical median denoising (CAM) and the algorithm presented in this paper. Experimental results show that the new algorithm outperforms classical median denoising.
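For reference, the classical median denoising baseline that the paper compares against can be sketched as a plain 3x3 median filter; this is the standard technique, not the rough-set variant (RSMD) proposed in the paper:

```python
from statistics import median

def median_filter(img):
    """Classical 3x3 median filter on a 2D list of pixel values:
    each pixel is replaced by the median of its 3x3 neighbourhood.
    Borders are handled by clamping indices to the image edge."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [img[min(max(y + dy, 0), h - 1)]
                        [min(max(x + dx, 0), w - 1)]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(neigh)   # median suppresses impulse noise
    return out
```

An isolated impulse (e.g. a single 255 pixel in a dark region) is replaced by the neighbourhood median and thus removed, which is why the median filter is the usual baseline for salt-and-pepper noise.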
ISBN (Print): 9781728172835
A large amount of stream data is generated by devices such as sensors and cameras. These stream data should be processed in a timely manner for real-time applications to satisfy data latency requirements. To process a large amount of data in a short time, stream processing on edge/fog computing is a promising approach. In a stream processing system, a snapshot of the processes and replications of the stream data are stored on another server, and when a server fault or a load spike occurs, processing continues using the stored snapshots and replicated data. In an edge computing environment, which has limited network bandwidth, process recovery therefore takes a long time because the restored data must be transferred. In this paper, we propose a stream processing system architecture that decides which servers store snapshots and replicated data and redeploys processes by considering the load of each server and the network bandwidth. We also propose a semi-optimal algorithm that reduces the computational cost by appropriately sorting servers and tasks according to network bandwidth and server load. The algorithm can find a solution over 1000 times faster than the COIN-OR Branch and Cut (CBC) solver.
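The sorting idea in the abstract, ordering tasks and servers before placement to avoid an exhaustive search, can be sketched as a greedy first-fit assignment; all field names and the capacity model below are illustrative assumptions, not the paper's actual algorithm or cost model:

```python
def place_snapshots(tasks, servers):
    """Hypothetical greedy sketch: sort tasks by descending bandwidth
    demand and servers by descending free bandwidth, then place each
    task on the first server with enough remaining bandwidth and CPU.
    Server capacities are decremented in place as tasks are placed."""
    placement = {}
    tasks = sorted(tasks, key=lambda t: t["bw"], reverse=True)
    servers = sorted(servers, key=lambda s: s["bw"], reverse=True)
    for task in tasks:
        for srv in servers:
            if srv["bw"] >= task["bw"] and srv["cpu"] >= task["cpu"]:
                srv["bw"] -= task["bw"]      # reserve network bandwidth
                srv["cpu"] -= task["cpu"]    # reserve compute capacity
                placement[task["id"]] = srv["id"]
                break                        # first fit: next task
    return placement
```

A greedy pass like this runs in O(n log n + n·m) time, which is the kind of speedup over an exact integer-programming solver (such as CBC) that the abstract reports, at the cost of only semi-optimal placements.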