ISBN (print): 9781509051465
Breast cancer is the most lethal type of cancer affecting women in the world. Early detection of this malignancy is key to improving women's quality of life, because it is one of the cancers that can be managed and treated effectively when detected early. Mammography is the standard screening method for diagnosing breast cancer. Nowadays, a great deal of research aims to improve this screening method by exploiting the enormous growth of computing technology. The Graphics Processing Unit (GPU) is a parallel processor that can divide complex computation tasks into subtasks and run them concurrently. Many medical imaging modalities offload their processing to the GPU to speed up healthcare systems so that illnesses can be diagnosed in real time. This research introduces a GPU-based acceleration method for the segmentation of mammography images. To provide better detection of cancerous tumors, we use a modified version of the most common image segmentation algorithm, the Single-Pass Fuzzy C-Means (FCM) algorithm. The approach is applied to a set of mammogram images to distinguish between malignant and benign cases. Additionally, the system is implemented on a GPU as well as on a traditional CPU so that the performance of both implementations can be compared, using execution time and speedup as metrics. The proposed GPU implementation provides a considerable speedup over its serial CPU implementation.
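As a rough illustration of why FCM parallelizes well on a GPU, here is a minimal NumPy sketch of one iteration of the standard (unmodified) FCM update; the authors' modified single-pass variant is not reproduced, and the function name and array shapes are assumptions for illustration only.

```python
import numpy as np

def fcm_step(pixels, centers, m=2.0):
    """One standard FCM update: memberships, then new cluster centers.

    pixels:  (N,) flattened gray-level image
    centers: (C,) current cluster centers
    """
    # Distance of every pixel to every center: shape (N, C).
    d = np.abs(pixels[:, None] - centers[None, :]) + 1e-9
    # Fuzzy memberships u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
    # Weighted center update; each pixel contributes independently,
    # which is what makes the algorithm map well onto GPU threads.
    um = u ** m
    new_centers = (um * pixels[:, None]).sum(axis=0) / um.sum(axis=0)
    return u, new_centers

img = np.random.rand(64 * 64)                 # toy "image"
u, c = fcm_step(img, np.array([0.2, 0.8]))    # one iteration, two clusters
```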
ISBN (print): 9781509006540
Desktop virtualization environments can offer a solution for high-definition video processing using multi-GPU collaboration and parallel computing. Multi-GPU parallel encoding is often implemented in a multi-threaded mode. Based on an analysis of the GPU's multi-level memory hierarchy and of data transmission between CPU and GPU, pinned memory (zero-copy memory) and shared memory are used to shorten the transmission time of encoded video data between devices. In this paper, the latest GPUs supporting low-latency remote display and the H.264 compression standard are used in the desktop virtualization environment. Experimental results show that CPU usage and transmission bandwidth are significantly decreased, and that the performance of the virtual environment is improved.
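The key optimization named above is pinned (zero-copy) host memory for frame transfers. A minimal sketch of that idea using Numba's CUDA bindings (an assumption; the paper presumably used native CUDA with an H.264 encoder) might look like the following, and requires a CUDA-capable GPU to run:

```python
import numpy as np
from numba import cuda

# Frame-sized host buffer in page-locked (pinned) memory: the GPU's DMA
# engine can read it directly, so host-to-device copies can be asynchronous.
frame = cuda.pinned_array((1080, 1920), dtype=np.uint8)
frame[:] = 0  # a real pipeline would fill this with raw video data

stream = cuda.stream()
d_frame = cuda.to_device(frame, stream=stream)  # async copy on the stream
# ... launch encoding kernels on `stream` here ...
stream.synchronize()
```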
ISBN (print): 9783319321523; 9783319321516
Many big data applications are usually categorized as irregular. Irregular problems feature unpredictable and unstructured program flow and data access patterns, and typically use pointer-based data structures such as graphs. These problems are data-, compute- and communication-intensive in nature, so the algorithms are designed and implemented on high-performance architectures. The first stage of parallel algorithm design is data partitioning, in which the data is subdivided into equally sized disjoint elements such that the communication volume among the processors is minimized. If the data is represented as a graph, this can be stated as the graph partitioning problem, which is NP-hard. In this work, we consider the ant-brooding metaheuristic, based on larval sorting by ants, to solve the graph partitioning problem. The parallel ant-brooding algorithm is implemented on a cluster using MIT's Julia language. We test the parallel algorithm on different benchmark and synthetic graphs, and compare our parallel Julia implementation with sequential Julia and sequential C implementations. We find that the performance of Julia is comparable to C with good scalability, and that the parallel Julia implementation achieves a speedup greater than 1 for a synthetic graph with 200 vertices and 1000 edges.
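For context, the quantity a graph partitioner minimizes is the edge cut, the number of edges crossing between parts. A minimal sketch of that objective (in Python rather than Julia, for illustration; this is not the ant-brooding metaheuristic itself):

```python
def edge_cut(edges, part):
    """Number of edges whose endpoints fall in different parts.

    edges: iterable of (u, v) pairs
    part:  dict mapping vertex -> part id (e.g. 0 or 1)
    """
    return sum(1 for u, v in edges if part[u] != part[v])

# Toy 2-way partition of a 4-cycle: 0-1-2-3-0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
part = {0: 0, 1: 0, 2: 1, 3: 1}
print(edge_cut(edges, part))  # 2: edges (1,2) and (3,0) are cut
```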
In this study, we have designed a GPGPU (General-Purpose Graphics Processing Unit)-based algorithm for determining the minimum distance from the tip of a CUSA (Cavitron Ultrasonic Surgical Aspirator) scalpel to the cl...
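Although the abstract is truncated, the computation it describes is a classic data-parallel pattern: independent per-point distances followed by a min-reduction. A hedged NumPy sketch of that pattern (names and shapes are illustrative assumptions, not the study's implementation):

```python
import numpy as np

def min_distance(tip, points):
    """Minimum Euclidean distance from `tip` (3,) to a point cloud (N, 3).

    Each point's distance is independent, so on a GPU one thread can
    handle one point, followed by a parallel min-reduction.
    """
    d2 = np.sum((points - tip) ** 2, axis=1)  # squared distances, shape (N,)
    return np.sqrt(d2.min())

tip = np.array([0.0, 0.0, 0.0])
cloud = np.random.rand(100_000, 3)
print(min_distance(tip, cloud))
```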
Field-programmable gate arrays (FPGAs) are fundamentally different from fixed processor architectures because their memory hierarchies can be tailored to the needs of an algorithm. FPGA compilers for high-level languag...
ISBN (print): 9781509006540
The algorithm "clustering by fast search and find of density peaks" shows good efficiency and accuracy, but its space complexity is too high because it must keep a global distance matrix in memory, so it can hardly be used to cluster big datasets. To solve this problem, this paper designs a new strategy for computing the algorithm's key quantity, delta; with this strategy, the space complexity of the algorithm is greatly reduced. Building on that reduction, a corresponding load-balanced parallel clustering algorithm is presented. Experimental results show that the parallel algorithm is efficient and scalable.
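For reference, the baseline the paper improves on computes the densities rho and the distances delta from a full pairwise distance matrix, which is exactly the O(n^2) memory cost the new delta-search strategy avoids. A minimal NumPy sketch of that baseline, following Rodriguez and Laio's original definitions (the paper's reduced-memory strategy is not reproduced here):

```python
import numpy as np

def density_peaks_quantities(X, dc):
    """Baseline rho/delta for 'clustering by fast search and find of
    density peaks' (Rodriguez & Laio, 2014), using the full O(n^2)
    distance matrix -- exactly the memory cost the paper removes.
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # (n, n)
    rho = (d < dc).sum(axis=1) - 1          # neighbors within cutoff dc
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = rho > rho[i]               # points with larger density
        delta[i] = d[i, higher].min() if higher.any() else d[i].max()
    return rho, delta

X = np.random.rand(200, 2)
rho, delta = density_peaks_quantities(X, dc=0.1)
```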
ISBN (print): 9781509006540
Associative data mining is a research hotspot in the field of big data, and frequent itemset mining is an important step in the analysis of associative data. This paper focuses on analyzing frequent itemset mining based on the parallel Apriori algorithm. We identify two shortcomings of the parallel Apriori algorithm: it generates too many key-value pairs, and its combiner stage occupies too much memory. Therefore, we propose an optimized algorithm in which candidate itemsets and local count information are kept in memory, greatly reducing the number of generated keys. Meanwhile, for mining short frequent itemsets, a method that reduces the number of scans over the transaction data without generating candidate itemsets further improves efficiency. We run experiments on the Hadoop platform to verify the performance of the proposed optimized algorithm. The experiments demonstrate that the time and I/O of the optimized algorithm are greatly improved compared with the non-optimized algorithm.
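The optimization described, keeping candidate itemsets and local counts in memory so that only aggregated (itemset, count) pairs are emitted, can be sketched in plain Python as below. This mirrors the combiner idea but is not the paper's Hadoop implementation; all names are illustrative:

```python
from collections import Counter
from itertools import combinations

def local_counts(block, k, candidates):
    """Count candidate k-itemsets within one data block in memory, so a
    mapper emits one (itemset, count) pair instead of a pair per occurrence.

    block:      list of transactions (iterables of items)
    candidates: set of frozenset k-itemsets surviving the previous pass
    """
    counts = Counter()
    for txn in block:
        for combo in combinations(sorted(txn), k):
            itemset = frozenset(combo)
            if itemset in candidates:
                counts[itemset] += 1
    return counts  # aggregated locally, like a combiner

block = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}]
cands = {frozenset(p) for p in combinations("abc", 2)}
print(local_counts(block, 2, cands))
```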
ISBN (digital): 9783319483351
ISBN (print): 9783319483351; 9783319483344
As a routine tool for screening and examination, CT plays an important role in disease detection and diagnosis. Real-time table removal in CT images has become a fundamental task for improving readability, interpretation and treatment planning. It also simplifies data management and benefits information sharing and communication in picture archiving and communication systems. In this paper, we propose an automated framework that uses parallel programming to address this problem. Eight full-body CT images were collected and analyzed. Experimental results show that, with parallel programming, the proposed framework accelerates the patient-table removal task by up to three times when running on a personal computer with a four-core central processing unit. Moreover, the segmentation accuracy reaches a Dice coefficient of 99%. The idea behind this approach can be applied to many algorithms for real-time medical image processing without extra hardware spending.
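The paper's segmentation pipeline is not detailed in this abstract, so the sketch below shows only the parallelization pattern: CT slices are independent, so they can be distributed across worker processes. The per-slice threshold step is a placeholder assumption standing in for the real table-removal logic:

```python
import numpy as np
from multiprocessing import Pool

def remove_table_slice(ct_slice, threshold=-300):
    """Placeholder per-slice step: keep voxels above a HU threshold.

    A real pipeline would segment and mask the table; the point here is
    that slices are independent, so they parallelize trivially.
    """
    return np.where(ct_slice > threshold, ct_slice, -1024)

if __name__ == "__main__":
    volume = np.random.randint(-1024, 1500, size=(64, 256, 256)).astype(np.int16)
    with Pool(4) as pool:  # four workers, matching a four-core CPU
        cleaned = np.stack(pool.map(remove_table_slice, list(volume)))
    print(cleaned.shape)
```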
ISBN (print): 9781509042708
Future Advanced Driver Assistance Systems (ADAS) require the continuous computation of detailed maps of the vehicle's environment. Due to the high accuracy demanded and the enormous amount of data to be fused and processed, common architectures used today, such as single-core processors in automotive Electronic Control Units (ECUs), do not provide enough computing power. Here, emerging embedded multi-core architectures such as embedded Graphics Processing Units (GPUs) are appealing. In this paper, we (a) identify and analyze common subalgorithms of ADAS algorithms for computing environment maps, such as interval maps, for their suitability to be parallelized and run on embedded GPUs. From this analysis, (b) performance models are derived for the achievable speedups with respect to sequential single-core CPU implementations. (c) As a third contribution, these performance models are validated by presenting and comparing a novel parallelized interval map GPU implementation against a parallel occupancy grid map implementation. For both types of environment maps, implementations on an Nvidia Tegra K1 prototype are compared to verify the correctness of the introduced performance models. Finally, the achievable speedups with respect to a single-core CPU solution are reported; these range from 3x to 275x for interval and grid map computations.
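As an illustration of why grid-map computations achieve such large GPU speedups, each occupancy cell updates independently of every other cell. A minimal NumPy sketch of a log-odds cell update (illustrative constants and shapes; not the paper's Tegra K1 implementation):

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (illustrative values)

def update_grid(log_odds, hit_mask, free_mask):
    """Log-odds occupancy update; every cell is independent, so on a GPU
    one thread per cell suffices -- the source of the large speedups.
    """
    log_odds = log_odds + hit_mask * L_OCC + free_mask * L_FREE
    return np.clip(log_odds, -10.0, 10.0)  # clamp to avoid saturation

grid = np.zeros((512, 512))
hits = np.zeros_like(grid)
free = np.zeros_like(grid)
hits[200, 300] = 1       # cell hit by a measurement
free[200, :300] = 1      # cells traversed by the ray
grid = update_grid(grid, hits, free)
```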
ISBN (print): 9783319340982
The proceedings contain 57 papers. The special focus of this conference is on Artificial Intelligence, Data Mining, Knowledge Discovery, Algorithms for Efficient Data Processing, and Data Warehousing. The topics include: interactive visualization of big data; reduction of readmissions to hospitals based on actionable knowledge discovery and personalization; performing and visualizing temporal analysis of large text data issued for open sources; influence of outliers introduction on predictive models quality; methods for selecting nodes for maximal spread of influence in recommendation services; memetic neuro-fuzzy system with differential optimisation; new rough-neuro-fuzzy approach for regression task in incomplete data; improvement of precision of neuro-fuzzy system by increase of activation of rules; rough sets in multicriteria classification of national heritage monuments; inference rules for fuzzy functional dependencies in possibilistic databases; the evaluation of map-reduce join algorithms; the design of the efficient theta-join in a map-reduce environment; non-recursive approach for sort-merge join operation; estimating costs of materialization methods for SQL; performance aspects of in-memory databases accessed via JDBC; comparison of the behaviour of local databases and databases located in the cloud; scalable distributed two-layer datastore providing data anonymity; coordination of parallel tasks in access to resource groups by adaptive conflictless scheduling; conflictless task scheduling using association rules; distributed computing in monotone topological spaces; a new similarity measure for spatio-temporal OLAP queries; enhancing concept extraction from Polish texts with rule management; and a diversified classification committee for recognition of innovative internet domains.