ISBN: (Print) 9781728106489
Deep learning is contributing high-quality services to the healthcare sector. As digital medical data grows exponentially over time, deep learning techniques make early detection and prediction of diseases more efficient, reducing fatality rates to a great extent. The main focus of this paper is to provide a comprehensive review of deep learning in the domain of medical image processing and analysis. We demonstrate the use of new deep learning architectures in oncology for the prediction of different types of cancer, such as brain, lung, and skin cancer. The state-of-the-art architectures effectively analyze 2D and 3D medical images to make patient diagnosis faster and more accurate. Popular machine learning approaches such as ensemble learning and transfer learning with fine-tuning of parameters improve the performance of deep neural networks in medical image analysis. The limitations of existing deep networks have motivated a new image classification network, the Capsule Network (CapsNet), which makes classification and detection comparatively better. The equivariance property of CapsNet makes it more powerful, as structural transformations of an input image are preserved rather than discarded by the network.
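A toy sketch (not CapsNet's routing algorithm, which is not reproduced here) of the equivariance-versus-invariance distinction the abstract refers to, using a 1-D feature map; the kernel and signals are illustrative:

```python
# Toy illustration: equivariance vs. invariance of a 1-D "feature
# detector" under translation of the input. All values are made up.

def detect(signal, kernel=(1, 2, 1)):
    """Valid cross-correlation: a translation-equivariant feature map."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def global_max_pool(features):
    """Global pooling: discards position, hence translation-invariant."""
    return max(features)

x = [0, 0, 5, 0, 0, 0]
shifted = [0, 0, 0, 5, 0, 0]   # same pattern, moved one step right

# Equivariance: the feature map of the shifted input is the shifted feature map.
assert detect(shifted)[1:] == detect(x)[:-1]

# Invariance: global pooling gives the same answer for both, losing the position.
assert global_max_pool(detect(x)) == global_max_pool(detect(shifted))
```

CapsNet's design favors the first behavior: pose information survives the layer instead of being pooled away.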
ISBN: (Print) 9789813296909; 9789813296893
Today's internet world generates petabytes to exabytes of digital data, attributable to the enormous volume of unstructured datasets produced by diverse social sites, IoT devices, Google, Twitter, Yahoo, sensor-based environmental monitoring, etc.; this is big data (BD). Dataset volumes double from one moment to the next, yet techniques for smooth dynamic processing, analysis, and scalability are in short supply. Over the recent decade, only existing methods and common tools designed for gigabyte-scale data have been applied to process and compute over this huge data. Apache Hadoop, a free open-source framework, is the latest BD weapon: it can process zettabyte-scale databases through its most developed and popular components, HDFS and MapReduce (MR), delivering excellent storage features and reliable processing on zettabytes of data. MR is a famous, popular framework for handling existing BD issues in a fully parallel, highly distributed, and highly scalable manner. Nevertheless, Hadoop's map and reduce tasks have limitations, such as poor custom resource allocation, weak stream processing, high latency, inefficient performance, imperfect optimization, limited real-time computation, and diverse logical interpretation. We survey the most modern computing procedures with progressive features. This survey paper shows that Apache Spark and Apache Storm, among the latest and fastest tools, offer efficient frameworks to overcome those limitations.
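The MapReduce model discussed above can be sketched framework-free in a few lines; `map_phase`, `shuffle`, and `reduce_phase` are illustrative names for the three stages, not Hadoop APIs:

```python
# A minimal, framework-free sketch of the MapReduce programming model
# (the model Hadoop MR implements at cluster scale).
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the user mapper to each record, emitting (key, value) pairs."""
    for record in records:
        yield from mapper(record)

def shuffle(pairs):
    """Group all values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the user reducer to each key's value list."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Classic word count.
mapper = lambda line: ((word, 1) for word in line.split())
reducer = lambda key, values: sum(values)

lines = ["big data big", "data tools"]
counts = reduce_phase(shuffle(map_phase(lines, mapper)), reducer)
assert counts == {"big": 2, "data": 2, "tools": 1}
```

Spark and Storm address the same stages but keep intermediate data in memory or in a continuous stream, which is where the latency advantages come from.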
ISBN: (Print) 9786176078159
The proceedings contain 51 papers. The topics discussed include: synthesis of hardware components for vertical-group parallel neural networks; method of developing the behavior models in form of states diagram for complex information systems; distance measurement using radio frequency identification technology; intelligent advisory systems and information technology support for decision making in tourism; morphological processing of videostream frames binary masks; algorithms for software clustering and modularization; cryptographic information protection using extended finite fields; image segmentation via X-means under overlapping classes; methods of protection document formed from latent element located by fractals; e-science: new paradigms, system integration and scientific research organization; flexible 2D membership functions for images filtering using fuzzy peer group approach; and combinatorial mathematical model and decision strategy for one-to-one pickup and delivery problem with 3d loading constraints.
Role-Based Access Control (RBAC) is a method to manage and control a host in a distributed manner by applying rules to the users on a host. This paper proposes a rule-based intrusion protection system based on RB...
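A minimal sketch of the RBAC idea described above, assuming a simple model where roles map to permission sets and users map to roles; all names, roles, and rules here are hypothetical:

```python
# Hypothetical minimal RBAC model: access is granted if any of the
# user's roles carries the requested permission.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "shutdown"},
    "viewer": {"read"},
}
USER_ROLES = {"alice": {"admin"}, "bob": {"viewer"}}

def is_allowed(user, action):
    """Check every role assigned to the user against the permission table."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert is_allowed("alice", "shutdown")
assert not is_allowed("bob", "write")
```

An intrusion protection layer on top of this would deny, and typically log, any request for which `is_allowed` returns `False`.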
The Discrete Wavelet Transform (DWT) is widely used in image and video processing; its high computational complexity and regular data flow make it well suited to implementation on a Coarse-Grained Reconfigurable Architecture (CGRA), owing to the architecture's rich parallel computing resources. In this article, the two wavelet filters adopted in the JPEG2000 image standard, the 5/3 DWT and the 9/7 DWT, were realized on a CGRA platform called Reconfigurable Multimedia System-II (REMUS-II). The results show that the CGRA-based implementation has advantages in area, power, and performance over state-of-the-art GPUs, including the 7800GTX and 9800GTX. The die size and power consumption of REMUS-II are less than 1% and 10%, respectively, of those of the GPU implementations, while the performance speed-up is 92.9x for the 9/7 filter compared to the GPU 7800GTX and 6.54x for the 5/3 filter compared to the GPU 9800GTX.
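For reference, the reversible 5/3 lifting DWT used in JPEG2000 can be sketched in a few lines; this serial version assumes an even-length integer signal with symmetric boundary extension, and says nothing about the CGRA mapping itself:

```python
def dwt53_1d(x):
    """One decomposition level of the reversible JPEG2000 5/3 lifting DWT.
    Assumes an even-length integer signal; boundaries use whole-sample
    symmetric extension. Returns (approximation, detail) coefficients."""
    n = len(x)
    sym = lambda i: x[-i] if i < 0 else (x[2 * (n - 1) - i] if i >= n else x[i])
    # Predict step: detail (high-pass) coefficients from odd samples.
    d = [x[2 * i + 1] - ((sym(2 * i) + sym(2 * i + 2)) >> 1)
         for i in range(n // 2)]
    # Update step: approximation (low-pass) coefficients from even samples
    # (d[-1] mirrors to d[0] under the symmetric extension).
    s = [x[2 * i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
         for i in range(n // 2)]
    return s, d

# A constant signal is pure low-pass: details vanish.
assert dwt53_1d([5] * 8) == ([5] * 4, [0] * 4)
```

The regular, data-parallel structure of the two lifting steps is what makes the transform a natural fit for the parallel processing elements of a CGRA.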
The paper presents solutions for an element of a cluster that will be used primarily as a scalable storage product for a collection of mainframes. Existing storage solutions either employ server-attached disks, where the problem is the number of I/O slots, or are specially designed products, where the problem is the lack of standards. The essence of both proposed solutions is the introduction of the file-usage-locality concept, together with methods and techniques to exploit it. The first solution is based on the standard SMP architecture, while the second employs the DSM architecture as the basic building block of a cluster product.
Radiography is nowadays a common medical exam used for diagnosing several diseases, but it has the disadvantage of exposing patients to a dose of radiation. For this reason, it is important to study methods for reducing that dose. In this paper we present a digital X-ray simulation tool that simulates a radiological exam on a virtual patient. The software builds a physically realistic radiograph in real time thanks to GPGPU programming and CUDA technology. It is intended for use in radiological departments, for testing new dose-reduction procedures and for training health operators. We validated the software by comparing its results with real radiographic images, and we tested it on different graphics cards, obtaining running times 35 to 250 times faster than the corresponding CPU implementation.
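A hedged sketch of the core physics such a simulator evaluates per detector pixel, Beer-Lambert attenuation along a ray through the virtual patient; the attenuation coefficients in the example are illustrative, not clinical values:

```python
# Beer-Lambert law along one ray: I = I0 * exp(-sum(mu_k * d_k)),
# where each (mu_k, d_k) is a material's attenuation coefficient and
# the path length the ray travels through it. Values are illustrative.
import math

def transmitted_intensity(i0, segments):
    """segments: iterable of (mu, thickness) pairs crossed by one ray."""
    return i0 * math.exp(-sum(mu * d for mu, d in segments))

# A ray crossing 3 cm of soft tissue (mu ~ 0.2/cm) and 1 cm of bone (mu ~ 0.5/cm).
i = transmitted_intensity(1000.0, [(0.2, 3.0), (0.5, 1.0)])
assert abs(i - 1000.0 * math.exp(-1.1)) < 1e-9
```

A GPU implementation evaluates this independently for millions of rays, one per detector pixel, which is why CUDA delivers the reported speed-ups over a CPU.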
Finding pairwise document relatedness plays an important role in a variety of natural language processing problems. The Google Trigram Method (GTM) is a corpus-based unsupervised method that can be used to capture word relatedness and document relatedness. It has been shown that GTM can be applied to construct high-quality document relatedness applications. However, given its high computational complexity, implementing GTM for pairwise document relatedness computation on a large document set is challenging. This paper presents time- and space-efficient methods for the computation of pairwise document relatedness using GTM. To improve performance, algorithmic engineering, data structure enhancements, and parallel computing methods are applied. Two parallel methods are discussed in this paper: a shared-memory multicore implementation and a distributed-memory Hadoop implementation. Both parallel methods provide an order-of-magnitude improvement in accelerating the pairwise document relatedness computation using GTM.
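The shared-memory parallel pattern can be sketched as follows; cosine similarity over word counts stands in for GTM (whose trigram statistics are not reproduced here), and a thread pool stands in for the paper's multicore implementation:

```python
# Parallel pairwise document relatedness: enumerate all document pairs,
# then fan the per-pair scoring out to a worker pool. The similarity
# function is a cosine-over-word-counts placeholder, NOT the GTM measure.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations
from math import sqrt

def relatedness(doc_a, doc_b):
    a, b = Counter(doc_a.split()), Counter(doc_b.split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def pairwise_relatedness(docs, workers=4):
    pairs = list(combinations(range(len(docs)), 2))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = pool.map(lambda ij: relatedness(docs[ij[0]], docs[ij[1]]), pairs)
    return dict(zip(pairs, scores))

docs = ["big data tools", "big data methods", "wavelet image coding"]
scores = pairwise_relatedness(docs)
assert scores[(0, 1)] > scores[(0, 2)]   # overlapping documents score higher
```

The Hadoop variant distributes the same pair enumeration across map tasks; in both cases the n-squared pair count is what makes the engineering effort worthwhile.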
ISBN: (Print) 9783031396977; 9783031396984
A significant part of computational fluid dynamics (CFD) simulations is solving the large sparse systems of linear equations that result from implicit time integration of the Reynolds-averaged Navier-Stokes (RANS) equations. The sparse linear system solver Spliss aims to provide a linear solver library that is, on the one hand, tailored to these requirements of CFD applications but, on the other hand, independent of the particular CFD solver. Spliss allows leveraging a range of available HPC technologies, such as hybrid CPU parallelization and the possibility of offloading the computationally intensive linear solver to GPU accelerators, while at the same time hiding this complexity from the CFD solver. This work highlights the steps taken to establish multi-GPU capabilities for the Spliss solver, allowing efficient and scalable usage of large GPU systems. In addition, it evaluates performance and scalability on CPU and GPU systems using a representative CODA test case as an example. CODA is the CFD software being developed as part of a collaboration between the French Aerospace Lab ONERA, the German Aerospace Center (DLR), Airbus, and their European research partners, and is jointly owned by ONERA, DLR, and Airbus. The evaluation examines and compares performance and scalability in a strong-scaling approach on Nvidia A100 GPUs and the AMD Rome architecture.
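As an illustration of the kind of kernel such a library parallelizes across CPU cores or offloads to GPUs (this is not Spliss's API, which is not shown in the source): a Jacobi iteration on a sparse, diagonally dominant system stored row-wise:

```python
# Jacobi iteration on a sparse linear system A x = b. Each row is a
# dict {column: value}; the update for every unknown is independent,
# which is what makes this family of solvers easy to parallelize.
def jacobi(rows, b, x0, iters=100):
    x = list(x0)
    for _ in range(iters):
        x = [(b[i] - sum(v * x[j] for j, v in rows[i].items() if j != i))
             / rows[i][i]
             for i in range(len(b))]
    return x

# Diagonally dominant 3x3 system with exact solution x = [1, 2, 3].
rows = [{0: 4.0, 1: 1.0}, {0: 1.0, 1: 5.0, 2: 1.0}, {1: 1.0, 2: 4.0}]
b = [6.0, 14.0, 14.0]
x = jacobi(rows, b, [0.0, 0.0, 0.0])
assert all(abs(xi - yi) < 1e-6 for xi, yi in zip(x, [1.0, 2.0, 3.0]))
```

Production CFD solvers use far stronger methods (Krylov iterations with preconditioning), but the sparse matrix-vector product at their core has the same independent-per-row structure.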
A graph is a mathematical abstraction commonly used to represent relationships among a finite set of entities, such as hypertext documents or users in a social network. With the recent explosion of online content, the size and number of available graphs have increased as well, prompting research into efficient and scalable methods to process them in a timely fashion. This paper focuses on the calculation of the diameter of a graph, a well-known and relevant metric whose calculation poses a remarkable computational challenge for large graphs. We selected three algorithms based on two popular computing models: MapReduce and Bulk Synchronous Parallel (BSP). Two of the algorithms are based on MapReduce and calculate the exact and an approximated value of the graph diameter, respectively. The third algorithm is based on BSP and produces the exact value of the diameter. Our tests show that the approximated MapReduce solution offers the best combination of execution time and scalability, although it is outperformed in some cases by the exact BSP solution.
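The exact-diameter computation behind these algorithms reduces, in serial form, to running a BFS from every vertex and taking the largest eccentricity; this sketch assumes a connected, unweighted graph:

```python
# Exact graph diameter: max over all vertices of the BFS eccentricity.
# Assumes a connected, unweighted graph given as an adjacency dict.
from collections import deque

def bfs_eccentricity(adj, src):
    """Longest shortest-path distance from src to any reachable vertex."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

def diameter(adj):
    return max(bfs_eccentricity(adj, v) for v in adj)

# A path graph 0-1-2-3 has diameter 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert diameter(path) == 3
```

The O(V·E) cost of this all-sources sweep is exactly the "remarkable computational challenge" the abstract refers to, and why the paper's approximated MapReduce variant trades accuracy for running time on large graphs.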