The phonocardiogram (PCG) plays a significant role in the early diagnosis of cardiac abnormalities. The phonocardiogram can be used as an initial diagnostic tool in remote applications due to its simplicity and cost effe...
Details
ISBN:
(Print) 9783030104474
The proceedings contain 19 papers. The special focus in this conference is on Supercomputing. The topics include: high-performance open-source Lagrangian oil spill model; fast random cactus graph generation; theoretical calculation of photoluminescence spectrum using DFT for double-wall carbon nanotubes; computational study of aqueous solvation of vanadium (V) complexes; 3D image reconstruction system for cancerous tumor analysis based on diffuse optical tomography with Blender; sea-surface temperature spatiotemporal analysis for the Gulf of California, 1998–2015: regime change simulation; use of high-performance computing to simulate cosmic-ray showers initiated by high-energy gamma rays; data augmentation for deep learning of non-mydriatic screening retinal fundus images; decision support system for urban flood management in the Cazones River basin; automatic code generator for a customized high-performance microprocessor simulator; use of containers for high-performance computing; generic methodology for the design of parallel algorithms based on pattern languages; a parallelized iterative closest point algorithm for 3D view fusion; evaluation of OrangeFS as a tool to achieve high-performance storage and massively parallel processing in an HPC cluster; many-core parallel algorithm to correct the Gaussian noise of an image; Amdahl's law extension for parallel program performance analysis on Intel Turbo Boost multicore processors; traffic sign distance estimation based on stereo vision and GPUs.
Details
ISBN:
(Print) 9781728173986
With the growing interest in big data, speed-up techniques for clustering are required. Density-based spatial clustering of applications with noise (DBSCAN) is well known in the database domain. Since the DBSCAN algorithm was first proposed, several speed-up methods have been introduced. In previous work, cell-based DBSCAN, a fast DBSCAN algorithm that divides the whole dataset into smaller cells and connects them to form clusters, was proposed. In this study, we propose a novel clustering algorithm: an anytime algorithm for cell-based DBSCAN. The proposed algorithm connects a number of randomly selected cells and computes a clustering result at high speed. It then repeats this process, improving the accuracy of the clustering and thereby yielding precise results. Experimental results demonstrate that the proposed algorithm computes clustering results with high accuracy at high speed.
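As a rough illustration of the cell-based idea (a 2-D sketch only, not the authors' implementation), the snippet below bins points into square cells of side eps/√2, so that any two points sharing a cell are within eps of each other, then unions neighboring cells that contain mutually close points; an anytime variant would visit the cell pairs in random order and report intermediate results. Function names and parameters are illustrative.

```python
import math
from collections import defaultdict

def cell_based_clusters(points, eps, min_pts):
    """Toy cell-based DBSCAN sketch for 2-D points: bin points into
    square cells of side eps/sqrt(2), then union neighboring non-empty
    cells whose points come within eps of each other."""
    side = eps / math.sqrt(2)
    cells = defaultdict(list)
    for p in points:
        cells[(int(p[0] // side), int(p[1] // side))].append(p)

    # Union-find over the cells (with path halving).
    parent = {c: c for c in cells}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    def union(a, b):
        parent[find(a)] = find(b)

    for (cx, cy) in list(cells):          # the anytime variant would
        for dx in (-1, 0, 1):             # visit these in random order
            for dy in (-1, 0, 1):
                nb = (cx + dx, cy + dy)
                if nb in cells and nb != (cx, cy):
                    # Connect if any cross-cell pair is within eps.
                    if any(math.dist(p, q) <= eps
                           for p in cells[(cx, cy)] for q in cells[nb]):
                        union((cx, cy), nb)

    clusters = defaultdict(list)
    for c, pts in cells.items():
        clusters[find(c)].extend(pts)
    return [pts for pts in clusters.values() if len(pts) >= min_pts]
```

Because every cell is at most eps across, points inside one cell never need pairwise checks; only cross-cell pairs are tested.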
The main objective of the proposed work is to develop a new data comparator that gives an economical solution for sorting / rank-ordering networks in terms of speed, power, and area. The proposed work comprises a...
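As background on what a comparator-based rank-ordering network computes (a software model only, not the proposed hardware comparator), the classic five-comparator network for four inputs can be sketched as:

```python
def compare_exchange(vals, i, j):
    """Model of a hardware comparator: route the smaller value to line i."""
    if vals[i] > vals[j]:
        vals[i], vals[j] = vals[j], vals[i]

# Wire pattern of the standard optimal 4-input sorting network:
# each pair is one comparator acting on fixed lines.
NETWORK_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def sorting_network_4(seq):
    vals = list(seq)
    for i, j in NETWORK_4:
        compare_exchange(vals, i, j)
    return vals
```

Because the comparator sequence is fixed and data-independent, the stages map directly onto parallel hardware, which is why comparator cost drives the speed/power/area trade-off.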
Details
Details
ISBN:
(Print) 9781728111902
With the continuous development of the Internet, there are many web services with the same functional attributes but different non-functional attributes. It is urgent to find, quickly and efficiently, a web service that satisfies the user's needs from the massive body of web service data. This paper improves the traditional Skyline algorithm by dividing the web service data set into regions, which greatly reduces the number of dominated points that must be examined and saves memory. The improved Skyline algorithm can significantly increase the speed of web service selection. However, the improved Skyline algorithm still runs short of computing resources when processing massive web service data, resulting in a significant decrease in computing speed and even machine stalls. In view of this, this paper parallelizes the improved Skyline algorithm on the Spark platform. Experiments show that the parallelized Skyline algorithm handles massive web service data better.
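To make the dominance test and the region-partitioning idea concrete, here is a minimal Python sketch (minimizing every attribute; the function names and the simple one-dimensional banding scheme are illustrative, and in the paper the per-region step would run as parallel Spark partitions):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every dimension and strictly
    better in at least one (smaller is better here)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def skyline(points):
    """Naive skyline: keep every point not dominated by another."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def skyline_partitioned(points, bands=4):
    """Region-based variant in the spirit of the paper: split the data
    into bands on the first attribute, take each band's (small, cheap)
    local skyline, then one final pass removes cross-band dominated
    points."""
    lo = min(p[0] for p in points)
    hi = max(p[0] for p in points)
    width = (hi - lo) / bands or 1.0
    buckets = [[] for _ in range(bands)]
    for p in points:
        idx = min(int((p[0] - lo) / width), bands - 1)
        buckets[idx].append(p)
    candidates = [p for b in buckets if b for p in skyline(b)]
    return skyline(candidates)
```

The pruning pays off because points dominated inside their own region never reach the final (quadratic) merge step.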
Details
The low-level data processing of the Cherenkov Telescope Array (CTA), and indeed of all other existing Cherenkov telescopes, can be broken into four general steps: 1) the processing of air-shower event image time-series, 2) the stereo reconstruction of the incident air showers, 3) the discrimination of gamma-ray-induced showers from those induced by cosmic rays, and 4) the determination of the overall system response. The final output for science users is a list of reconstructed gamma-ray-like events and their associated parameters, along with a set of instrumental response functions needed for doing astrophysics. We present a Python-based framework, ctapipe, for writing the algorithms required for these processing steps, along with a reference prototype pipeline. The code is written with a focus on simplicity and usability by developers with a diverse range of skill sets, and leverages existing code from the science community (AstroPy, SciPy/NumPy, scikit-learn, etc.). This concept is intended to be a prototype for the final CTA low-level data processing pipeline, allowing physicists to quickly explore low-level Cherenkov telescope data and develop new algorithms. Thanks to the framework's modularity, computer engineers and data scientists will be able to simultaneously optimize the algorithms and parallelize them using modern computing and big-data architectures to support the high data volumes of CTA.
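The four steps can be pictured as a chain of per-event stages. The class and function names below are hypothetical placeholders chosen to illustrate the pipeline shape only; they are not the real ctapipe API.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the processing stages described above;
# the real ctapipe classes and signatures differ.
@dataclass
class Event:
    waveforms: list            # per-pixel time series (step 1 input)
    image: list = None         # calibrated/integrated image
    direction: tuple = None    # reconstructed shower direction
    gammaness: float = None    # gamma/hadron score

def calibrate(ev):             # step 1: image time-series processing
    ev.image = [sum(w) for w in ev.waveforms]
    return ev

def reconstruct(ev):           # step 2: stereo reconstruction (stub)
    ev.direction = (0.0, 0.0)
    return ev

def classify(ev):              # step 3: gamma/hadron discrimination (stub)
    ev.gammaness = 0.5
    return ev

def pipeline(events, stages=(calibrate, reconstruct, classify)):
    """Run each event through the stage chain lazily; step 4 (response
    functions) would be computed over the full processed sample."""
    for ev in events:
        for stage in stages:
            ev = stage(ev)
        yield ev
```

Keeping each stage a plain function over an event record is what lets the stages be swapped, profiled, and parallelized independently.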
Details
ISBN:
(Print) 9783319993164; 9783319993157
This paper describes the application of hierarchical temporal memory (HTM) to the task of anomaly detection in human motions. A number of model experiments with the well-known Carnegie Mellon University motion dataset have been carried out. An extended version of HTM is proposed, in which feedback on the movement of the sensor's focus across the video frame is added, as well as intermediate processing of the signal transmitted from the lower layers of the hierarchy to the upper ones. By using elements of reinforcement learning and feedback on focus movement, the HTM's temporal pooler incorporates information about the next position of the focus, simulating human saccadic movements. Processing the output of the temporal memory stabilizes the recognition process in large hierarchies.
Details
ISBN:
(Digital) 9781728152448
ISBN:
(Print) 9781728152455
In this paper, we propose a Zynq-based defogging algorithm for low-light images. First, we explain the background and significance of the topic; we then study several Retinex algorithms, analyzing and comparing them to select the most suitable one. The VDMA module sends data to a 7-inch LCD module (AN070) for display control. A mainstream pipelined design with parallel processing is then used to port the Retinex-based image-defogging algorithm efficiently to the FPGA. Vivado HLS is used to generate the Retinex algorithm IP cores, including RGB-to-HSV color-space conversion, linear stretching of the S (saturation) channel, a logarithmic operation on the V (value) channel, Gaussian blur, and other related algorithm modules. Finally, the system is downloaded, debugged, and optimized, and the defogging effect of the algorithm is summarized from various aspects.
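The per-pixel chain described (RGB→HSV conversion, linear stretch of S, logarithmic operation on V) can be sketched in a few lines of Python; the gain factor and the exact log curve below are illustrative assumptions, not constants from the paper, and the real design runs this per pixel in an FPGA pipeline rather than in software:

```python
import colorsys
import math

def enhance_pixel(r, g, b, s_gain=1.3):
    """Sketch of the per-pixel enhancement: convert RGB (components in
    0..1) to HSV, linearly stretch saturation, apply a log curve to the
    value channel to lift dark regions, and convert back to RGB."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    s = min(1.0, s * s_gain)                # linear stretch of S
    v = math.log1p(v * 9) / math.log(10)    # log curve on V, maps [0,1]->[0,1]
    return colorsys.hsv_to_rgb(h, s, v)
```

The log1p curve leaves white pixels at 1.0 but roughly triples the value of very dark pixels, which is the desired low-light behavior.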
Details
ISBN:
(Print) 9781538694039
With the rapid development of the Internet and the continuous rise in the number of network users, network traffic in various regions is increasing rapidly. Faced with high-speed, high-throughput network environments, traditional packet capture methods and their processing capabilities cannot reach the required speed, which results in severe packet loss. This paper focuses on a high-performance packet acquisition and distribution method that breaks through the performance bottleneck of general-purpose servers and network cards. It studies a packet capture method based on the DPDK platform and uses the hash values computed by RSS to improve the efficiency of packet distribution, realizing a path from high-performance acquisition to efficient multi-core parallel processing. This method can effectively reduce packet loss and improve the packet processing rate. It can also reduce resource waste and network overhead in traffic capture and distribution. Preliminary experiments show that DPDK-based traffic processing has obvious advantages over PF_RING and Netmap in data processing speed.
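The RSS idea of hashing flow fields so that all packets of one flow land on the same worker core can be modeled in a few lines; real RSS computes a Toeplitz hash in the NIC hardware, and the CRC32 used here is only an illustrative stand-in, as are the function names:

```python
import binascii

def rss_style_core(src_ip, dst_ip, src_port, dst_port, n_cores):
    """Toy model of RSS-style distribution: hash the flow fields and
    map the result to a worker core, so every packet of a given flow
    is handled by the same core (no cross-core reordering)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return binascii.crc32(key) % n_cores

def distribute(packets, n_cores):
    """Fan packets out into per-core queues by flow hash."""
    queues = [[] for _ in range(n_cores)]
    for pkt in packets:
        queues[rss_style_core(*pkt, n_cores)].append(pkt)
    return queues
```

Keeping a flow pinned to one core is what makes lock-free per-core processing possible downstream.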
Details
ISBN:
(Digital) 9781728152448
ISBN:
(Print) 9781728152455
Principal component analysis (PCA) is a classical unsupervised linear dimension-reduction algorithm in machine learning. PCA finds the optimal dimension-reduction direction by maximizing the variance of the projected points. The classical optimization methods are gradient ascent or stochastic gradient ascent, both of which depend on gradient information. However, these methods easily fall into local extrema, which hinders global optimization. In this paper, the quantum whale algorithm is introduced into the optimization part of PCA, which enhances the parallelism and globality of the optimization and removes the dependence on gradients.
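The variance-maximization objective can be made concrete with a small gradient-free baseline: the plain-Python power-iteration sketch below finds the unit vector w maximizing the variance of the projections w·x (the paper swaps this optimizer for a quantum whale algorithm at the same step). The function name and iteration count are illustrative.

```python
def first_pc(data, iters=200):
    """Find the first principal component of row-vector data by power
    iteration on the sample covariance matrix: the unit vector w that
    maximizes the variance of the projections w . x."""
    n, d = len(data), len(data[0])
    mean = [sum(col) / n for col in zip(*data)]
    xs = [[x[j] - mean[j] for j in range(d)] for x in data]
    # Sample covariance matrix C = X^T X / n (data already centered).
    cov = [[sum(x[i] * x[j] for x in xs) / n for j in range(d)]
           for i in range(d)]
    w = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * w[j] for j in range(d)) for i in range(d)]
        norm = sum(v * v for v in w) ** 0.5
        w = [v / norm for v in w]        # renormalize each step
    return w
```

For data lying along the diagonal y = x, the recovered direction is (1, 1)/√2, the direction of maximal projected variance.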
No comments yet