ISBN (print): 0780372239
The proceedings contain 17 papers. The topics discussed include: visualization challenges for a new cyberpharmaceutical computing paradigm; Delaunay based shape reconstruction from large data; parallel point reprojection; visualizing ocean currents with color and dithering; parallel Lagrangian visualization applied to natural convective flows; real-time out-of-core visualization of particle traces; a unified infrastructure for parallel out-of-core isosurface and volume rendering of unstructured grids; parallel view-dependent isosurface extraction using multi-pass occlusion culling; parallel rendering with k-way replication; sort-last parallel rendering for viewing extremely large data sets on tile displays; multiresolution view-dependent splat based volume rendering of large irregular data; parallelizing a high accuracy hardware-assisted volume renderer for meshes with arbitrary polyhedra; and scalable interactive volume rendering using off-the-shelf components.
ISBN (print): 078038122X
The proceedings contain 14 papers. The topics discussed include: visibility-based prefetching for interactive out-of-core rendering; efficient parallel out-of-core isosurface extraction; a parallel framework for simplification of massive meshes; from cluster to wall with VTK; SLIC: scheduled linear image compositing for parallel volume rendering; sort-first, distributed memory parallel visualization and rendering; parallel cell projection rendering of adaptive mesh refinement data; a multi-layered image cache for scientific visualization; real-time volume rendering of time-varying data using a fragment-shader compression approach; distributed interactive ray tracing of dynamic scenes; distributed interactive ray tracing for large volume visualization; a PC cluster system for simultaneous interactive volumetric modeling and visualization; the feature tree: visualizing feature tracking in distributed AMR datasets; and improving occlusion query efficiency with occupancy maps.
With the advancement of satellite communication technology, the maritime Internet of Things (IoT) has made significant progress. As a result, vast amounts of Automatic Identification System (AIS) data from global vessels are transmitted to various maritime stakeholders through Maritime IoT systems. AIS data contains a large amount of dynamic and static information that requires effective and intuitive visualization for comprehensive analysis. However, two major deficiencies challenge current visualization models: a lack of consideration for interactions between distant pixels and low efficiency. To address these issues, we developed a large-scale vessel trajectory visualization algorithm, called the Non-local Kernel Density Estimation (NLKDE) algorithm, which incorporates a non-local convolution process. It accurately calculates the density distribution of vessel trajectories by considering correlations between distant pixels. Additionally, we implemented the NLKDE algorithm under a Graphics Processing Unit (GPU) framework to enable parallel computing and improve operational efficiency. Comprehensive experiments using multiple vessel trajectory datasets show that the NLKDE algorithm excels in vessel trajectory density visualization tasks, and the GPU-accelerated framework significantly shortens execution time to achieve real-time results. From both theoretical and practical perspectives, GPU-accelerated NLKDE provides technical support for real-time monitoring of vessel dynamics in complex waters and contributes to constructing maritime intelligent transportation systems. The code for this paper can be accessed at: https://***/maohliang/GPU-NLKDE.
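To make the idea concrete, the following is a minimal, hypothetical sketch of grid-based trajectory density estimation with a non-local refinement step, written in plain NumPy. The function names, parameters, and patch-similarity weighting are illustrative assumptions, not the paper's actual NLKDE or GPU implementation; the brute-force per-pixel loop is exactly the part a GPU framework would parallelize.

import numpy as np

def local_kde(points, grid_size=128, bandwidth=2.0):
    """Rasterize (x, y) points in [0, 1)^2 and smooth with a separable Gaussian."""
    hist, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                bins=grid_size, range=[[0, 1], [0, 1]])
    radius = int(3 * bandwidth)                       # truncate kernel at 3 sigma
    x = np.arange(-radius, radius + 1)
    g = np.exp(-0.5 * (x / bandwidth) ** 2)
    g /= g.sum()
    smoothed = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 0, hist)
    smoothed = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, smoothed)
    return smoothed

def nonlocal_refine(density, patch=5, top_k=8, h=0.1):
    """Re-weight each pixel by its most similar patches anywhere on the grid,
    so distant but correlated traffic lanes reinforce each other."""
    n = density.shape[0]                              # assumes a square grid
    pad = patch // 2
    padded = np.pad(density, pad, mode="reflect")
    patches = np.lib.stride_tricks.sliding_window_view(padded, (patch, patch))
    flat = patches.reshape(n * n, -1)
    centers = density.reshape(-1)
    refined = np.empty(n * n)
    # Brute-force O(n^4) comparison, kept for clarity on small grids; a GPU
    # kernel would evaluate each pixel independently in parallel.
    for i in range(n * n):
        d2 = ((flat - flat[i]) ** 2).mean(axis=1)     # patch dissimilarity
        idx = np.argpartition(d2, top_k)[:top_k]      # most similar patches
        w = np.exp(-d2[idx] / (h ** 2))
        refined[i] = np.sum(w * centers[idx]) / np.sum(w)
    return refined.reshape(n, n)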
Contour trees describe the topology of level sets in scalar fields and are widely used in topological data analysis and visualization. A main challenge of utilizing contour trees for large-scale scientific data is their computation at scale using high-performance computing. To address this challenge, recent work has introduced distributed hierarchical contour trees for distributed computation and storage of contour trees. However, effective use of these distributed structures in analysis and visualization requires subsequent computation of geometric properties and branch decomposition to support contour extraction and exploration. In this work, we introduce distributed algorithms for augmentation, hypersweeps, and branch decomposition that enable parallel computation of geometric properties, and support the use of distributed contour trees as query structures for scientific exploration. We evaluate the parallel performance of these algorithms and apply them to identify and extract important contours for scientific visualization.
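As a rough illustration of the kind of query structure the abstract refers to, the sketch below performs a serial, single-node accumulation over an already-computed contour tree, in the spirit of a hypersweep that aggregates a per-node quantity from the leaves toward the root. The tree representation and function names are assumptions for illustration; the paper's algorithms operate on distributed hierarchical contour trees split across ranks.

from collections import defaultdict

def sweep_up(parent, local_count):
    """Accumulate a quantity (e.g. regular-vertex count) from leaves toward the
    root so each node ends up with the total for its subtree.

    parent[v]      -> parent of v in the contour tree (the root maps to itself)
    local_count[v] -> quantity contributed by node v alone
    """
    children = defaultdict(list)
    root = None
    for v, p in parent.items():
        if p == v:
            root = v
        else:
            children[p].append(v)

    subtree = dict(local_count)
    # Iterative post-order traversal, then accumulate bottom-up.
    stack, order = [root], []
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])
    for v in reversed(order):
        for c in children[v]:
            subtree[v] += subtree[c]
    return subtree

# Tiny example: a path 0 -> 1 -> 2 with a side branch 3 -> 1.
parent = {0: 1, 3: 1, 1: 2, 2: 2}
counts = sweep_up(parent, {0: 4, 1: 2, 2: 1, 3: 3})
assert counts[2] == 10   # the root sees the total of all contributions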
The promotion of large-scale applications of reinforcement learning (RL) requires efficient training computation. While existing parallel RL frameworks encompass a variety of RL algorithms and parallelization techniques, their heavyweight communication layers prevent them from reaching the hardware's throughput limit and best training performance on a single desktop. In this article, we propose Spreeze, a lightweight parallel framework for RL that efficiently utilizes the hardware resources of a single desktop to approach the throughput limit. We asynchronously parallelize the experience sampling, network update, performance evaluation, and visualization operations, and employ multiple efficient data transmission techniques to transfer various types of data between processes. The framework automatically adjusts its parallelization hyperparameters to the computing ability of the hardware device in order to perform efficient large-batch updates. Exploiting the structure of "Actor-Critic" RL algorithms, our framework uses dual GPUs to update the actor and critic networks independently, further improving throughput. Simulation results show that our framework achieves an experience sampling rate of up to 15,000 Hz and a network update rate of up to 370,000 Hz using only a personal desktop computer, an order of magnitude higher than other mainstream parallel RL frameworks, resulting in a 73% reduction of training time. Our work on fully utilizing the hardware resources of a single desktop computer is fundamental to enabling efficient large-scale distributed RL training.
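A minimal sketch of the asynchronous sampling/update split described above, using only the Python standard library. The process roles, queue size, and placeholder transitions are illustrative assumptions rather than the Spreeze implementation, which additionally uses efficient inter-process transfer mechanisms and dual-GPU actor/critic updates.

import multiprocessing as mp
import random

def sampler(queue, steps=1000):
    """Actor process: generate transitions and push them without waiting for updates."""
    for _ in range(steps):
        transition = (random.random(), random.randint(0, 3), random.random())
        queue.put(transition)          # (state, action, reward) placeholder
    queue.put(None)                    # sentinel: sampling finished

def updater(queue, batch_size=256):
    """Learner process: drain the queue and perform large-batch updates."""
    batch, updates = [], 0
    while True:
        item = queue.get()
        if item is None:
            break
        batch.append(item)
        if len(batch) >= batch_size:
            # Placeholder for a gradient step; the actor and critic networks
            # would be updated here, potentially on separate GPUs.
            updates += 1
            batch.clear()
    print(f"performed {updates} batched updates")

if __name__ == "__main__":
    q = mp.Queue(maxsize=4096)
    procs = [mp.Process(target=sampler, args=(q,)),
             mp.Process(target=updater, args=(q,))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()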
This research explores a novel paradigm for preserving topological segmentations in existing error-bounded lossy compressors. Today's lossy compressors rarely consider preserving topologies such as Morse-Smale complexes, and the discrepancies in topology between original and decompressed datasets could potentially result in erroneous interpretations or even incorrect scientific conclusions. In this paper, we focus on preserving Morse-Smale segmentations in 2D/3D piecewise linear scalar fields, targeting the precise reconstruction of minimum/maximum labels induced by the integral line of each vertex. The key is to derive a series of edits during compression time. These edits are applied to the decompressed data, leading to an accurate reconstruction of segmentations while keeping the error within the prescribed error bound. To this end, we develop a workflow that fixes extrema and integral lines alternately until convergence within a finite number of iterations. We accelerate each workflow component with shared-memory/GPU parallelism to make the performance practical for coupling with compressors. We demonstrate use cases with fluid dynamics, ocean, and cosmology application datasets, achieving significant acceleration with an NVIDIA A100 GPU.
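For intuition, the sketch below computes the maximum label induced by steepest-ascent paths on a 2D grid and reports the vertices whose labels change after lossy compression, i.e., the sites where value edits would need to be derived. This plain NumPy illustration assumes a simple vertex-neighborhood ascent rule and is not the authors' alternating fix-extrema/fix-integral-line workflow or its shared-memory/GPU acceleration.

import numpy as np

def steepest_ascent_labels(field):
    """For each vertex, follow the highest neighbor until reaching a local
    maximum; return that maximum's flat index as the vertex label."""
    h, w = field.shape
    labels = np.empty((h, w), dtype=np.int64)

    def highest_neighbor(i, j):
        best = (i, j)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and field[ni, nj] > field[best]:
                    best = (ni, nj)
        return best

    for i in range(h):
        for j in range(w):
            ci, cj = i, j
            while True:
                ni, nj = highest_neighbor(ci, cj)
                if (ni, nj) == (ci, cj):
                    break               # reached a local maximum
                ci, cj = ni, nj
            labels[i, j] = ci * w + cj
    return labels

def label_mismatches(original, decompressed):
    """Vertices whose maximum label changed after lossy compression; these are
    the sites where edits (within the error bound) would be derived."""
    return np.argwhere(steepest_ascent_labels(original)
                       != steepest_ascent_labels(decompressed))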
ISBN (print): 1581132379
The proceedings contain 12 papers. The topics discussed include: parallel lumigraph reconstruction; parallel visualization of large-scale aerodynamics calculations: a case study on the Cray T3E; hybrid scheduling for parallel rendering using coherent ray tasks; exploiting frame coherence with the temporal depth buffer in a distributed computing environment; transparent distributed processing for rendering; web based collaborative visualization of distributed and parallel simulation; scalable distributed visualization using off-the-shelf components; and interactive volume segmentation with the PAVLOV Architecture.