ISBN (print): 9781728149417
In this paper, we present Hi-D maps, a novel method for the visualization of multi-dimensional categorical data. Our work addresses the scarcity of techniques for visualizing a large number of data-dimensions in an effective and space-efficient manner. We have mapped the full data-space onto a 2D regular polygonal region. The polygon is cut hierarchically with lines parallel to a user-controlled, ordered sequence of sides, each representing a dimension. We have used multiple visual cues such as orientation, thickness, color, countable glyphs, and text to depict cross-dimensional information. We have added interactivity and hierarchical browsing to facilitate flexible exploration of the display: small areas can be scrutinized for details. Thus, our method is also easily extendable to visualize hierarchical information. Our glyph animations add an engaging aesthetic during interaction. Like many visualizations, Hi-D maps become less effective when a large number of dimensions stresses perceptual limits, but Hi-D maps may add clarity before those limits are reached.
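The hierarchical cutting idea, projected down to one dimension, can be sketched as follows. This is not the authors' implementation; it is a minimal illustration, assuming records are dicts of categorical attributes, of how a region is recursively split in proportion to category counts along an ordered sequence of dimensions:

```python
from collections import Counter

def hierarchical_partition(records, dims, lo=0.0, hi=1.0):
    """Recursively split the interval [lo, hi) proportionally to
    category counts along an ordered sequence of categorical
    dimensions; returns ((start, end), category_path) regions."""
    if not dims or not records:
        return [((lo, hi), tuple())]
    dim = dims[0]
    counts = Counter(r[dim] for r in records)
    total = sum(counts.values())
    regions = []
    start = lo
    for cat in sorted(counts):
        end = start + counts[cat] / total * (hi - lo)
        sub = [r for r in records if r[dim] == cat]
        for (a, b), path in hierarchical_partition(sub, dims[1:], start, end):
            regions.append(((a, b), (cat,) + path))
        start = end
    return regions
```

In the 2D Hi-D map each level of this recursion would cut with lines parallel to the next side of the polygon; the 1D interval stands in for one such region.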
ISBN (digital): 9781728184685
ISBN (print): 9781728184692
This paper presents a framework that fully leverages the advantages of a deferred rendering approach for the interactive visualization of large-scale datasets. Geometry buffers (G-Buffers) are generated and stored in situ, and shading is performed post hoc in an interactive image-based rendering front end. This decoupled framework has two major advantages. First, the G-Buffers only need to be computed and stored once, which corresponds to the most expensive part of the rendering pipeline. Second, the stored G-Buffers can later be consumed in an image-based rendering front end that enables users to interactively adjust various visualization parameters, such as the applied color map or the strength of ambient occlusion, where suitable choices are often not known a priori. This paper demonstrates the use of Cinema Darkroom (CD) on several real-world datasets, highlighting CD's ability to effectively decouple the complexity and size of the dataset from its visualization.
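The decoupling can be illustrated with a toy re-shading pass. This is a hypothetical sketch, not Cinema Darkroom's actual API: the stored G-Buffer is reduced here to per-pixel (scalar, n·l) pairs, and only this cheap image-space pass reruns when the user changes the color map:

```python
def shade(gbuffer, colormap):
    """Shade a stored G-Buffer post hoc. gbuffer is a list of rows of
    (scalar, n_dot_l) pairs with scalar in [0, 1]; changing the color
    map only re-runs this pass, never the expensive geometry stage."""
    image = []
    for row in gbuffer:
        out_row = []
        for scalar, n_dot_l in row:
            r, g, b = colormap(scalar)
            lit = max(n_dot_l, 0.0)          # simple diffuse term
            out_row.append((r * lit, g * lit, b * lit))
        image.append(out_row)
    return image

grayscale = lambda s: (s, s, s)              # stand-in color map
```

Swapping `grayscale` for any other transfer function re-shades the frame without touching the simulation or the stored buffers.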
ISBN (digital): 9781728180144
ISBN (print): 9781728180151
The Morse-Smale complex is a well-studied topological structure that represents the gradient flow behavior of a scalar function. It supports multi-scale topological analysis and visualization of large scientific data. Its computation poses significant algorithmic challenges when considering large-scale data and increased feature complexity. Several parallel algorithms have been proposed for the fast computation of the 3D Morse-Smale complex, but the non-trivial structure of the saddle-saddle connections is not amenable to parallel computation. This paper describes a fine-grained parallel method for computing the Morse-Smale complex that is implemented on a GPU. The saddle-saddle reachability is first determined via a transformation into a sequence of vector operations, followed by the path traversal, which is achieved via a sequence of matrix operations. Computational experiments show that the method achieves up to 7× speedup over current shared-memory implementations.
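The recasting of reachability into vector and matrix operations can be sketched on the CPU as follows. This is an illustrative boolean frontier propagation over an adjacency matrix, not the paper's GPU implementation:

```python
def reachable(adj, src):
    """Compute which nodes are reachable from src. Each iteration is a
    boolean matrix-vector product over the frontier, the shape of
    computation that maps well to data-parallel hardware."""
    n = len(adj)
    frontier = [i == src for i in range(n)]
    visited = frontier[:]
    while any(frontier):
        # next frontier = adj^T applied to frontier, minus visited nodes
        nxt = [
            (not visited[j]) and any(frontier[i] and adj[i][j] for i in range(n))
            for j in range(n)
        ]
        visited = [v or x for v, x in zip(visited, nxt)]
        frontier = nxt
    return visited
```

On a GPU the inner `any(...)` reduction becomes one parallel sparse matrix-vector operation per iteration, replacing the irregular pointer chasing of a serial traversal.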
ISBN (digital): 9781728184685
ISBN (print): 9781728184692
This short paper considers time-to-solution for two in situ visualization paradigms: in-line and in-transit. It is a follow-on work to two previous studies. The first study [10] considered time-to-solution (wall clock time) and total cost (total node-seconds incurred) for a single visualization algorithm (isosurfacing). The second study [11] considered only total cost and added a second algorithm (volume rendering). This short paper completes the evaluation by considering time-to-solution for both algorithms. In particular, it extends the first study by adding insights from including a second algorithm at larger scale and by performing a more extended and formal analysis of time-to-solution. Further, it complements the second study by showing that the best in situ configuration to choose can differ when considering time-to-solution rather than cost. It also makes use of the same data corpus used in the second study, although that corpus has been refactored with time-to-solution in mind.
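A back-of-envelope model makes the time-versus-cost trade-off between the two paradigms concrete. Both the model and all numbers here are hypothetical illustrations, not the paper's measurements:

```python
def in_line(n_sim, t_sim, t_vis):
    """In-line: visualization blocks the simulation on the same nodes.
    Returns (time-to-solution, cost in node-seconds) for one cycle."""
    time = t_sim + t_vis
    return time, n_sim * time

def in_transit(n_sim, n_vis, t_sim, t_vis, t_transfer):
    """In-transit: dedicated vis nodes overlap with the next sim step;
    the simulation only pays the data transfer, but every allocated
    node (sim + vis) accrues cost for the whole cycle."""
    time = max(t_sim + t_transfer, t_vis)
    return time, (n_sim + n_vis) * time
```

Under such a model, in-transit can win on time-to-solution (the transfer is cheaper than running the algorithm in-line) while losing on cost (extra nodes idle when visualization is fast), which is why the preferred configuration can flip between the two metrics.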
ISBN (print): 9781728128504
Today there is abundant collected data on various diseases in the medical sciences. By probing these data, physicians can reach new findings about diseases and procedures for dealing with them. Clinical data is a collection of large and complex datasets that commonly appear in multidimensional formats, and analyzing it has been recognized as a major challenge in modern data analysis. There is therefore an urgent need for new and effective techniques to deal with such huge datasets. This paper presents an application of a new visual data mining platform for visual analysis of stroke data, predicting the level of risk for people who share the characteristics of stroke patients. The visualization platform uses a hierarchical clustering algorithm to aggregate the data and map coherent groups of data points to the same visual elements, curved 'super-polylines', which significantly reduces the visual complexity of the visualization. In addition, to enable users to interactively manipulate data items (super-polylines) in the parallel coordinates geometry through mouse rollover and clicking, we created many 'virtual nodes' along the multiple axes of the visualization based on the hierarchical structure of the value range of selected data attributes. The experimental results show that with this visual platform, and with a fully transparent manner of data processing, we can easily verify research hypotheses and reach conclusions to research questions through human-data and human-algorithm interactions.
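The aggregation step can be sketched as follows, assuming cluster labels have already been produced by a hierarchical clustering pass. Each cluster collapses to one per-axis mean, the 'super-polyline', plus a member count usable for visual weighting; names here are illustrative, not the platform's API:

```python
def super_polylines(points, labels):
    """Collapse each cluster of multidimensional points into one
    'super-polyline': the per-axis mean plus the cluster size.
    Drawing one curve per cluster instead of one per record is what
    reduces visual clutter in the parallel coordinates view."""
    groups = {}
    for p, lab in zip(points, labels):
        groups.setdefault(lab, []).append(p)
    lines = {}
    for lab, members in groups.items():
        dims = len(members[0])
        mean = [sum(m[d] for m in members) / len(members) for d in range(dims)]
        lines[lab] = (mean, len(members))
    return lines
```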
Modern real-time visualizations of large-scale datasets require constant high frame rates while their datasets might exceed the available graphics memory. This requires sophisticated upload strategies from host memory...
Because of the spatial separation of high performance compute resources and immersive visualization systems, their combined use requires remote visualization. Remote rendering incurs increased latency from user intera...
ISBN (digital): 9781728184685
ISBN (print): 9781728184692
3D particle data is relevant for a wide range of scientific domains, from molecular dynamics to astrophysics. Simulations in these domains can produce datasets containing millions or billions of particles, and rendering needs to be high-quality and interactive to support scientists in exploring and understanding the structure of their data. One general baseline approach is to represent particles as spheres and employ ray tracing as the rendering technique. However, ray tracing requires the data to be organized in acceleration data structures such as bounding volume hierarchies (BVHs) to achieve interactive frame rates. Modern GPUs provide hardware acceleration for traversing such data structures but have less memory than CPUs. In this paper, we evaluate different acceleration data structures for sphere-based datasets, including particle kD-trees, with respect to their scalability in both memory size and speed, and we analyze how these data structures can benefit from hardware acceleration. We show that bricking the data results in the most effective BVH, both fast to traverse utilizing hardware acceleration and with a reasonably small memory footprint. Additionally, we present a hybrid acceleration data structure that has negligible memory overhead and still ensures reasonable traversal speed. Based on our results, visualization tools and APIs for ray tracing can provide better overall performance by adapting to the needs of particle-centric application scenarios.
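The bricking idea can be sketched as a grid-based grouping of particle positions, where each brick becomes one BVH leaf with a tight bounding box. This is an illustrative sketch, not the paper's GPU code:

```python
def brick_particles(particles, brick_size):
    """Group particle positions into axis-aligned bricks; each brick
    becomes one BVH leaf (lo corner, hi corner, member particles).
    One leaf per brick instead of one per sphere trades a slightly
    looser hierarchy for a far smaller node count and memory footprint."""
    bricks = {}
    for x, y, z in particles:
        key = (int(x // brick_size), int(y // brick_size), int(z // brick_size))
        bricks.setdefault(key, []).append((x, y, z))
    leaves = []
    for key, pts in bricks.items():
        lo = tuple(min(p[d] for p in pts) for d in range(3))
        hi = tuple(max(p[d] for p in pts) for d in range(3))
        leaves.append((lo, hi, pts))
    return leaves
```

A BVH built over these per-brick boxes (rather than per-sphere bounds) is what hardware traversal units then consume; the ray-sphere tests inside a hit brick run in an intersection program.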
To address the need of highly efficient and scalable parallel flow visualization methods, we developed a flow visualization system for large unstructured simulation data using parallel 3D line integral convolution (LI...
We present the design and evaluation of an integrated problem-solving environment for cancer therapy analysis. The environment intertwines a statistical martingale model and a K-Nearest Neighbor approach with visual encodings, including novel interactive nomograms, in order to compute and explain a patient's probability of survival as a function of similar patients' results. A coordinated-views paradigm enables exploration of the multivariate, heterogeneous, and few-valued data from a large head and neck cancer repository. A visual scaffolding approach further enables users to build from familiar representations to unfamiliar ones. Evaluation with domain experts shows how this visualization approach and set of streamlined workflows enable the systematic and precise analysis of a patient's prognosis in the context of cohorts of similar patients. We describe the design lessons learned from this successful, multi-site remote collaboration.
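The K-Nearest Neighbor component, taken in isolation, can be sketched as follows. This is a hypothetical illustration, not the paper's model, which additionally intertwines a martingale estimator with the KNN result:

```python
def knn_survival(query, patients, k=3):
    """Estimate a survival probability as the fraction of the k most
    similar prior patients who survived. patients is a list of
    (feature_vector, survived) pairs; similarity is Euclidean
    distance on the feature vectors."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    ranked = sorted(patients, key=lambda p: dist2(query, p[0]))
    nearest = ranked[:k]
    return sum(survived for _, survived in nearest) / len(nearest)
```

The interactive nomograms described in the abstract would then expose how each neighbor and feature contributes to this estimate, so the prediction stays explainable to clinicians.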