ISBN: 9781467385176 (print)
Understanding the temporal evolution of features of interest requires the ability to: (i) extract features from each snapshot; (ii) correlate them over time; and (iii) understand the resulting tracking graph. This paper provides new solutions for the last two challenges in the context of large-scale turbulent combustion simulations. In particular, we present a simple and general algorithm to correlate hierarchical features embedded in time-dependent surfaces. This, for the first time, provides a parameter-independent approach to track embedded features. Furthermore, we provide a new technique to adaptively change feature parameters over time, both to alleviate artifacts due to insufficient temporal resolution and to simplify the resulting tracking graphs to promote new scientific insights. Our solutions are integrated into a general and flexible analysis environment that allows users to interactively explore the spatio-temporal behavior of large-scale simulations. We demonstrate the results using the analysis of extinction holes in turbulent combustion as the primary case study, along with a number of other applications that illustrate the generality of the approach.
ISBN: 9781424420025 (print)
The principal goal of visualization is to create a visual representation of complex information and large datasets in order to gain insight and understanding. Our current research focuses on methods for handling uncertainty stemming from data acquisition and algorithmic sources. Most visualization methods, especially those applied to 3D data, implicitly use some form of classification or segmentation to eliminate unimportant regions and illuminate those of interest. The process of classification is inherently uncertain; in many cases the source data contains error and noise, and data transformations such as filtering can further introduce and magnify the uncertainty. More advanced classification methods rely on some sort of model or statistical method to determine what is and is not a feature of interest. While these classification methods can model uncertainty or fuzzy probabilistic memberships, they typically only provide discrete, maximum a posteriori memberships. It is vital that visualization methods provide the user access to uncertainty in classification or image generation if the results of the visualization are to be trusted.
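The contrast the abstract draws, between fuzzy probabilistic memberships and the discrete maximum a posteriori labels that most pipelines actually expose, can be illustrated with a minimal sketch. The data and classifier scores below are hypothetical stand-ins, not from the paper:

```python
import numpy as np

# Hypothetical per-voxel class scores from some classifier (3 voxels, 3 classes).
scores = np.array([[2.0, 1.9, 0.1],
                   [5.0, 0.2, 0.1],
                   [1.0, 1.0, 1.0]])

# Probabilistic (fuzzy) memberships: a softmax preserves the uncertainty.
p = np.exp(scores - scores.max(axis=1, keepdims=True))
memberships = p / p.sum(axis=1, keepdims=True)

# Discrete maximum a posteriori labels: the uncertainty is discarded.
labels = memberships.argmax(axis=1)

print(labels)                  # [0 0 0] -- all three voxels get the same label...
print(memberships[0].round(2)) # [0.49 0.44 0.07] -- yet voxel 0 is nearly a coin flip
```

The point the paper makes is visible here: the hard labels alone cannot distinguish the confident voxel (row 1) from the ambiguous ones (rows 0 and 2), which is exactly the information an uncertainty-aware visualization should surface.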
ISBN: 078039464X (print)
The discrete nature of categorical data makes it a particular challenge for visualization. Methods that work very well for continuous data are often hardly usable with categorical dimensions. Only a few methods deal properly with such data, mostly because the discrete nature of categorical data does not translate well into the continuous domains of space and color. Parallel Sets is a new visualization method that adopts the layout of parallel coordinates, but replaces the individual data points with a frequency-based representation. This abstracted view, combined with a set of carefully designed interactions, supports visual data analysis of large and complex data sets. The technique allows efficient work with metadata, which is particularly important when dealing with categorical datasets. By creating new dimensions from existing ones, for example, the user can filter the data according to his or her current needs. We also present the results from an interactive analysis of CRM data using Parallel Sets. We demonstrate how the flexible layout eases the process of knowledge crystallization, especially when combined with a sophisticated interaction scheme.
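The frequency-based abstraction at the heart of Parallel Sets can be sketched in a few lines: axis boxes are sized by category frequency, and the ribbons between adjacent axes by co-occurrence counts. This is a minimal illustration of that counting step only, with invented toy records; it is not the authors' implementation:

```python
from collections import Counter

# Toy categorical records (hypothetical): each row is one observation.
records = [
    {"sex": "F", "class": "1st", "survived": "yes"},
    {"sex": "F", "class": "3rd", "survived": "no"},
    {"sex": "M", "class": "1st", "survived": "no"},
    {"sex": "M", "class": "3rd", "survived": "no"},
    {"sex": "F", "class": "1st", "survived": "yes"},
]
axes = ["sex", "class", "survived"]  # order of the parallel axes

# Box widths on each axis: frequency of each category.
box_widths = {a: Counter(r[a] for r in records) for a in axes}

# Ribbon widths between adjacent axes: co-occurrence frequencies.
ribbons = {
    (a, b): Counter((r[a], r[b]) for r in records)
    for a, b in zip(axes, axes[1:])
}

print(box_widths["sex"])          # Counter({'F': 3, 'M': 2})
print(ribbons[("sex", "class")])  # ('F', '1st') occurs twice, etc.
```

Because only counts are drawn, the visual complexity depends on the number of categories rather than the number of records, which is what lets the technique scale to large data sets.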
Smart grid applications require real-time analysis, responses on the order of milliseconds, and high reliability because of the mission-critical structure of the power grid system. The only way to satisfy these...
ISBN: 9783903176157 (print)
Common security tools generate a large amount of data suitable for further analysis. However, the raw form of the data is often too complex, and useful information gets lost in a large volume of records. In this paper, we propose a system for visualization of the data generated by a DNS firewall and outline a process of visually emphasizing information important to incident handlers. Our prototype suggests that such visualization is feasible while striking a balance between the amount of displayed information and the level of detail.
In this work, we present an effective and scalable system for multivariate volume data visualization and analysis with a novel Transfer Function (TF) interface design that tightly couples parallel coordinates plots (P...
The energy performance of large building portfolios is challenging to analyze and monitor, as current analysis tools are not scalable or they present derived and aggregated data at too coarse a level. We conducted a visualization design study, beginning with a thorough work domain analysis and a characterization of data and task abstractions. We describe generalizable visual encoding design choices for time-oriented data framed in terms of matches and mismatches, as well as considerations for workflow design. Our designs address several research questions pertaining to scalability, view coordination, and the inappropriateness of line charts for derived and aggregated data due to a combination of data semantics and domain convention. We also present guidelines relating to familiarity and trust, as well as methodological considerations for visualization design studies. Our designs were adopted by our collaborators and incorporated into the design of an energy analysis software application that will be deployed to tens of thousands of energy workers in their client base.
ISBN: 9781424422487 (print)
This paper discusses how to better support collaborative work for large scientific projects using visualization. Particular consideration is given to knowledge sharing and the social aspects of collaboration. The goal is to provide a roadmap for creating next-generation collaborative technologies with visual and social augmentation.
Trelliscope emanates from the Trellis Display framework for visualization and the Divide and Recombine (D&R) approach to analyzing large complex data. In Trellis, the data are broken up into subsets, a visualizati...
ISBN: 9798331516932; 9798331516925 (print)
Dimensionality reduction (DR) is a well-established approach for the visualization of high-dimensional data sets. While DR methods are often applied to typical DR benchmark data sets in the literature, they might suffer from high runtime complexity and memory requirements, making them unsuitable for large data visualization, especially in environments outside of high-performance computing. To perform DR on large data sets, we propose the use of out-of-sample extensions. Such extensions allow inserting new data into existing projections, which we leverage to iteratively project data into a reference projection that consists only of a small manageable subset. This process makes it possible to perform DR out-of-core on large data, which would otherwise not be possible due to memory and runtime limitations. For metric multidimensional scaling (MDS), we contribute an implementation with out-of-sample projection capability since typical software libraries do not support it. We provide an evaluation of the projection quality of five common DR algorithms (MDS, PCA, t-SNE, UMAP, and autoencoders) using quality metrics from the literature and analyze the trade-off between the size of the reference set and projection quality. The runtime behavior of the algorithms is also quantified with respect to reference set size, out-of-sample batch size, and dimensionality of the data sets. Furthermore, we compare the out-of-sample approach to other recently introduced DR methods, such as PaCMAP and TriMAP, which claim to handle larger data sets than traditional approaches. To showcase the usefulness of DR on this large scale, we contribute a use case where we analyze ensembles of streamlines amounting to one billion projected instances.
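The out-of-sample idea described above, fitting a projection on a small reference subset and then mapping the remaining points in bounded-memory batches, can be sketched with PCA, the simplest of the five evaluated methods. This is an illustrative sketch with synthetic data, not the paper's MDS implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100_000, 16))  # stand-in for a large data set

# 1. Fit the projection on a small reference subset only.
ref = data[rng.choice(len(data), size=1_000, replace=False)]
mean = ref.mean(axis=0)
# PCA via SVD of the centered reference set.
_, _, vt = np.linalg.svd(ref - mean, full_matrices=False)
components = vt[:2]                    # top-2 principal directions

# 2. Out-of-sample extension: project everything else batch-wise.
# Each batch needs only the fixed mean and components, so peak memory
# is bounded by the batch size, not the full data set.
def project(batch):
    return (batch - mean) @ components.T

embedding = np.vstack([project(b) for b in np.array_split(data, 100)])
print(embedding.shape)  # (100000, 2)
```

For methods without a closed-form mapping (MDS, t-SNE, UMAP), the same pattern applies, but step 2 requires a dedicated out-of-sample solver rather than a matrix product, which is the gap the paper's MDS contribution fills.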