The discrete nature of categorical data makes it a particular challenge for visualization. Methods that work very well for continuous data are often hardly usable with categorical dimensions. Only a few methods deal properly with such data, mostly because the discrete nature of categorical data does not translate well into the continuous domains of space and color. Parallel sets is a new visualization method that adopts the layout of parallel coordinates but substitutes the individual data points with a frequency-based representation. This abstracted view, combined with a set of carefully designed interactions, supports visual data analysis of large and complex data sets. The technique allows efficient work with metadata, which is particularly important when dealing with categorical datasets. By creating new dimensions from existing ones, for example, the user can filter the data according to his or her current needs. We also present the results of an interactive analysis of CRM data using parallel sets, demonstrating how the flexible layout eases the process of knowledge crystallization, especially when combined with a sophisticated interaction scheme.
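To make the frequency-based idea concrete, here is a minimal sketch (not the authors' implementation) of how a parallel-sets-style layout can be derived from raw records: each axis is divided into boxes sized by category frequency, and ribbons between adjacent axes are sized by joint frequencies. The example dataset and axis names are invented.

```python
# Sketch of a frequency-based layout in the spirit of Parallel Sets.
# Records and axis names below are illustrative, not from the paper.
from collections import Counter
from itertools import pairwise

records = [
    {"class": "3rd", "sex": "male",   "survived": "no"},
    {"class": "1st", "sex": "female", "survived": "yes"},
    {"class": "3rd", "sex": "male",   "survived": "no"},
    {"class": "2nd", "sex": "female", "survived": "yes"},
]
axes = ["class", "sex", "survived"]
n = len(records)

# Per-axis boxes: each category gets a horizontal extent proportional
# to its relative frequency (this replaces individual data points).
for axis in axes:
    counts = Counter(r[axis] for r in records)
    x = 0.0
    for cat, c in counts.most_common():
        w = c / n
        print(f"{axis}:{cat:8s} box x=[{x:.2f},{x + w:.2f}]")
        x += w

# Ribbons between adjacent axes: width ~ joint frequency of the pair.
for a, b in pairwise(axes):
    joint = Counter((r[a], r[b]) for r in records)
    for (ca, cb), c in joint.items():
        print(f"ribbon {a}:{ca} -> {b}:{cb}  width={c / n:.2f}")
```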
We describe the OpenGL Multipipe SDK (MPK), a toolkit for scalable parallel rendering based on OpenGL. MPK provides a uniform application programming interface (API) to manage scalable graphics applications across many different graphics subsystems. MPK-based applications run seamlessly from single-processor, single-pipe desktop systems to large multi-processor, multipipe scalable graphics systems. The application is oblivious to the system configuration, which can be specified through a configuration file at run time. To scale application performance, MPK uses a decomposition system that supports different modes of task partitioning and implements optimized CPU-based composition algorithms. MPK also provides a customizable image composition interface, which can be used to apply post-processing algorithms to raw pixel data obtained from executing sub-tasks on multiple graphics pipes in parallel. This can be used to implement parallel versions of any CPU-based algorithm, not necessarily one used for rendering. In this paper, we motivate the need for a scalable graphics API and discuss the architecture of MPK. We present MPK's graphics configuration interface, introduce the notion of compound-based decomposition schemes, and describe our implementation. We present results from two target system architectures and conclude with future directions of research in this area.
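MPK itself is a C/C++ API, so the following Python sketch only illustrates the general idea behind compound-based decomposition as described above: a compound tree whose leaves render on individual graphics pipes and whose inner nodes split the task (here a simple sort-first screen split) and would composite the children's results. All names and the splitting rule are illustrative assumptions, not MPK's interface.

```python
# Hedged sketch of a compound tree for task decomposition; not MPK code.
from dataclasses import dataclass, field

@dataclass
class Viewport:
    x: float; y: float; w: float; h: float

@dataclass
class Compound:
    pipe: int | None = None              # leaf: which graphics pipe renders
    children: list["Compound"] = field(default_factory=list)

    def execute(self, vp: Viewport) -> list[tuple[int, Viewport]]:
        """Return (pipe, sub-viewport) render tasks; a real system would
        then composite the children's pixels into the parent image."""
        if not self.children:
            return [(self.pipe, vp)]
        tasks, x = [], vp.x
        step = vp.w / len(self.children)  # even horizontal screen split
        for child in self.children:
            tasks += child.execute(Viewport(x, vp.y, step, vp.h))
            x += step
        return tasks

# Two pipes share the screen half-and-half (sort-first decomposition).
root = Compound(children=[Compound(pipe=0), Compound(pipe=1)])
print(root.execute(Viewport(0.0, 0.0, 1.0, 1.0)))
```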
ISBN: (Print) 9783540241287
The proceedings contain 119 papers. The special focus of this conference is on Parallel and Distributed Processing and Applications. The topics include: present and future supercomputer architectures; challenges in P2P computing; multihop wireless ad hoc networking: current challenges and future opportunities; an inspector-executor algorithm for irregular assignment; multi-grain parallel processing of data clustering on programmable graphics hardware; a parallel Reed-Solomon decoder on the Imagine stream processor; asynchronous document dissemination in dynamic ad hoc networks; location-dependent query results retrieval in a multi-cell wireless; an efficient mobile data mining model; towards correct distributed simulation of high-level Petri nets with fine-grained partitioning; M-Guard: a new distributed deadlock detection algorithm based on mobile agent technology; meta-based distributed computing framework; locality optimizations for Jacobi iteration on distributed parallel; fault-tolerant cycle embedding in the WK-recursive network; RAIDb: redundant array of inexpensive databases; a fault-tolerant multi-agent development framework; a fault tolerance protocol for uploads: design and evaluation; topological adaptability for the distributed token circulation paradigm in faulty environment; adaptive data dissemination in wireless sensor networks; design and analysis of a k-connected topology control algorithm for ad hoc networks; on using temporal consistency for parallel execution of real-time queries in wireless sensor systems; cluster-based parallel simulation for large-scale molecular dynamics in microscale thermophysics; parallel checkpoint/recovery on cluster of IA-64 computers; an enhanced message exchange mechanism in cluster-based mobile; a scalable low-discrepancy point generator for parallel computing; and generalized trellis stereo matching with systolic array.
In this paper, we describe a method for adaptive holistic 3D visualization of arbitrarily large datasets. The method, called AVE, makes it possible to visualize entire datasets using the most appropriate interface, selected automatically from a number of available interfaces. The choice of interface is based on the dataset's properties. The different interface types make it possible to present the dataset categorized according to multiple criteria. The method is best suited to abstract multidimensional datasets. The examples provided in this paper relate to visualization of data generated by indexing search engines.
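The abstract does not spell out AVE's selection rules, so the sketch below is purely hypothetical: it shows the general shape of an adaptive chooser that maps dataset properties to one of several available interfaces. The property names, thresholds, and interface names are all invented.

```python
# Hypothetical sketch of property-driven interface selection; the actual
# AVE rules and interface catalog are not specified in the abstract.
def select_interface(n_items: int, n_dims: int, hierarchical: bool) -> str:
    if hierarchical:
        return "cone-tree"          # tree-structured data
    if n_dims > 3:
        return "3d-scatter-matrix"  # abstract multidimensional data
    if n_items > 1_000_000:
        return "aggregated-volume"  # too many items for individual glyphs
    return "3d-scatterplot"

# e.g. search-engine index data: many items, many dimensions, no hierarchy
print(select_interface(n_items=5_000_000, n_dims=12, hierarchical=False))
```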
A new pseudo-coloring technique for large-scale one-dimensional datasets is proposed. For visualization of a large-scale dataset, user interaction is indispensable for selecting focus areas in the dataset. However, excessive switching of the visualized image makes it difficult for the user to recognize overview/detail and detail/detail relationships. The goal of this research is to develop techniques for visualizing details as precisely as possible in the overview display. In this paper, visualization of a very large one-dimensional dataset is considered. The proposed method is based on pseudo coloring; however, each scalar value corresponds to two discrete colors. By painting with two colors at each value, users can read off the value precisely. The method has many advantages: it requires little image space; both the overview and details of the dataset are visible in one image without distortion; and implementation is very simple. Several application examples, such as meteorological observation data and train convenience evaluation data, show the effectiveness of the method.
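As one reading of the two-color idea, the sketch below quantizes each scalar into a coarse and a fine level and maps the pair to two discrete colors, so that an 8-color palette yields 64 readable levels instead of 8. The palette size and encoding are assumptions; the paper's exact mapping may differ.

```python
# Sketch: encode each scalar as a (coarse, fine) pair of discrete colors.
# The 8-color palette and the level split are illustrative assumptions.
PALETTE = ["black", "blue", "cyan", "green", "yellow", "orange", "red", "white"]
K = len(PALETTE)

def two_color(v: float, vmin: float, vmax: float) -> tuple[str, str]:
    """Map v to (coarse, fine) colors: K*K distinguishable levels."""
    t = (v - vmin) / (vmax - vmin)
    level = min(int(t * K * K), K * K - 1)   # 0 .. K*K-1
    return PALETTE[level // K], PALETTE[level % K]

for v in (0.0, 3.7, 9.9):
    print(v, two_color(v, 0.0, 10.0))   # 64 readable levels from 8 colors
```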
Summary form only given. Illuminator is a visualization toolkit closely coupled with parallel computation and storage. Its close links to PETSc (the Portable, Extensible Toolkit for Scientific Computation) make it ideal for displaying finite difference calculation results. During a simulation, Illuminator functions provide parallel storage of PETSc distributed array objects across cluster hard drives, with optional lossless or lossy compression. This lets cluster machines operate as a large RAID 0 (or optionally RAID 1) array, with fast storage and retrieval directly to/from the compute nodes and zero network load. Visualization is similarly parallel: compute nodes load simulation results from their hard drives and calculate triangle locations and colors for contour surfaces of scalar fields. Triangles are then rendered using one of three methods: send all triangle data to the head node and display using Geomview; locally render triangles using Imlib2 and layer the images in a binary tree; or locally render semitransparent triangles using a multilayer z-buffer, then redistribute and assemble these z-buffers for display. Results are discussed in terms of image quality and frame rate.
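Of the three rendering paths listed, the image-layering ones boil down to a tree-structured composite of locally rendered images. The sketch below shows a depth-based pairwise merge in a binary-tree reduction; the array shapes and the merge rule are assumptions, not Illuminator's actual code.

```python
# Hedged sketch of binary-tree image compositing with a per-pixel depth test.
import numpy as np

def z_merge(rgb_a, z_a, rgb_b, z_b):
    """Per-pixel depth composite of two locally rendered images."""
    nearer = z_a <= z_b                       # boolean mask, shape (H, W)
    rgb = np.where(nearer[..., None], rgb_a, rgb_b)
    return rgb, np.minimum(z_a, z_b)

def tree_composite(images):
    """Pairwise binary-tree reduction; log2(n) rounds on a real cluster."""
    while len(images) > 1:
        nxt = []
        for i in range(0, len(images) - 1, 2):
            nxt.append(z_merge(*images[i], *images[i + 1]))
        if len(images) % 2:
            nxt.append(images[-1])            # odd one out passes through
        images = nxt
    return images[0]

H, W = 4, 4
imgs = [(np.random.rand(H, W, 3), np.random.rand(H, W)) for _ in range(4)]
rgb, z = tree_composite(imgs)
print(rgb.shape, z.min())
```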
We propose a distributed data management scheme for large data visualization that emphasizes efficient data sharing and access. To minimize data access time and support users with a variety of local computing capabilities, we introduce an adaptive data selection method based on an "enhanced time-space partitioning" (ETSP) tree that assists with effective visibility culling as well as multiresolution data selection. By traversing the tree, our data management algorithm can quickly identify the visible regions of data and, for each region, adaptively choose the lowest resolution satisfying user-specified error tolerances. Only the necessary data elements are accessed and sent to the visualization pipeline. To further address the issue of sharing large-scale data among geographically distributed collaborative teams, we have designed an infrastructure for integrating our data management technique with a distributed data storage system provided by logistical networking (LoN). Data sets at different resolutions are generated and uploaded to LoN for wide-area access. We describe a parallel volume rendering system that verifies the effectiveness of our data storage, selection, and access scheme.
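The adaptive selection step can be pictured as a simple recursive traversal: cull invisible subtrees, and stop refining at the first node whose stored error meets the user tolerance. The node fields and visibility flag below are placeholders, not the actual ETSP tree layout.

```python
# Minimal sketch of adaptive, error-bounded tree traversal; node fields
# are assumptions standing in for the ETSP tree's actual contents.
from dataclasses import dataclass, field

@dataclass
class Node:
    error: float                       # approximation error at this level
    data_id: str                       # which data brick to fetch
    children: list["Node"] = field(default_factory=list)
    visible: bool = True               # stand-in for visibility culling

def select_bricks(node: Node, tolerance: float, out: list[str]) -> None:
    if not node.visible:
        return                         # culled: skip the whole subtree
    if node.error <= tolerance or not node.children:
        out.append(node.data_id)       # lowest resolution within tolerance
        return
    for child in node.children:
        select_bricks(child, tolerance, out)

root = Node(0.5, "L0", [Node(0.1, "L1a"), Node(0.3, "L1b", visible=False)])
bricks: list[str] = []
select_bricks(root, tolerance=0.2, out=bricks)
print(bricks)   # -> ['L1a']  (L1b subtree is culled)
```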
In this paper we present a compression scheme for large point scans, including per-point normals. For the encoding of such scans we introduce a particular type of closest-sphere-packing grid, the hexagonal close packing (HCP) grid. HCP grids provide a structure for an optimal packing of 3D space, and for a given sampling error they result in a minimal number of cells when geometry is sampled into them. To compress the data, we extract linear sequences (runs) of filled cells in the HCP grid. The problem of determining optimal runs is turned into a graph-theoretical one. Point positions and normals in these runs are incrementally encoded. At a grid spacing close to the point sampling distance, the compression scheme requires only slightly more than 3 bits per point position. Incrementally encoded per-point normals are quantized at high fidelity using only 5 bits per normal. The compressed data stream can be decoded in the graphics processing unit (GPU). Decoded point positions are saved in graphics memory, and they are then used on the GPU again to render point primitives. In this way we render gigantic point scans from their compressed representation in local GPU memory at interactive frame rates.
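Two of the building blocks are easy to sketch: the standard coordinates of HCP cell centers, and a delta encoding of a run of filled cells so that consecutive cells cost only a small index step. The run-encoding format here is illustrative, not the paper's bitstream.

```python
# Sketch: HCP cell centers (standard close-packing coordinates, sphere
# radius r) and an illustrative delta encoding of a run of filled cells.
import math

def hcp_center(i: int, j: int, k: int, r: float = 1.0):
    """Center of HCP cell (i, j, k); standard hexagonal close packing."""
    x = r * (2 * i + (j + k) % 2)
    y = r * math.sqrt(3) * (j + (k % 2) / 3)
    z = r * (2 * math.sqrt(6) / 3) * k
    return x, y, z

def delta_encode(run):
    """Store a run's first cell index, then index deltas. Along one grid
    axis most deltas are 1, which is cheap to encode incrementally."""
    deltas = [b - a for a, b in zip(run, run[1:])]
    return run[0], deltas

print(hcp_center(1, 1, 1))
run = [4, 5, 6, 7, 10, 11]            # filled-cell indices along one line
print(delta_encode(run))              # -> (4, [1, 1, 1, 3, 1])
```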
ISBN: (Print) 3905673207
Point-based representations are becoming increasingly common in computer graphics, especially for visualizing data sets where the number of points is large relative to the number of pixels involved in their display. When dealing with sparse point sets, however, many traditional rendering algorithms for point data perform poorly, either generating blurry or non-occluding surface representations or requiring extensive preprocessing to yield good results. In this paper we present a novel method for point-based surface visualization that we call Voronoi rasterization. Voronoi rasterization uses modern programmable graphics hardware to generate occluding surface representations from sparse, oriented point sets without preprocessing. In particular, Voronoi rasterization clips away overlapping flaps between neighboring splats and generates an approximation of the Voronoi diagram of the points under the surface's geodesic distance. To approximate smooth shading and texturing on top of this clipped surface, our method uses existing techniques to construct a smoothly blended screen-space attribute field that implicitly accounts for neighborhood relations between points.
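The method runs on the GPU, but the per-pixel competition it performs can be approximated on the CPU: every pixel covered by several splats is assigned to its nearest splat center, which clips overlapping flaps and approximates the Voronoi regions. The screen-space-only sketch below ignores depth and the geodesic-distance refinement of the actual method.

```python
# CPU approximation of per-pixel nearest-splat assignment (screen space).
import numpy as np

H, W = 64, 64
rng = np.random.default_rng(0)
centers = rng.uniform(0, 64, size=(20, 2))     # splat centers in pixels
radius = 12.0                                  # splat radius in pixels

ys, xs = np.mgrid[0:H, 0:W]
pix = np.stack([xs, ys], axis=-1).astype(float)            # (H, W, 2)
d = np.linalg.norm(pix[:, :, None, :] - centers, axis=-1)  # (H, W, 20)

owner = d.argmin(axis=-1)              # nearest splat wins each pixel
covered = d.min(axis=-1) <= radius     # inside at least one splat?
label = np.where(covered, owner, -1)   # -1 = background
print(np.unique(label)[:5])
```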
In order to gain insight into multivariate data, complex structures must be analysed and understood. Parallel coordinates is an excellent tool for visualizing this type of data but has its limitations. This paper deals with one of its main limitations: how to visualize a large number of data items without hiding the inherent structure they constitute. We solve this problem by constructing clusters and using high-precision textures to represent them. We also use transfer functions that operate on the high-precision textures in order to highlight different aspects of the cluster characteristics. Providing predefined transfer functions, as well as support for drawing customized transfer functions, makes it possible to extract different aspects of the data. We also show how feature animation can be used as guidance when simultaneously analysing several clusters. This technique makes it possible to visually represent statistical information about clusters and thus guides the user, making the analysis process more efficient.
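A sketch of the two ingredients paired here: a high-precision density texture for one cluster (counts of its polylines rasterized between two adjacent axes) and transfer functions remapping that density to emphasize different aspects. The texture resolution, the synthetic cluster data, and the two example transfer functions are illustrative choices, not the paper's.

```python
# Sketch: build a cluster density texture between two parallel axes and
# remap it with transfer functions. Synthetic data; choices are illustrative.
import numpy as np

rng = np.random.default_rng(1)
left = rng.normal(0.4, 0.05, 5000)        # cluster values on axis i
right = rng.normal(0.7, 0.08, 5000)       # cluster values on axis i+1

# Rasterize every polyline's span into a texture between the two axes.
RES = 128
tex = np.zeros((RES, RES), dtype=np.float64)
for x in range(RES):
    t = x / (RES - 1)
    y = (1 - t) * left + t * right        # interpolate all lines at column x
    idx = np.clip((y * RES).astype(int), 0, RES - 1)
    np.add.at(tex, (idx, x), 1.0)
tex /= tex.max()                          # high-precision, normalized density

# Transfer functions re-map density before display.
emphasize_core = tex ** 2                 # suppress sparse outliers
emphasize_outliers = np.sqrt(tex)         # lift faint structure
print(emphasize_core.max(), emphasize_outliers.max())
```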