ISBN: (Print) 9781605586359
Prior work has shown that reduced, ordered binary decision diagrams (BDDs) can be a powerful tool for program trace analysis and visualization. Unfortunately, it can take hours or days to encode large traces as BDDs. Further, techniques used to improve BDD performance are inapplicable to large dynamic program traces. This paper explores the use of ZDDs for compressing dynamic trace data. Prior work has shown that ZDDs can represent sparse data sets with less memory than BDDs. This paper demonstrates that (1) ZDDs do indeed provide greater compression for sets of dynamic traces (25% smaller than BDDs on average), (2) with proper tuning, ZDDs encode sets of dynamic trace data over 9× faster than BDDs, and (3) ZDDs can be used for …
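The zero-suppressed reduction that makes ZDDs compact for sparse families can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation; `node`, `single`, `union`, and `count` are hypothetical helper names, and a real trace encoder would add memoization and many more operations:

```python
# Minimal ZDD sketch: nodes are (var, lo, hi) tuples, hash-consed in a table.
ZERO, ONE = ('ZERO',), ('ONE',)   # terminals: empty family / {empty set}
_table = {}

def _top(z):
    # variable index at the top of z; terminals sort below every variable
    return float('inf') if z is ZERO or z is ONE else z[0]

def node(var, lo, hi):
    # zero-suppressed reduction rule: a node whose hi-edge is ZERO is redundant
    if hi is ZERO:
        return lo
    key = (var, id(lo), id(hi))
    if key not in _table:
        _table[key] = (var, lo, hi)
    return _table[key]

def single(combo):
    # ZDD containing exactly one combination (an iterable of variable indices)
    z = ONE
    for v in sorted(combo, reverse=True):
        z = node(v, ZERO, z)
    return z

def union(a, b):
    # family union; recursion follows the lower top variable first
    if a is ZERO or a is b:
        return b
    if b is ZERO:
        return a
    va, vb = _top(a), _top(b)
    if va < vb:
        return node(va, union(a[1], b), a[2])
    if vb < va:
        return node(vb, union(a, b[1]), b[2])
    return node(va, union(a[1], b[1]), union(a[2], b[2]))

def count(z):
    # number of combinations in the family rooted at z
    if z is ZERO:
        return 0
    if z is ONE:
        return 1
    return count(z[1]) + count(z[2])
```

Because the hi-edge-to-ZERO rule deletes a node for every variable absent from a combination, sparse sets (such as the dynamic-trace sets the paper targets) need far fewer nodes than in a BDD, where such nodes must be kept.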
ISBN: (Print) 9781424449682
In this paper, a GPU-accelerated three-dimensional finite difference method is presented as an efficient approach for performing fast parallel simulations of composite materials. Using an NVIDIA GeForce 9800 series GPGPU with an optimized CUDA implementation, a considerable speed-up (>20×) was observed for simulations of large problems. Further performance improvements could be achieved through more efficient data transfer and access between shared memory and global memory, and by using distributed computing with multiple video cards.
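The core of such a simulation is an explicit 7-point finite-difference stencil that each GPU thread would evaluate for one grid cell. A CPU reference sketch in NumPy (not the paper's CUDA code; `step` and the coefficient `c` are hypothetical names) shows the update the kernel parallelizes:

```python
import numpy as np

def step(u, c):
    # One explicit 3-D finite-difference time step (7-point Laplacian stencil)
    # on the interior cells; boundary values are held fixed. In the CUDA
    # version, each thread would compute one interior cell of this update.
    un = u.copy()
    un[1:-1, 1:-1, 1:-1] = u[1:-1, 1:-1, 1:-1] + c * (
        u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
        u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
        u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2] -
        6.0 * u[1:-1, 1:-1, 1:-1])
    return un
```

The shared-memory optimization mentioned in the abstract corresponds to each thread block staging its tile of `u` (plus a one-cell halo) in on-chip memory so the six neighbour reads hit shared memory instead of global memory.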
ISBN: (Print) 9781424495665
The huge storage required for large terrain datasets and high-resolution images has been among the most important factors limiting the performance of interactive large-scale terrain rendering. To address the problem, researchers partition a terrain into small tiles and use the GPU (Graphics Processing Unit) to render each of them. However, differing levels of detail between adjacent tiles usually induce cracks between them, which greatly reduce visual fidelity. Many algorithms have been proposed to overcome this drawback, but none has been entirely successful, due either to complexity or to poor results. In this paper we present an implicit, GPU-based method that overcomes these traditional drawbacks. We partition the square terrain dataset into triangular tiles and fix the number of inner triangles of each tile at 2^m × 2^m (m > 1). At runtime we force the maximum level difference between adjacent tiles to 1, which connects the terrain tiles seamlessly. We also transform the DEM data into a vertex texture stored on the GPU, and use the shader language HLSL to implement the height-value fetch. These improvements achieve a substantial effect.
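The tile-levelling constraint described above — refining any tile that is more than one level coarser than a neighbour before stitching — can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's implementation; `tri_count` and `clamp_lod` are hypothetical names:

```python
def tri_count(m):
    # number of inner triangles of a tile at level m, per the 2^m x 2^m scheme
    return (2 ** m) * (2 ** m)

def clamp_lod(lod, adj):
    # lod: tile -> desired detail level; adj: tile -> list of neighbouring tiles.
    # Repeatedly refine tiles that are more than one level coarser than a
    # neighbour, so every adjacent pair ends up differing by at most 1 and
    # tile borders can be joined without cracks.
    changed = True
    while changed:
        changed = False
        for t, nbrs in adj.items():
            for n in nbrs:
                if lod[t] < lod[n] - 1:
                    lod[t] = lod[n] - 1
                    changed = True
    return lod
```

With the level gap bounded by 1, a coarse tile's border edge always meets exactly two finer edges, so the border triangulation can be fixed in advance rather than recomputed per frame.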
Large high-dimensional datasets are of growing importance in many fields, and it is important to be able to visualize them for understanding the results of data mining approaches or just for browsing them in a way that d...
The Graphics Processing Unit (GPU) has evolved into a parallel computing platform thanks to its massively multi-threaded architecture. Due to its high computational power, the GPU has been used to deal with many problems that can be ...
ID shadow maps are used for robust real-time rendering of shadows. The primary disadvantage of shadow maps is their excessive size for large scenes when high-quality shadows are needed. To eliminate large mem...
ISBN: (Print) 9780769541204
The proceedings contain 28 papers. The topics discussed include: optimizing the reliability of pipelined applications under throughput constraints; decomposition-based algorithm for state prediction in large-scale distributed systems; operational semantics of the Marte repetitive structure modeling concepts for data-parallel applications design; butterfly automorphisms and edge faults; cost performance analysis in multi-level tree networks; pretty good accuracy in matrix multiplication with GPUs; exploiting the power of GPUs for multi-gigabit wireless baseband processing; parallel cycle-based logic simulation using graphics processing units; practical uniform peer sampling under churn; improving grid fault tolerance by means of global behavior modeling; toward a reliable distributed data management system; energy minimization on thread-level speculation in multicore systems; and resource-aware compiler prefetching for many-cores.
Interactive visualization requires the translation of data into a screen space of limited resolution. While currently ignored by most visualization models, this translation entails a loss of information and the introduction of a number of artifacts that can be useful (e.g., aggregation, structures) or distracting (e.g., over-plotting, clutter) for the analysis. This phenomenon is observed in parallel coordinates, where overlapping lines between adjacent axes form distinct patterns, representing the relation between the variables they connect. However, even for a small number of dimensions, the challenge is to effectively convey the relationships for all combinations of dimensions. The size of the dataset and a large number of dimensions only add to the complexity of this problem. To address these issues, we propose Pargnostics, parallel coordinates diagnostics, a model based on screen-space metrics that quantify the different visual structures. Pargnostics metrics are calculated for pairs of axes and take into account the resolution of the display as well as potential axis inversions. Metrics include the number of line crossings, crossing angles, convergence, over-plotting, etc. To construct a visualization view, the user can pick from a ranked display showing pairs of coordinate axes and the structures between them, or examine all possible combinations of axes at once in a matrix display. Picking the best axes layout is an NP-complete problem in general, but we provide a way of automatically optimizing the display according to the user's preferences based on our metrics and model.
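The simplest of the listed screen-space metrics, the line-crossing count between one pair of axes, reduces to checking whether the vertical order of two polylines flips from one axis to the next. A minimal sketch (`line_crossings` is a hypothetical name; the actual Pargnostics metrics operate on binned screen-space positions rather than raw values):

```python
def line_crossings(y_left, y_right):
    # Count pairs of polylines that cross between two adjacent parallel
    # coordinate axes: lines i and j cross iff their vertical order on the
    # left axis differs from their order on the right axis.
    n = len(y_left)
    crossings = 0
    for i in range(n):
        for j in range(i + 1, n):
            if (y_left[i] - y_left[j]) * (y_right[i] - y_right[j]) < 0:
                crossings += 1
    return crossings
```

Computed for every pair of axes, such counts give the per-pair scores from which a ranked or matrix display of axis combinations could be built.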
Today we can have huge datasets resulting from computer simulations (FEA, CFD, physics, chemistry, etc.) and sensor measurements (medical, seismic, and satellite). There is exponential growth in computational requirement...
Multi-dimensional transfer functions are widely used to provide appropriate data classification for direct volume rendering. Nevertheless, the design of a multi-dimensional transfer function is a complicated task. In ...