ISBN:
(print) 9781509036820
Large-scale molecular dynamics simulations produce terabytes of data that are impractical to transfer to remote facilities. It is therefore necessary to perform visualization tasks in situ as the data are generated, or by running interactive remote visualization sessions and batch analyses co-located with direct access to high-performance storage systems. A significant challenge for deploying visualization software within clouds, clusters, and supercomputers involves the operating system software required to initialize and manage graphics acceleration hardware. Recently, it has become possible for applications to use the Embedded-system Graphics Library (EGL) to eliminate the requirement for windowing system software on compute nodes, thereby removing a significant obstacle to broader use of high-performance visualization applications. We outline the potential benefits of this approach in the context of visualization applications used in the cloud, on commodity clusters, and on supercomputers. We discuss the implementation of EGL support in VMD, a widely used molecular visualization application, and we outline the benefits of the approach for molecular visualization tasks on petascale computers, clouds, and remote visualization servers. We then provide a brief evaluation of the use of EGL in VMD, with tests using developmental graphics drivers on conventional workstations and on Amazon EC2 G2 GPU-accelerated cloud instance types. We expect that the techniques described here will be of broad benefit to many other visualization applications.
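For readers unfamiliar with the windowing-system-free setup the abstract describes, the following is a minimal sketch of off-screen OpenGL context creation through EGL. It is not VMD's actual code; the pbuffer size and attribute choices are illustrative assumptions, and error handling is omitted for brevity.

// Minimal headless EGL + OpenGL context sketch (link with -lEGL -lGL).
#include <EGL/egl.h>
#include <GL/gl.h>
#include <cstdio>

int main() {
  // Open the default display; no X11/Wayland server is required.
  EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
  EGLint major, minor;
  eglInitialize(dpy, &major, &minor);

  // Ask for an RGBA config that supports off-screen pbuffer rendering.
  const EGLint cfg_attribs[] = {
    EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
    EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8, EGL_ALPHA_SIZE, 8,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
    EGL_NONE
  };
  EGLConfig cfg;
  EGLint ncfg;
  eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &ncfg);

  // Create a small off-screen surface to render into (size is arbitrary here).
  const EGLint pb_attribs[] = { EGL_WIDTH, 1024, EGL_HEIGHT, 1024, EGL_NONE };
  EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pb_attribs);

  eglBindAPI(EGL_OPENGL_API);  // desktop OpenGL rather than OpenGL ES
  EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);
  eglMakeCurrent(dpy, surf, surf, ctx);

  printf("GL renderer: %s\n", (const char *)glGetString(GL_RENDERER));

  eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
  eglTerminate(dpy);
  return 0;
}

Because no window server is involved, this same sequence runs on headless compute nodes, which is precisely the deployment obstacle the paper removes.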
The 2017 Visualization Career Award goes to Charles (Chuck) Hansen in recognition of his contributions to large-scale data visualization, including advances in parallel and volume rendering, novel interaction techniques, and techniques for exploiting hardware; for his leadership in the community as an educator, program chair, and editor; and for providing vision for the development and support of the field. The IEEE Visualization & Graphics Technical Committee (VGTC) is pleased to award Charles Hansen the 2017 Visualization Career Award.
We propose a novel memory-efficient parallel ray casting algorithm for unstructured grids. To reduce the high memory consumption of a previous approach (i.e., Bunyk's), we use a small local buffer for each thread to k...
ISBN:
(print) 9781509063604
The physically derived micro-modeling circuit is an order-reduced RLC circuit of a PEEC circuit for a large-scale packaging problem. Unlike traditional model order reduction (MOR) methods, this method can reduce the order of the circuit by an order of magnitude without any matrix inversions or decompositions. As its dominant computation is the outer product of a vector with itself, the scheme is highly suitable for parallel computation. This paper proposes an effective collaborative acceleration strategy for deriving a micro-modeling circuit using a GPU module. The strategy combines efficient parallel computation of vector outer products on the GPU with I/O optimization for data transfer between the CPU and GPU. A numerical example shows that the speedup of the proposed parallel acceleration method for deriving the micro-modeling circuit is nearly proportional to the number of computing cores. It is demonstrated that the micro-modeling scheme is highly suitable for large-scale interconnection and packaging problems.
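The dominant kernel the abstract identifies, the outer product of a vector with itself, is a rank-1 update. The paper runs it on a GPU; the OpenMP sketch below is only meant to show why the operation parallelizes trivially, since each row of the result depends on nothing but the input vector.

// Illustrative rank-1 update M += u * u^T (M is n x n, row-major).
#include <vector>
#include <cstddef>

void rank1_update(std::vector<double>& M, const std::vector<double>& u) {
  const std::size_t n = u.size();
  #pragma omp parallel for
  for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(n); ++i)
    for (std::size_t j = 0; j < n; ++j)
      M[i * n + j] += u[i] * u[j];  // row i reads only u[i] and u: independent
}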
ISBN:
(print) 9781509056590
We present a study of scientific visualization on heterogeneous processors using the Legion runtime system. We describe the main functions in our approach to conducting scientific visualization that can consist of multiple operations with different data requirements. Our approach helps users simplify the programming of data partitioning, data organization, and data movement for distributed-memory heterogeneous architectures, thereby facilitating the simultaneous execution of multiple operations on modern and future supercomputers. We demonstrate the scalable performance and ease of use of our approach with a hybrid data partitioning and distribution scheme for different data types using both CPUs and GPUs on a heterogeneous system.
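As a rough illustration of what building on the Legion runtime implies structurally, here is a minimal Legion task skeleton. The task ID, names, and CPU-only processor constraint are placeholder assumptions; a real visualization pipeline would register additional rendering and compositing task variants (including GPU ones) whose region requirements let Legion drive partitioning and data movement.

// Minimal Legion skeleton: register a top-level task and start the runtime.
#include "legion.h"
#include <cstdio>
using namespace Legion;

enum TaskIDs { TOP_LEVEL_TASK_ID };

void top_level_task(const Task *task,
                    const std::vector<PhysicalRegion> &regions,
                    Context ctx, Runtime *runtime) {
  // Real code would create logical partitions here and launch rendering
  // and compositing tasks; Legion maps them onto CPUs and GPUs.
  printf("top-level task running\n");
}

int main(int argc, char **argv) {
  Runtime::set_top_level_task_id(TOP_LEVEL_TASK_ID);
  {
    TaskVariantRegistrar registrar(TOP_LEVEL_TASK_ID, "top_level");
    registrar.add_constraint(ProcessorConstraint(Processor::LOC_PROC)); // CPU
    Runtime::preregister_task_variant<top_level_task>(registrar, "top_level");
  }
  return Runtime::start(argc, argv);
}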
Parallel Coordinate Plots (PCPs) are one of the most powerful techniques for the visualization of multivariate data. However, for large datasets, the representation suffers from clutter due to overplotting. In this case, discerning the underlying data information and selecting specific interesting patterns can become difficult. We propose a new and simple technique to improve the display of PCPs by emphasizing the underlying data structure. Our Orientation-enhanced Parallel Coordinate Plots (OPCPs) improve pattern and outlier discernibility by visually enhancing parts of each PCP polyline with respect to its slope. This enhancement also allows us to introduce a novel and efficient selection method, Orientation-enhanced Brushing (O-Brushing). Our solution is particularly useful when multiple patterns are present or when the view of certain patterns is obstructed by noise. We present the results of our approach on several synthetic and real-world datasets. Finally, we conducted a user evaluation, which verifies the advantages of OPCPs in terms of discernibility of information in complex data. It also confirms that O-Brushing eases the selection of data patterns in PCPs and reduces the number of necessary user interactions compared to state-of-the-art brushing techniques.
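A minimal sketch of the idea underlying both the orientation enhancement and O-Brushing: every polyline segment between adjacent axes has a slope angle, and both the visual emphasis and the brush predicate can be expressed as functions of that angle. The axis-gap parameterization and the angular brush interval below are assumptions for illustration, not the paper's exact formulation.

// Orientation of a PCP segment and an angular brushing predicate.
#include <cmath>

// Angle of the segment from (x0, y0) on axis i to (x0 + axis_gap, y1) on
// axis i+1, in radians in (-pi/2, pi/2).
double segment_orientation(double y0, double y1, double axis_gap) {
  return std::atan2(y1 - y0, axis_gap);
}

// O-Brushing-style test: keep a segment whose orientation falls inside the
// user-brushed angular interval [lo, hi].
bool orientation_brushed(double y0, double y1, double axis_gap,
                         double lo, double hi) {
  const double a = segment_orientation(y0, y1, axis_gap);
  return a >= lo && a <= hi;
}

Mapping the same angle to opacity or color yields the emphasis effect; reusing it as a selection predicate is what makes the brushing interaction cheap.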
ISBN:
(print) 9781509056590
Compositing is a significant factor in distributed visualization performance at scale on high-performance computing (HPC) systems. For applications such as ParaView or VisIt, the common approach is "sort-last" rendering. In this approach, the data are split up for rendering such that each MPI rank owns one or more portions of the overall domain. After rendering its individual piece(s), each rank has one or more partial images that must be composited with the others to form the final image. The common approach for this step is to use a tree-like communication pattern to reduce the rendered images down to a single image to be displayed to the user. A variety of algorithms have been explored to perform this step efficiently in order to achieve interactive rendering on massive systems [7, 3, 8, 4].
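The tree-like reduction can be sketched as follows, assuming a power-of-two rank count and per-pixel depth buffers. Production compositors (binary swap, radix-k, or the IceT library) split images across ranks to balance work rather than shipping whole images as this toy version does.

// Tree-structured sort-last compositing: each round, half the ranks send
// their partial image to a partner, which keeps the nearer fragment per pixel.
#include <mpi.h>
#include <vector>

struct Image {
  std::vector<float> rgba;   // 4 floats per pixel
  std::vector<float> depth;  // 1 float per pixel
};

void tree_composite(Image &img, int npixels, MPI_Comm comm) {
  int rank, size;
  MPI_Comm_rank(comm, &rank);
  MPI_Comm_size(comm, &size);
  for (int step = 1; step < size; step *= 2) {
    if (rank % (2 * step) == 0) {           // receiver: merge partner's image
      Image other{std::vector<float>(4 * npixels),
                  std::vector<float>(npixels)};
      MPI_Recv(other.rgba.data(), 4 * npixels, MPI_FLOAT, rank + step, 0,
               comm, MPI_STATUS_IGNORE);
      MPI_Recv(other.depth.data(), npixels, MPI_FLOAT, rank + step, 1,
               comm, MPI_STATUS_IGNORE);
      for (int p = 0; p < npixels; ++p)     // per-pixel depth test
        if (other.depth[p] < img.depth[p]) {
          img.depth[p] = other.depth[p];
          for (int c = 0; c < 4; ++c)
            img.rgba[4 * p + c] = other.rgba[4 * p + c];
        }
    } else if (rank % (2 * step) == step) { // sender: ship image, drop out
      MPI_Send(img.rgba.data(), 4 * npixels, MPI_FLOAT, rank - step, 0, comm);
      MPI_Send(img.depth.data(), npixels, MPI_FLOAT, rank - step, 1, comm);
      return;  // this rank holds no image after sending
    }
  }
  // rank 0 now holds the fully composited image
}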
Particle data are commonly visualized by rendering a sphere for each particle. Since interactive rendering usually relies on fast local lighting, the spatial arrangement of the spheres is often very hard to perceive. That is, larger functional structures formed by the particles are not easily recognizable. Using global effects such as ambient occlusion or shadows adds important depth cues. In this work, we present Implicit Sphere Shadow Maps (ISSM), an application-tailored approach for large, dynamic particle data sets. This approach can be combined with state-of-the-art object-space ambient occlusion to further emphasize the spatial structure of molecules. We compare our technique against state-of-the-art methods for interactive rendering with respect to image quality and performance.
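To make the shadow computation concrete: the basic query any sphere-based shadow technique must answer is whether a shaded point is occluded from the light by some sphere. The brute-force directional-light test below illustrates only that query; ISSM's contribution is a shadow-map-like structure over the spheres that avoids this per-sample loop, and the vector types here are minimal assumptions.

// Is point p occluded from a directional light by any sphere?
#include <vector>

struct Vec3 { float x, y, z; };
struct Sphere { Vec3 c; float r; };

static inline Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static inline float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// 'light_dir' must be normalized and point from the surface toward the light.
bool in_shadow(Vec3 p, Vec3 light_dir, const std::vector<Sphere> &spheres) {
  for (const Sphere &s : spheres) {
    Vec3 oc = sub(s.c, p);          // vector from point to sphere center
    float t = dot(oc, light_dir);   // closest approach along the shadow ray
    if (t <= 0.0f) continue;        // sphere lies behind the point
    float d2 = dot(oc, oc) - t * t; // squared ray-to-center distance
    if (d2 < s.r * s.r) return true; // ray pierces the sphere: occluded
  }
  return false;
}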
Machine learning, especially deep learning, is revolutionizing how many engineering problems are being solved. Three critical ingredients are needed to apply deep machine learning to significant real-world problems: i.) large data sets; ii.) software to implement deep learning; and iii.) significant computing cycles. This paper discusses the state of each ingredient with a specific focus on: a.) how deep learning can apply to large-scale social network analysis; and b.) the computing resources required to make such analyses feasible.
Plans for exascale computing have identified power and energy as looming problems for simulations running at that scale. In particular, writing to disk all the data generated by these simulations is becoming prohibitively expensive due to the energy consumed by the supercomputer while it idles waiting for data to be written to permanent storage. In addition, the power cost of data movement is also steadily increasing. A solution to this problem is to write only a small fraction of the data generated while still maintaining the cognitive fidelity of the visualization. With domain scientists increasingly amenable to adopting an in-situ framework that can identify and extract valuable data from extremely large simulation results and write them to permanent storage as compact images, a large-scale simulation will commit to disk a reduced dataset of data extracts that is much smaller than the raw results, yielding savings in both power and energy. The goal of this paper is two-fold: (i) to understand the role of in-situ techniques in combating the power and energy issues of extreme-scale visualization and (ii) to create a model for performance, power, energy, and storage to facilitate what-if analysis. Our experiments on a specially instrumented, dedicated 150-node cluster show that while it is difficult to achieve power savings in practice using in-situ techniques, applications can achieve significant energy savings due to shorter write times for in-situ visualization. We present a characterization of power and energy for in-situ visualization; an application-aware, architecture-specific methodology for modeling and analysis of such in-situ workflows; and results that uncover indirect power savings in visualization workflows for high-performance computing (HPC).
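A toy version of the kind of what-if model the paper argues for: energy as node power integrated over per-phase times, with in-situ visualization shrinking the I/O phase. Every constant below is an invented placeholder, not a measurement from the paper's 150-node cluster; a faithful model would be application-aware and architecture-specific, as the abstract describes.

// Toy energy model: E = nodes * sum(phase power * phase time).
#include <cstdio>

int main() {
  const double nodes = 150.0;          // cluster size (illustrative)
  const double p_compute = 300.0;      // watts/node while computing (assumed)
  const double p_io      = 200.0;      // watts/node while waiting on I/O (assumed)
  const double t_sim     = 3600.0;     // seconds of simulation per run (assumed)

  // Post hoc writes raw data; in situ renders in place and writes small images.
  const double t_write_raw    = 900.0; // assumed raw-dump time (s)
  const double t_render       = 120.0; // assumed in-situ render time (s)
  const double t_write_images = 30.0;  // assumed image-write time (s)

  double e_posthoc = nodes * (p_compute * t_sim + p_io * t_write_raw);
  double e_insitu  = nodes * (p_compute * (t_sim + t_render)
                              + p_io * t_write_images);

  // With these placeholders, in situ saves energy despite the extra render
  // time, because the idle-at-I/O phase shrinks by a factor of 30.
  printf("post hoc: %.1f MJ, in situ: %.1f MJ\n",
         e_posthoc / 1e6, e_insitu / 1e6);
  return 0;
}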