ISBN:
(Print) 9783642234002
Modern large-scale scientific simulations running on HPC systems generate data on the order of terabytes during a single run. To lessen the I/O load during a simulation run, scientists are forced to capture data infrequently, thereby making data collection an inherently lossy process. Yet lossless compression techniques are hardly suitable for scientific data due to its inherently random nature; for the applications used here, they offer less than a 10% compression rate. They also impose significant overhead during decompression, making them unsuitable for data analysis and visualization that require repeated data access. To address this problem, we propose an effective method for In-situ Sort-And-B-spline Error-bounded Lossy Abatement (ISABELA) of scientific data that is widely regarded as effectively incompressible. With ISABELA, we apply a preconditioner to seemingly random and noisy data along spatial resolution to achieve an accurate fitting model that guarantees a ≥ 0.99 correlation with the original data. We further take advantage of temporal patterns in scientific data to compress data by approximately 85%, while introducing only a negligible overhead on simulations in terms of runtime. ISABELA significantly outperforms existing lossy compression methods, such as wavelet compression. Moreover, besides being a communication-free and scalable compression technique, ISABELA is an inherently local decompression method, namely it does not decode the entire data, making it attractive for random access.
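The core sort-then-spline idea is compact enough to sketch. Below is a minimal illustration, assuming SciPy's B-spline routines and a uniform knot layout; the window size, knot count, and synthetic data are placeholders rather than ISABELA's actual parameters, and the per-value error-bound check is omitted.

```python
# Minimal sketch of the sort-then-B-spline preconditioning idea
# (illustrative only, not the authors' implementation).
import numpy as np
from scipy.interpolate import splrep, splev

def compress_window(values, num_knots=32):
    """Sort the window so it becomes monotone, then fit a cubic B-spline.
    The spline coefficients plus the sorting permutation are the payload."""
    order = np.argsort(values)                      # permutation to store
    x = np.linspace(0.0, 1.0, len(values))
    knots = np.linspace(0.0, 1.0, num_knots)[1:-1]  # interior knots
    tck = splrep(x, values[order], t=knots, k=3)
    return tck, order

def decompress_window(tck, order, n):
    """Evaluate the spline and undo the sort; works window-by-window."""
    out = np.empty(n)
    out[order] = splev(np.linspace(0.0, 1.0, n), tck)
    return out

rng = np.random.default_rng(0)
window = rng.normal(size=1024)                 # "seemingly random" data
tck, order = compress_window(window)
restored = decompress_window(tck, order, len(window))
print(np.corrcoef(window, restored)[0, 1])     # close to 1.0
```

Sorting makes each window monotone, which a low-order spline fits far better than the raw sequence, and storing the per-window permutation is what makes decompression local: any window can be decoded without touching the rest of the data.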
Nowadays, in field surveying and numerical modeling research, data post-processing is often completed using third-party software, which can make the storage and management of large volumes of data and figures difficult. A convenient way to process field data is needed. In this paper, ArcGIS is utilized to visualize the tidal currents computed from a hydrodynamic model of the Caofeidian sea area. By virtue of the interface between ArcGIS and the numerical model, the changes in the flow field of the study area before and after the project are shown clearly and intuitively, which provides great help and convenience for data analysis.
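As a rough picture of such a hand-off, the sketch below writes a computed flow field to an XY table of the kind ArcGIS can import as a point layer; the coordinates, file name, and column layout are invented for the example.

```python
# Hypothetical export of hydrodynamic model output to an
# ArcGIS-importable XY table (illustrative schema).
import csv
import math

# (lon, lat, u, v): eastward/northward velocity components at grid nodes
flow_field = [
    (118.45, 38.95, 0.42, -0.13),
    (118.46, 38.95, 0.39, -0.11),
]

with open("tidal_current.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["lon", "lat", "speed", "direction_deg"])
    for lon, lat, u, v in flow_field:
        speed = math.hypot(u, v)                          # current magnitude
        direction = math.degrees(math.atan2(u, v)) % 360  # compass bearing
        writer.writerow([lon, lat, round(speed, 3), round(direction, 1)])
```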
This paper presents a novel system for analyzing temporal changes in bloggers' activities and interests on a topic through a 3D visualization of dependency structures related to the topic. Given a dependency database built from a blog archive, our 3D visualization framework helps users interactively explore temporal changes in bloggers' activities and interests related to the topic.
To make decisions about the long-term preservation and access of large digital collections, archivists gather information such as the collections' contents, their organizational structure, and their file format composition. To date, the process of analyzing a collection - from data gathering to exploratory analysis and final conclusions - has largely been conducted using pen-and-paper methods. To help archivists analyze large-scale digital collections for archival purposes, we developed an interactive visual analytics application. The application narrows down different kinds of information about the collection and presents them as meaningful data views. Multiple views and analysis features can be linked or unlinked on demand to enable researchers to compare and contrast different analyses, and to identify trends. We present two user scenarios to show how the application allowed archivists to learn about a collection accurately, facilitated decision-making, and helped them arrive at conclusions.
Much of the work conducted in climate research involves large and heterogeneous datasets with spatial and temporal references. This makes climate research an interesting application area for visualization. However, the application of interactive visual methods to assist in gaining insight into climate data is still hampered for climate research scientists, who are usually not visualization experts. In this paper, we report on a survey that we conducted to evaluate the application of interactive visualization methods and to identify the problems related to establishing such methods in scientific practice. The feedback from 76 participants shows clearly that state-of-the-art techniques are rarely applied and that integrating existing solutions smoothly into the scientists' workflow is problematic. We have begun to change this and present first results that illustrate how interactive visualization tools can be successfully applied to accomplish climate research tasks. As a concrete example, we describe the visualization of climate networks and its benefits for climate impact research.
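For readers unfamiliar with the term, a climate network links locations whose climate time series co-vary strongly. The sketch below shows the usual correlation-thresholding construction; the synthetic series and the 0.5 threshold are illustrative assumptions, not details from the surveyed tools.

```python
# Toy climate-network construction: nodes are grid points, edges connect
# points whose time series correlate above a threshold.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_nodes, n_steps = 50, 200
series = rng.normal(size=(n_nodes, n_steps))  # stand-in for anomaly series

corr = np.corrcoef(series)                    # pairwise Pearson correlation
threshold = 0.5

G = nx.Graph()
G.add_nodes_from(range(n_nodes))
for i in range(n_nodes):
    for j in range(i + 1, n_nodes):
        if abs(corr[i, j]) >= threshold:      # link strongly co-varying sites
            G.add_edge(i, j, weight=corr[i, j])

print(G.number_of_edges(), "edges above threshold")
```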
ISBN:
(Print) 9781424441211
The wealth of data amassed by the utilization of various high-throughput techniques, across various layers of molecular dissection, stresses the critical role of unifying the computational methodologies applied in biological data handling, storage, analysis and visualization. In this article, a generic workflow is showcased on a multi-omic dataset that is used to study Obstructive Nephropathy (ON) in children, integrating microarray data from several biological layers (transcriptomic, post-transcriptomic, proteomic). The workflow exploits raw measurements and, through several analytical stages (preprocessing, statistical and functional) that entail various parsing steps, reaches the visualization stage of the heterogeneous, broader molecular interaction network derived. This network, whose interconnected entities exploit the knowledge stored in public repositories, represents a systems-level interpretation of the pathological state probed.
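The integration step of such a workflow can be pictured with a toy sketch: per-layer measurements are merged on a shared gene identifier and emitted as an annotated network. The layer names, columns, gene symbols, and the single hard-coded interaction below are illustrative stand-ins for the paper's actual schema and repository lookups.

```python
# Toy multi-omic integration: join layers on a gene ID, build a network.
import pandas as pd
import networkx as nx

transcriptomic = pd.DataFrame({"gene": ["TGFB1", "MMP9"], "logFC_mrna": [1.8, -0.9]})
proteomic      = pd.DataFrame({"gene": ["TGFB1", "COL1A1"], "logFC_prot": [1.2, 2.1]})

# Outer join keeps genes measured in only one layer (missing values -> NaN).
merged = transcriptomic.merge(proteomic, on="gene", how="outer")

G = nx.Graph()
for row in merged.itertuples():
    G.add_node(row.gene, mrna=row.logFC_mrna, protein=row.logFC_prot)
# Interactions would come from public repositories (pathway databases);
# a single hand-written edge stands in for that lookup here.
G.add_edge("TGFB1", "MMP9")

print(G.nodes(data=True))
```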
Based on research into the characteristics of cloud computing, a server platform is built that highlights rapid data processing together with storage capacity and security. On this basis, an OpenGL-based visualization of a mine is designed. With three-dimensional data fusion, abstract data becomes intuitive and vivid, improving rapid-response capabilities and raising the level of modernization of mine management.
The paper proposes a new method for interactive visual exploration of chains of financial transactions, assisting an analyst in the detection of money laundering operations. The method mainly concerns searching, displaying and annotating selected groups of transactions from a database. We show how one can programmatically and interactively reduce the volume of the chains surveyed and limit the analysis to the most suspicious transactions. In order to improve the readability of the transaction graph, an evolution-based algorithm has been designed to optimize its visual representation. The system is verified on a real-life database of financial transactions. The experiments conducted have shown that, by allowing visual exploration, one can accelerate the search process and enrich the data analysis.
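To make the chain-oriented view concrete, here is a toy sketch of enumerating transaction chains from a graph, pruned by an amount threshold and a depth limit; the suspicion rule is a placeholder, not the paper's actual scoring or its evolutionary layout algorithm.

```python
# Toy chain extraction over a directed transaction graph.
from collections import defaultdict

transactions = [                    # (sender, receiver, amount)
    ("A", "B", 9500), ("B", "C", 9400), ("C", "D", 9300),
    ("A", "E", 120),  ("E", "F", 80),
]

graph = defaultdict(list)
for src, dst, amount in transactions:
    graph[src].append((dst, amount))

def chains(node, min_amount=1000, max_depth=4, path=None):
    """Yield chains of large transfers starting at `node`, cycle-free."""
    path = path or [node]
    for nxt, amount in graph[node]:
        if amount >= min_amount and nxt not in path and len(path) < max_depth:
            yield path + [nxt]
            yield from chains(nxt, min_amount, max_depth, path + [nxt])

for chain in chains("A"):
    print(" -> ".join(chain))       # A -> B, A -> B -> C, A -> B -> C -> D
```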
This paper presents the research and development of a new omnispatial visualization framework for the collaborative interrogation of the world's largest Buddhist textual canon, using the world's first panoramic stereoscopic visualization environment - the Advanced Visualization and Interaction Environment (AVIE). The work is being undertaken at a new research facility, the Applied Laboratory for Interactive Visualization and Embodiment (ALiVE), City University of Hong Kong. The dataset used is the Chinese Buddhist Canon, Koryo version (Tripitaka Koreana), in classical Chinese, with 52 million glyphs carved on 83,000 printing blocks in 13th-century Korea. The digitized version of this Canon (a project led by the University of California, Berkeley) contains metadata that links to geospatial positions, contextual images of locations referenced in the text, and the original rubbings of the wooden blocks. Each character has been abstracted to a 'blue dot' to enable rapid search and pattern visualization. Omnispatial refers to the ability to distribute this data in 360 degrees around the user, where the virtually presented visual space is three-dimensional (3D). The project's omnidirectional interactive techniques for corpora representation and interrogation offer a unique framework for enhanced cognition and perception in the analysis of this dataset.
In this work, we explore the relationship between topic models and co-maintenance history by introducing a visualization that compares conceptual cohesion within change lists. We explain how this view of the project history can give insight about the semantic architecture of the code, and we identify a number of patterns that characterize particular kinds of maintenance tasks. We examine the relationship between co-maintenance history and concept location, and visualize the distribution of changes across concepts to show how these techniques can be used to predict co-maintenance of source code methods.
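One plausible reading of "conceptual cohesion within change lists" can be sketched directly: score a change list by the average pairwise cosine similarity of its files' topic distributions. The vectors below are invented, and the measure itself is an assumption about the paper's metric rather than its published definition.

```python
# Assumed cohesion measure: mean pairwise cosine similarity of the
# topic distributions of the files touched by one change list.
import numpy as np

def cohesion(topic_vectors):
    """Mean pairwise cosine similarity across a change list's files."""
    v = np.asarray(topic_vectors, dtype=float)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)   # unit vectors
    sims = v @ v.T                                     # cosine matrix
    iu = np.triu_indices(len(v), k=1)                  # distinct pairs
    return sims[iu].mean()

# A focused refactoring (similar topics) vs. a scattered change list.
focused   = [[0.8, 0.1, 0.1], [0.7, 0.2, 0.1], [0.75, 0.15, 0.1]]
scattered = [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]
print(cohesion(focused), ">", cohesion(scattered))
```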