ISBN (digital): 9798331534189
ISBN (print): 9798331534196
Extensive sea ice in Arctic marine regions has hindered in-depth exploration and research. To push this frontier, underwater robots are increasingly used to explore beneath the ice cover. While this enables research over a broader area, it increases the risk of losing valuable vehicles under the ice, placing greater emphasis on the navigational capabilities of the robots. While inertial and acoustic navigation methods share limitations for operating under the ice, visual navigation can exploit optical features for position estimation at very fine resolution. However, the underside of ice tends to lack visible features. To overcome this, we propose integrating a novel Underwater Hyperspectral Imager. This technology captures detailed spectral information, revealing features and patterns invisible to traditional imaging methods. To test this, a Remotely Operated Vehicle was equipped with an upward-pointing hyperspectral imager. Three Arctic expeditions were conducted, capturing under-ice data including an Arctic ice algae bloom. To determine whether the hyperspectral imager provides improved navigation capabilities, we created five spectral transformations that exploit the spectral richness of the hyperspectral domain, either by increasing the number of layers fed into the detection algorithm (5-Band, 10-Band, and 20-Band) or by transforming the spectra using Principal Component Analysis (PCA) and the Pearson correlation. We then compared these to four transformations that resembled the behaviour of an RGB/grayscale equivalent. We found that the hyperspectral methods showed an improvement in precision. These improvements highlight the potential of hyperspectral imaging for robust navigation in challenging under-ice environments.
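As an illustration of the two families of spectral transformation the abstract describes, here is a minimal sketch assuming a NumPy hyperspectral cube and using scikit-learn for the PCA step; the frame size and channel count are placeholder values, not the imager's actual configuration.

```python
# Hedged sketch: band-averaging to N layers, and a PCA projection of spectra.
import numpy as np
from sklearn.decomposition import PCA

def band_average(cube: np.ndarray, n_bands: int) -> np.ndarray:
    """Average groups of adjacent spectral channels into n_bands layers."""
    c = cube.shape[2]
    groups = np.array_split(np.arange(c), n_bands)
    return np.stack([cube[:, :, g].mean(axis=2) for g in groups], axis=2)

def pca_transform(cube: np.ndarray, n_components: int = 3) -> np.ndarray:
    """Project each pixel's spectrum onto its leading principal components."""
    h, w, c = cube.shape
    scores = PCA(n_components=n_components).fit_transform(cube.reshape(-1, c))
    return scores.reshape(h, w, n_components)

cube = np.random.rand(480, 640, 112)   # synthetic stand-in for an under-ice frame
five_band = band_average(cube, 5)      # the "5-Band" style reduction
pca_img = pca_transform(cube)          # the PCA-based transformation
```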
ISBN (print): 9781467385176
Large-data visualization and analysis faces challenges related to performance, operability, degree of discrimination, etc. In this paper, an advanced aggregate computation is proposed to address these issues from three aspects. By virtue of visualization-based data separation and aggregation, a large dataset is mapped to a visualization-based small dataset for efficient visualization while preserving the operability of the data. A minimum size of visual primitives for aggregated data is defined to ensure the visibility of important but tiny information. Finally, a D3-based rendering implementation improves the performance of consecutive visualizations.
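A rough sketch of the visualization-based aggregation idea, assuming 2D point data binned to the pixel grid of the target view; the grid resolution and the minimum primitive size are illustrative values, not the paper's parameters.

```python
# Points are binned to the view's pixel grid, so rendering cost tracks the
# screen size rather than the dataset size; a floor on glyph size keeps
# tiny-but-important bins visible.
import numpy as np

def aggregate_for_view(x, y, width=800, height=600):
    counts, xedges, yedges = np.histogram2d(x, y, bins=[width, height])
    ix, iy = np.nonzero(counts)  # one aggregated primitive per occupied pixel
    return xedges[ix], yedges[iy], counts[ix, iy]

MIN_PRIMITIVE_PX = 3  # assumed minimum visual-primitive size

x, y = np.random.randn(2, 1_000_000)
px, py, weight = aggregate_for_view(x, y)
sizes = np.maximum(MIN_PRIMITIVE_PX, np.sqrt(weight))  # area encodes count
```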
ISBN (print): 9781479952151
Prioritization of data is necessary for managing large-scale scientific data, as the scale of the data implies that there are only enough resources available to process a limited subset of it. For example, data prioritization is used during in situ triage to cope with bandwidth bottlenecks, and during focus+context visualization to save analysis time by guiding the user to important information. In this paper, we present ADR visualization, a generalized analysis framework for ranking large-scale data using Analysis-Driven Refinement (ADR), which is inspired by Adaptive Mesh Refinement (AMR). A large-scale data set is partitioned in space, time, and variable, using user-defined importance measurements for prioritization. This process creates a prioritization tree over the data set. Using this tree, selection methods can generate sparse data products for analysis, such as focus+context visualizations or sparse data sets.
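A minimal sketch of the refinement idea over a 2D field, using variance as a stand-in for the user-defined importance measure; the split criteria and depth limits are assumptions, not the framework's actual parameters.

```python
# Blocks are recursively split where importance is high, yielding a
# prioritization tree; ranking its leaves emits a sparse data product.
import numpy as np

def refine(data, x0, y0, x1, y1, depth, max_depth, importance=np.var):
    block = data[y0:y1, x0:x1]
    node = {"extent": (x0, y0, x1, y1),
            "priority": float(importance(block)), "children": []}
    if depth < max_depth and min(x1 - x0, y1 - y0) > 8:
        mx, my = (x0 + x1) // 2, (y0 + y1) // 2
        for a, b, c, d in [(x0, y0, mx, my), (mx, y0, x1, my),
                           (x0, my, mx, y1), (mx, my, x1, y1)]:
            node["children"].append(refine(data, a, b, c, d, depth + 1, max_depth))
    return node

def top_leaves(node, k):
    leaves, stack = [], [node]
    while stack:
        n = stack.pop()
        stack.extend(n["children"]) if n["children"] else leaves.append(n)
    return sorted(leaves, key=lambda n: n["priority"], reverse=True)[:k]

field = np.random.rand(256, 256)
tree = refine(field, 0, 0, 256, 256, 0, max_depth=4)
sparse = top_leaves(tree, k=16)  # highest-priority blocks for focus+context
```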
ISBN (print): 9781665432832
Lossy compression is a data compression technique that sacrifices precision for the sake of higher compression rates. While loss of precision is unacceptable when storing simulation data for checkpointing, it has little discernible impact on visualization. Saving simulation output for later examination is still a prevalent workflow. Domain scientists often return to data from older runs to examine it in a new context. Storing visualization data at full precision is not necessary for this purpose. The use of lossy compression can therefore relieve the pressure on HPC storage equipment or allow data to be stored at higher temporal resolution than would otherwise be possible. In this poster we show how lossy compression was used to store visualization data for the analysis of a supercell thunderstorm. The visual results are shown, along with details of how the compression was used in the workflow.
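The poster does not name the compressor used, so the following is a generic quantize-then-deflate sketch in NumPy that illustrates the precision-for-rate trade-off, not the authors' actual pipeline.

```python
# Float samples are rounded to a fixed absolute tolerance before lossless
# compression; reconstruction error is bounded by tolerance / 2.
import zlib
import numpy as np

def lossy_compress(field: np.ndarray, tolerance: float) -> bytes:
    quantized = np.round(field / tolerance).astype(np.int32)
    return zlib.compress(quantized.tobytes(), level=9)

def decompress(blob: bytes, tolerance: float, shape) -> np.ndarray:
    quantized = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
    return quantized.reshape(shape).astype(np.float32) * tolerance

field = np.random.rand(128, 128, 128).astype(np.float32)  # e.g. one timestep
blob = lossy_compress(field, tolerance=1e-3)
restored = decompress(blob, 1e-3, field.shape)
```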
ISBN (print): 9781538668733
For users to analyze a large data set with various attributes as desired, rearranging the data by purpose is vital. This study proposes a method to navigate data according to the user's desired purpose by improving RadViz visualization through focusing and filtering. To help users understand the visualization, user studies were conducted using music data from Spotify. As a result, our system proved effective at classifying a large music data set by attributes and at navigating to desired music efficiently.
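The RadViz projection itself is standard; below is a minimal sketch with a simple filtering (focus) step. The Spotify-style attribute names are illustrative assumptions.

```python
# Each attribute is an anchor on the unit circle; a record is pulled toward
# anchors in proportion to its normalized attribute values.
import numpy as np

def radviz(values: np.ndarray) -> np.ndarray:
    """values: (n_records, n_attrs), each column pre-normalized to [0, 1]."""
    n_attrs = values.shape[1]
    theta = 2 * np.pi * np.arange(n_attrs) / n_attrs
    anchors = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # (n_attrs, 2)
    weights = values / values.sum(axis=1, keepdims=True)
    return weights @ anchors  # (n_records, 2) positions inside the circle

attrs = ["danceability", "energy", "valence", "tempo", "acousticness"]
data = np.random.rand(500, len(attrs))

# Filtering (focus step): keep only high-energy tracks before projecting.
focused = data[data[:, attrs.index("energy")] > 0.7]
positions = radviz(focused)
```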
ISBN (print): 9781479952151
SALOME is an open-source numerical simulation platform that allows the complete realization of a numerical study. Indeed, the user can easily plug a core numerical code into the platform, define the problem geometry, mesh it, define the boundary conditions, run the solver on a supercomputer, visualize the results, and analyze the data; all these actions are performed in an integrated and coherent environment. SALOME can deal with large numerical simulations, like the ones found in multi-physics and/or parametric studies. In this context, ParaView has been integrated into the platform to benefit from its capacity to visualize large models. In this abstract we give an overview of the capabilities of SALOME/ParaView.
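For readers unfamiliar with ParaView's scripting interface, here is a minimal pvpython-style sketch of the kind of visualization it brings to the platform; the file name result.med and the field name pressure are hypothetical placeholders for a SALOME study result.

```python
# Run inside ParaView's Python shell (pvpython); loads a result file,
# colors it by a field, and saves a screenshot.
from paraview.simple import (OpenDataFile, Show, Render, SaveScreenshot,
                             ColorBy, GetActiveViewOrCreate)

reader = OpenDataFile("result.med")        # hypothetical SALOME result file
view = GetActiveViewOrCreate("RenderView")
display = Show(reader, view)
ColorBy(display, ("POINTS", "pressure"))   # hypothetical field name
view.ResetCamera()
Render(view)
SaveScreenshot("result.png", view)
```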
ISBN (digital): 9798350368741
ISBN (print): 9798350368758
Current Pose-Guided Person Image Synthesis (PGPIS) methods depend heavily on large amounts of labeled triplet data to train the generator in a supervised manner. However, they often falter when applied to in-the-wild samples, primarily due to the distribution gap between the training datasets and real-world test samples. While some researchers aim to enhance model generalizability through sophisticated training procedures, advanced architectures, or by creating more diverse datasets, we adopt the test-time fine-tuning paradigm to customize a pre-trained Text2Image (T2I) model. However, naively applying test-time tuning results in inconsistencies in facial identities and appearance attributes. To address this, we introduce a Visual Consistency Module (VCM), which enhances appearance consistency by combining the face, text, and image embeddings. Our approach, named OnePoseTrans, requires only a single source image to generate high-quality pose transfer results, offering greater stability than state-of-the-art data-driven methods. For each test case, OnePoseTrans customizes a model in around 48 seconds on an NVIDIA V100 GPU.
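The abstract does not detail how the VCM combines the three embeddings, so the following PyTorch sketch is speculative: it shows one plausible fusion (per-modality projections summed into a shared space), with all module names and dimensions assumed rather than taken from OnePoseTrans.

```python
# Speculative sketch, not the paper's VCM: project face, text, and image
# embeddings to a shared width and sum them into one guidance embedding.
import torch
import torch.nn as nn

class VisualConsistencySketch(nn.Module):
    def __init__(self, face_dim=512, text_dim=768, image_dim=1024, out_dim=768):
        super().__init__()
        self.face_proj = nn.Linear(face_dim, out_dim)
        self.text_proj = nn.Linear(text_dim, out_dim)
        self.image_proj = nn.Linear(image_dim, out_dim)

    def forward(self, face, text, image):
        # Fuse the three conditioning signals for the T2I model.
        return self.face_proj(face) + self.text_proj(text) + self.image_proj(image)

vcm = VisualConsistencySketch()
guidance = vcm(torch.randn(1, 512), torch.randn(1, 768), torch.randn(1, 1024))
```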
ISBN (print): 9781479952151
With the increasing power of HPC hardware systems, numerical simulations are heading towards exascale computing. Early inspection and analysis of ongoing large simulations enables domain experts to obtain first insight into their running simulation process and intermediate results. Compared to conventional post-processing, such in situ processing has the advantage of keeping data in memory, avoiding storing the large amount of raw data to disk, providing on-the-fly analysis, and preventing early failures in the simulation process. In this poster we present a distributed and scalable software infrastructure, which provides distributed in situ data processing, feature extraction, and interactive exploration at the user's front-end. We have integrated and extended our system for multiple simulation applications, ranging from Lattice-Boltzmann blood flow simulation to grid-based simulation for propulsion systems. A user-interactive front-end is integrated into our system, allowing users to directly interact with the visualization of running simulations, gain insight, and make decisions.
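The infrastructure itself is not reproduced here; the sketch below only illustrates the general in situ callback pattern the poster describes, with a toy stand-in for the solver.

```python
# The solver hands each timestep to registered analysis callbacks while the
# data is still in memory, instead of writing raw output to disk.
import numpy as np

class InSituPipeline:
    def __init__(self):
        self.callbacks = []

    def register(self, fn):
        self.callbacks.append(fn)

    def process(self, step, field):
        for fn in self.callbacks:
            fn(step, field)

def extract_feature(step, field):
    # Example analysis: report the size of the high-value region, not raw data.
    print(f"step {step}: {np.count_nonzero(field > 0.9)} cells above threshold")

pipeline = InSituPipeline()
pipeline.register(extract_feature)

field = np.random.rand(64, 64, 64)
for step in range(10):
    field = 0.99 * field + 0.01 * np.random.rand(*field.shape)  # stand-in solver
    pipeline.process(step, field)
```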
ISBN (print): 9781665432832
Large-scale scientific simulations typically output massive amounts of data that must be later read in for post-hoc visualization and analysis. With codes simulating complex phenomena at ever-increasing fidelity, writing data to disk during this traditional high-performance computing workflow has become a significant bottleneck. In situ workflows offer a solution to this bottleneck, whereby data is simultaneously produced and analyzed without involving disk storage. In situ analysis can increase efficiency for domain scientists who are exploring a data set or fine-tuning visualization and analysis parameters. Our work seeks to enable researchers to easily create and interactively analyze large-scale simulations through the use of Jupyter Notebooks, without requiring application developers to explicitly integrate in situ libraries.
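As a toy sketch of the notebook-facing idea, under the assumption that the simulation runs on a background thread and publishes its latest state so a Jupyter cell can inspect it at any moment without disk I/O; all names here are illustrative, not the authors' API.

```python
# The solver thread updates a shared "latest" snapshot; a notebook cell can
# re-run at any time to analyze the current state of the live simulation.
import threading
import time
import numpy as np

latest = {"step": -1, "field": None}

def simulate(steps=1000):
    field = np.random.rand(128, 128)
    for step in range(steps):
        field = np.roll(field, 1, axis=0) * 0.999  # stand-in solver kernel
        latest["step"], latest["field"] = step, field
        time.sleep(0.01)

threading.Thread(target=simulate, daemon=True).start()

# In a separate notebook cell, re-run whenever fresh results are wanted:
time.sleep(0.5)
print(f"inspecting step {latest['step']}, mean = {latest['field'].mean():.4f}")
```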