ISBN (print): 078037200X
In this paper we present a hardware-assisted rendering technique coupled with a compression scheme for the interactive visual exploration of time-varying scalar volume data. A palette-based decoding technique and an adaptive bit allocation scheme are developed to fully utilize the texturing capability of a commodity 3-D graphics card. Using a single PC equipped with a modest amount of memory, a texture-capable graphics card, and an inexpensive disk array, we are able to render hundreds of time steps of regularly gridded volume data (up to 45 million voxels per time step) at interactive rates, permitting the visual exploration of large scientific data sets in both the temporal and spatial domains.
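As a concrete illustration of the palette-based decoding idea, the following Python/NumPy sketch decodes an 8-bit index volume through a small per-time-step palette. The array shapes, names, and the use of NumPy are illustrative assumptions, not the authors' GPU implementation.

# Minimal sketch of palette-based decoding for time-varying volumes.
# Assumption (for illustration only): each voxel stores an 8-bit index into
# a 256-entry palette, and only the palette changes between time steps.
import numpy as np

def decode_time_step(index_volume, palette):
    # index_volume: uint8 array (nz, ny, nx), uploaded once
    # palette:      float array (256,), swapped per time step
    return palette[index_volume]      # table lookup via fancy indexing

rng = np.random.default_rng(0)
indices = rng.integers(0, 256, size=(64, 64, 64), dtype=np.uint8)
for t in range(4):
    palette_t = np.linspace(0.0, 1.0, 256, dtype=np.float32) ** (1.0 + 0.1 * t)
    volume_t = decode_time_step(indices, palette_t)

In this toy version the per-step cost is just the palette swap plus a lookup, a pattern that maps naturally onto paletted-texture hardware.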
We present a scalable volume rendering technique that exploits lossy compression and low-cost commodity hardware to permit highly interactive exploration of time-varying scalar volume data. A palette-based decoding technique and an adaptive bit allocation scheme are developed to fully utilize the texturing capability of a commodity 3D graphics card. Using a single PC equipped with a modest amount of memory, a texture-capable graphics card, and an inexpensive disk array, we are able to render hundreds of time steps of regularly gridded volume data (up to 42 million voxels per time step) at interactive rates. By clustering multiple PCs together, we demonstrate the data-size scalability of our method. The achieved frame rates make possible the interactive exploration of data in the temporal, spatial, and transfer-function domains. A comprehensive evaluation of our method, based on experimental studies using data sets (up to 134 million voxels per time step) from turbulence flow simulations, is also presented.
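The adaptive bit allocation mentioned above can be illustrated with a small, hedged sketch: spend more quantization bits on blocks whose contents vary more. The greedy rule below (block variance divided by 4^bits as a stand-in for quantization distortion) and all names are assumptions for illustration, not the allocation scheme used in the paper.

# Greedy bit allocation across volume blocks: repeatedly give one more bit
# to the block with the largest estimated remaining quantization distortion.
import numpy as np

def allocate_bits(block_variances, total_bits, max_bits=8):
    bits = np.zeros(block_variances.shape, dtype=int)
    for _ in range(total_bits):
        distortion = block_variances / (4.0 ** bits)   # uniform-quantizer model
        distortion[bits >= max_bits] = -np.inf          # block already at full precision
        bits[np.argmax(distortion)] += 1
    return bits

# Example: four blocks sharing a budget of 16 bits; the high-variance
# blocks end up with more bits than the nearly constant ones.
variances = np.array([10.0, 1.0, 0.1, 5.0])
print(allocate_bits(variances, total_bits=16))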
ISBN (print): 9781479919864
This paper presents a simple, effective, yet fast out-of-core method for surface reconstruction based on the vector field surface representation. The algorithm is designed to handle massive amounts of real-world point cloud data representing large infrastructures (e.g., underwater hydroelectric structures) acquired by LiDAR, sonar, or laser scanning systems, using out-of-core techniques. Our method performs seamless surface reconstruction from unorganized, unoriented, non-uniform, and highly noisy data that include outliers. The applicability of the method has been evaluated in the context of hydroelectric infrastructure inspection, and its performance has been tested using synthetically produced data and field data captured at different Hydro-Quebec sites by laser line scanning, LiDAR, and sonar measurement systems.
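The abstract does not spell out the out-of-core machinery, but the general pattern it relies on can be sketched: stream the point cloud in fixed-size chunks and bucket points into spatial cells on disk, so that a later reconstruction step only needs one cell (plus neighbors) in memory at a time. The file layout, cell size, and function names below are assumptions made purely for illustration.

# Out-of-core bucketing sketch: read a huge binary xyz point file in chunks
# and append each point to a per-cell file keyed by its grid coordinates.
import numpy as np
from pathlib import Path

def bucket_points(points_file, out_dir, cell_size, chunk=1_000_000):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(points_file, "rb") as f:
        while True:
            buf = np.fromfile(f, dtype=np.float32, count=chunk * 3)
            if buf.size == 0:
                break
            pts = buf.reshape(-1, 3)
            cells = np.floor(pts / cell_size).astype(int)
            for key in np.unique(cells, axis=0):
                mask = np.all(cells == key, axis=1)
                name = out / f"cell_{key[0]}_{key[1]}_{key[2]}.bin"
                with open(name, "ab") as cf:     # append: a cell can grow across chunks
                    pts[mask].astype(np.float32).tofile(cf)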
ISBN (print): 0819447145
Volumetric data sets are important in many scientific and biomedical fields. Since such sets may be extremely large, a compression method is critical for storing and transmitting them. To achieve a high compression rate, most existing volume compression methods are lossy, which is usually unacceptable in biomedical applications. We developed a new context-based non-linear prediction method to preprocess the volume data set in order to effectively lower the prediction entropy. The prediction error is further encoded using a Huffman code. Unlike conventional methods, the volume is divided into cubical blocks to take advantage of the data's spatial locality. Instead of building one Huffman tree for each block, we developed a novel binning algorithm that builds a Huffman tree for each group (bin) of blocks. Combining all of these effects, we achieve an excellent compression rate compared to other lossless volume compression methods. In addition, an auxiliary data structure, the Scalable Hyperspace File (SHSF), is used to index the huge volume, providing many other benefits, including parallel construction, on-the-fly access to compressed data without global decompression, fast previewing, efficient background compression, and scalability.
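To make the predict-then-encode pipeline concrete, the sketch below uses a deliberately simple causal predictor (the average of up to three already-visited neighbors) as a stand-in for the paper's context-based non-linear predictor, and reports empirical entropies rather than running a full Huffman coder. The synthetic volume, shapes, and the predictor itself are illustrative assumptions.

# Predict each voxel from causal neighbors and keep only the residuals;
# for smooth data the residuals concentrate near zero and have lower
# entropy, which is what the entropy coder (e.g. Huffman) exploits.
import numpy as np

def causal_residuals(vol):
    v = vol.astype(np.int32)
    pred = np.zeros_like(v)
    pred[1:, :, :] += v[:-1, :, :]
    pred[:, 1:, :] += v[:, :-1, :]
    pred[:, :, 1:] += v[:, :, :-1]
    pred //= 3                      # border voxels see fewer neighbors; kept simple here
    return v - pred                 # these residuals are what would be Huffman-coded

def empirical_entropy(x):
    _, counts = np.unique(x, return_counts=True)
    p = counts / x.size
    return float(-(p * np.log2(p)).sum())

z, y, x = np.meshgrid(np.arange(32), np.arange(32), np.arange(32), indexing="ij")
vol = ((np.sin(x / 5.0) + np.cos(y / 7.0) + z / 16.0) * 30 + 120).astype(np.uint8)
res = causal_residuals(vol)
print("raw entropy:", empirical_entropy(vol))
print("residual entropy:", empirical_entropy(res))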
This article presents a simple framework for progressive processing of high-resolution images with minimal resources. We demonstrate this framework's effectiveness by implementing an adaptive, multi-resolution solver for gradient-based image processing that, for the first time, is capable of handling gigapixel imagery in real time. With our system, artists can use commodity hardware to interactively edit massive imagery and apply complex operators, such as seamless cloning, panorama stitching, and tone mapping. We introduce a progressive Poisson solver that processes images in a purely coarse-to-fine manner, providing near-instantaneous global approximations for interactive display (see Figure 1). We also allow for data-driven adaptive refinements to locally emulate the effects of a global solution. These techniques, combined with a fast, cache-friendly data access mechanism, allow the user to interactively explore and edit massive imagery, with the illusion of having a full solution at hand. In particular, we demonstrate the interactive modification of gigapixel panoramas that previously required extensive offline processing. Even with massive satellite images surpassing a hundred gigapixels in size, we enable repeated interactive editing in a dynamically changing environment. Images at these scales are significantly beyond the purview of previous methods, yet are processed interactively using our techniques. Finally, our system provides a robust and scalable out-of-core solver that consistently offers high-quality solutions while maintaining strict control over system resources.
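The coarse-to-fine idea behind the progressive solver can be illustrated at toy scale: solve the Poisson equation on a downsampled grid first, then use that answer, upsampled, to warm-start a short relaxation at full resolution. The Jacobi smoother, the two-level scheme, and all names below are deliberately simple assumptions; they do not reproduce the adaptive, out-of-core solver of the article.

# Two-level coarse-to-fine solve of lap(u) = rhs (grid spacing 1, zero boundary).
import numpy as np

def jacobi(u, rhs, iters):
    # A few Jacobi relaxation sweeps for the 5-point Laplacian.
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] - rhs[1:-1, 1:-1])
    return u

def coarse_to_fine_poisson(rhs, coarse_iters=200, fine_iters=20):
    coarse_rhs = 4.0 * rhs[::2, ::2]            # spacing doubles, so rhs scales by h^2
    coarse_u = jacobi(np.zeros_like(coarse_rhs), coarse_rhs, coarse_iters)
    u0 = np.repeat(np.repeat(coarse_u, 2, axis=0), 2, axis=1)[:rhs.shape[0], :rhs.shape[1]]
    return jacobi(u0, rhs, fine_iters)          # warm-started fine-level refinement

# Tiny usage example with a synthetic divergence field.
rng = np.random.default_rng(2)
rhs = rng.standard_normal((65, 65)) * 0.01
u = coarse_to_fine_poisson(rhs)

The coarse pass supplies a cheap global approximation for display; the fine pass only has to correct it locally, which is the essence of the progressive behavior described above.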
We address the problem of performing exact (tiling-error-free) out-of-core semantic segmentation inference of arbitrarily large images using fully convolutional neural networks (FCN). FCN models have the property that once a model is trained, it can be applied to arbitrarily sized images, although it is still constrained by the available GPU memory. This work is motivated by overcoming the GPU memory size constraint without numerically impacting the final result. Our approach is to select a tile size that will fit into GPU memory with a halo border of half the network receptive field. We then stride across the image by that tile size, without the halo. The input tile halos will overlap, while the output tiles join exactly at the seams. Such an approach enables inference to be performed on whole-slide microscopy images, such as those generated by a slide scanner. The novelty of this work is in documenting the formulas for determining tile size and stride and then validating them on U-Net and FC-DenseNet architectures. In addition, we quantify the errors due to tiling configurations that do not satisfy the constraints, and we explore the use of architecture effective receptive fields to estimate the tiling parameters.
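The tile/stride arithmetic described above can be sketched in one dimension: output tiles abut exactly at the seams, while every input tile is padded on each side by a halo equal to half the receptive field. The function name and the numbers in the example are illustrative assumptions; the paper itself documents the exact formulas for specific architectures.

# Plan read/write windows along one image axis: stride equals the output
# tile size, and each input window is enlarged by the halo (clipped at the
# image border), so the input windows overlap while the outputs tile exactly.
def tile_plan(image_size, tile, receptive_field):
    halo = receptive_field // 2
    plan = []
    for start in range(0, image_size, tile):        # stride = output tile size
        out_lo, out_hi = start, min(start + tile, image_size)
        in_lo = max(out_lo - halo, 0)               # input window includes the halo
        in_hi = min(out_hi + halo, image_size)
        plan.append(((in_lo, in_hi), (out_lo, out_hi)))
    return plan

# Example: 1000-pixel axis, 256-pixel output tiles, receptive field of 128.
for in_rng, out_rng in tile_plan(1000, 256, 128):
    print("read", in_rng, "-> write", out_rng)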