ISBN (Print): 9781479952151
With dental imaging data acquired at unprecedented speed and resolution, traditional serial image processing and single-node storage need to be re-examined in a "Big Data" context. Furthermore, most previous dental computing has focused on the actual image acquisition and image analysis tools, while much less research has focused on enabling caries assessment via visual analysis of large dental imaging data. In this paper we present DENVIS, an end-to-end solution for cariologists to manage, mine, visualize, and analyze large dental imaging data for investigative carious lesion studies. DENVIS consists of two main parts: data-driven image analysis modules, triggered by imaging data acquisition, that exploit parallel MapReduce tasks and ingest a visualization archive into a distributed NoSQL store; and user-driven modules that allow investigative analysis at run time. DENVIS has seen early use by our collaborators in oral health research, where our system has been used to pose and answer domain-specific questions for quantitative assessment of dynamic carious lesion activity.
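DENVIS's data-driven half (MapReduce analysis triggered by acquisition, with results ingested into an archive) can be illustrated with a small map/reduce-style sketch. The names below (analyze_slice, reduce_scan, the in-memory store, the toy lesion metric) are illustrative stand-ins, not DENVIS APIs; a real deployment would run on a MapReduce cluster and write to a distributed NoSQL store.

```python
# Illustrative sketch only: map/reduce-style analysis of acquired image slices,
# with results ingested into a key-value store (stand-in for a distributed NoSQL store).
from multiprocessing import Pool
import numpy as np

def analyze_slice(args):
    """Map step (hypothetical): compute a per-slice lesion score from intensities."""
    scan_id, slice_idx, data = args
    score = float((data > data.mean() + 2 * data.std()).mean())  # toy "lesion" metric
    return scan_id, slice_idx, score

def reduce_scan(records):
    """Reduce step: aggregate per-slice scores into one record per scan."""
    per_scan = {}
    for scan_id, slice_idx, score in records:
        per_scan.setdefault(scan_id, []).append((slice_idx, score))
    return {sid: {"slices": sorted(v), "max_score": max(s for _, s in v)}
            for sid, v in per_scan.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic acquisition: 2 scans x 4 slices of 64x64 intensity data.
    tasks = [(f"scan{s}", i, rng.normal(size=(64, 64))) for s in range(2) for i in range(4)]
    with Pool(4) as pool:
        mapped = pool.map(analyze_slice, tasks)   # "MapReduce" stage
    store = reduce_scan(mapped)                   # ingest into an in-memory "archive"
    print(store["scan0"]["max_score"])
```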
We present the results of a variable information space experiment, targeted at exploring the scalability limits of immersive high-resolution, tiled-display walls under physical navigation. Our work is motivated by a lack of evidence supporting the extension of previously established benefits to substantially larger, room-shaped displays. Using the Reality Deck, a gigapixel-resolution immersive display, as its apparatus, our study spans four display form-factors, starting at 100 megapixels arranged planarly and going up to one gigapixel in a horizontally immersive setting. We focus on four core tasks: visual search, attribute search, comparisons and pattern finding. We present a quantitative analysis of per-task user performance across the various display conditions. Our results demonstrate improvements in user performance as the display form-factor grows to 600 megapixels. At the 600 megapixel to 1 gigapixel transition, we observe no tangible performance improvements and the visual search task regressed substantially. Additionally, our analysis of subjective mental effort questionnaire responses indicates that subjective user effort grows as the display size increases, validating previous studies on smaller displays. Our analysis of the participants' physical navigation during the study sessions shows an increase in user movement as the display grew. Finally, by visualizing the participants' movement within the display apparatus space, we discover two main approaches (termed "overview" and "detail") through which users chose to tackle the various data exploration tasks. The results of our study can inform the design of immersive high-resolution display systems and provide insight into how users navigate within these room-sized visualization spaces.
ISBN (Print): 9781479928736
We propose a new application of stochastic point-based rendering, which was recently proposed for implicit surfaces, to large-scale laser-scanned 3D point data. Specifically, we propose a scheme to apply the rendering to transparent and fused visualization of recent large and complex laser-scanned data from cultural assets. Our scheme uses 3D points that are directly acquired using a laser scanner as the rendering primitives. For laser-scanned data that consist of more than 10^7 or 10^8 3D points, the pre-processing stage takes only a few minutes, and the rendering stage is executable at interactive frame rates. We do not encounter rendering artifacts originating from the indefiniteness of depth-sorted orders of rendering primitives. Fused visualization with various visual assistants is also possible. We demonstrate the effectiveness of our scheme by visualizing a campus building and a culturally important festival float.
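The property the abstract highlights, transparency without depth-sorting artifacts, follows from the general idea behind stochastic point-based rendering: render several opaque passes over random subsets of the points and average them. The minimal orthographic software renderer below, with a deliberately simplified keep-probability, is an illustrative stand-in and not the authors' GPU scheme.

```python
# Illustrative sketch: stochastic point-based transparency by averaging opaque passes.
import numpy as np

def render_pass(pts, colors, keep_prob, res, rng):
    """One opaque pass: z-buffer the randomly kept points onto a res x res image."""
    keep = rng.random(len(pts)) < keep_prob
    img = np.ones((res, res, 3))                 # white background
    zbuf = np.full((res, res), np.inf)
    for (x, y, z), c in zip(pts[keep], colors[keep]):
        ix, iy = int(x * (res - 1)), int(y * (res - 1))
        if z < zbuf[iy, ix]:                     # nearest point wins within a pass
            zbuf[iy, ix] = z
            img[iy, ix] = c
    return img

def stochastic_render(pts, colors, alpha=0.3, passes=64, res=128, seed=0):
    rng = np.random.default_rng(seed)
    acc = np.zeros((res, res, 3))
    for _ in range(passes):                      # averaging yields semi-transparent fusion
        acc += render_pass(pts, colors, alpha, res, rng)
    return acc / passes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.random((20000, 3))                 # toy "laser-scanned" points in [0,1]^3
    colors = rng.random((20000, 3))
    image = stochastic_render(pts, colors)
    print(image.shape, image.min(), image.max())
```

Because every pass uses an ordinary z-buffer and the passes are independent, no global depth sorting of the primitives is ever needed, which is the key to the artifact-free behavior the abstract describes.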
The GPU-Services project fits into the context of research and development of methods for processing three-dimensional sensor data in mobile robotics. These methods are called services in this project and include 3D point cloud pre-processing algorithms, segmentation of the data, separation and identification of planar zones (ground, roads), and detection of elements of interest (edges, obstacles). Due to the large amount of data to be processed in a short time, these services rely on parallel processing, using the GPU to perform partial or complete processing of the data. The project aims to provide services for an autonomous car, which requires the services to approach real-time processing: all processing of one frame should complete before the next frame arrives from the sensors (~10 to 20 Hz). The sensor data are structured as a point cloud, which lends itself well to parallel processing. The main difficulty, however, is the high rate of data received from the sensor (around 700,000 points/sec), and this motivates the project: to use the full potential of the sensor and to exploit the parallelism of GPU programming efficiently. The GPU services are divided into steps, each designed for speed through its intrinsic parallelism. The first step is to set up an environment for parallel processing development alongside the system already used in our autonomous car. The second step is an intelligent extraction and reorganization of the data provided by the sensor (a Velodyne multi-layer laser sensor). The third step is a pre-segmentation of non-planar data. The fourth step is to segment the data from the previous steps in order to find objects, curbs and the ground plane. The fifth step is to develop a methodology that unites the results of the previous steps in order to explore the topology of the environment, i.e., to structure the results into a topological representation of it.
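As a concrete illustration of one such service, the sketch below shows ground-plane extraction from a point cloud with RANSAC on the CPU. It is only a hedged stand-in for the ground/plane separation stage; the project's actual services run on the GPU and must keep up with Velodyne data rates.

```python
# Illustrative CPU sketch of one "service": dominant-plane (ground) extraction via RANSAC.
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.05, seed=0):
    """Return (plane_normal, d, inlier_mask) for the dominant plane n.x + d = 0."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ground = np.column_stack([rng.uniform(-10, 10, 5000), rng.uniform(-10, 10, 5000),
                              rng.normal(0, 0.02, 5000)])            # points near z = 0
    obstacles = rng.uniform([-10, -10, 0.2], [10, 10, 2.0], (1000, 3))
    cloud = np.vstack([ground, obstacles])
    normal, d, mask = ransac_plane(cloud)
    print("ground points found:", int(mask.sum()), "of", len(ground))
```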
ISBN (Print): 9781479952151
State-of-the-art cosmological simulations regularly contain billions of particles, providing scientists the opportunity to study the evolution of the Universe in great detail. However, the rate at which these simulations generate data severely taxes existing analysis techniques. Therefore, developing new scalable alternatives is essential for continued scientific progress. Here, we present a data-parallel, friends-of-friends halo finding algorithm that provides unprecedented flexibility in the analysis by extracting multiple linking lengths. Even for a single linking length, it is as fast as the existing techniques, and is portable to multi-threaded many-core systems as well as co-processing resources. Our system is implemented using PISTON and is coupled to an interactive analysis environment used to study halos at different linking lengths and track their evolution over time.
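The core of friends-of-friends halo finding, and the multi-linking-length flexibility mentioned above, can be sketched serially with a k-d tree and union-find: process neighbor pairs in order of increasing distance and snapshot the component labels at each requested linking length. This is only the underlying idea, not the paper's data-parallel PISTON implementation.

```python
# Illustrative serial sketch of friends-of-friends (FOF) halos for several linking lengths.
import numpy as np
from scipy.spatial import cKDTree

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]        # path halving
        i = parent[i]
    return i

def fof_multi(points, linking_lengths):
    """Return {linking_length: per-point component label array} for ascending lengths."""
    lengths = sorted(linking_lengths)
    pairs = cKDTree(points).query_pairs(max(lengths), output_type="ndarray")
    d = np.linalg.norm(points[pairs[:, 0]] - points[pairs[:, 1]], axis=1)
    parent = np.arange(len(points))
    labels, k = {}, 0
    for idx in np.argsort(d):                # union edges by increasing distance
        while k < len(lengths) and d[idx] > lengths[k]:
            labels[lengths[k]] = np.array([find(parent, i) for i in range(len(points))])
            k += 1
        a, b = pairs[idx]
        parent[find(parent, a)] = find(parent, b)
    for ll in lengths[k:]:                   # snapshot any remaining linking lengths
        labels[ll] = np.array([find(parent, i) for i in range(len(points))])
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pts = rng.random((2000, 3))
    halos = fof_multi(pts, [0.02, 0.05])
    print({ll: len(np.unique(lab)) for ll, lab in halos.items()})
```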
This paper extends and evaluates a family of dynamic ray scheduling algorithms that can be performed in-situ on large distributed memory parallel computers. The key idea is to consider both ray state and data accesses when scheduling ray computations. We compare three instances of this family of algorithms against two traditional statically scheduled schemes. We show that our dynamic scheduling approach can render data sets that are larger than aggregate system memory and that cannot be rendered by existing statically scheduled ray tracers. For smaller problems that fit in aggregate memory but are larger than typical shared memory, our dynamic approach is competitive with the best static scheduling algorithm.
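The scheduling idea, binning rays by the data domain they need next and preferring domains with many queued rays so each data load is amortized over many rays, can be caricatured with a toy single-node sketch. Everything below (the domain count, termination probability, and trace_in_domain stub) is hypothetical; it ignores distribution, real traversal, and the paper's specific scheduling policies.

```python
# Toy sketch of the queueing idea behind dynamic ray scheduling.
import random
from collections import defaultdict

NUM_DOMAINS = 8

def trace_in_domain(ray, domain):
    """Hypothetical traversal: a ray either terminates or requests a neighboring domain."""
    if random.random() < 0.4:
        return None                              # ray finished (hit or exited the scene)
    return (domain + random.choice((-1, 1))) % NUM_DOMAINS

def schedule(rays):
    queues = defaultdict(list)
    for ray in rays:                             # initial binning by entry domain
        queues[random.randrange(NUM_DOMAINS)].append(ray)
    loads = 0
    while queues:
        domain = max(queues, key=lambda d: len(queues[d]))   # fullest queue first
        batch = queues.pop(domain)
        loads += 1                                           # "load" this domain's data once
        for ray in batch:
            nxt = trace_in_domain(ray, domain)
            if nxt is not None:
                queues[nxt].append(ray)          # forward the ray to its next domain queue
    return loads

if __name__ == "__main__":
    random.seed(0)
    print("domain loads needed:", schedule(list(range(10000))))
```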
ISBN (Print): 9781479928736
In this paper, we present a novel scalable approach for visualizing multivariate unsteady flow data with Lagrangian-based Attribute Space Projection (LASP). The distances between spatiotemporal samples are evaluated from their attribute values along the advection directions in the flow field. The massive set of samples is then projected into 2D screen space for feature identification and selection. A hybrid parallel system, which tightly integrates a MapReduce-style particle tracer with a scalable projection algorithm, is designed to support large-scale analysis. Results show that the proposed methods and system are capable of visualizing features in unsteady flow by coupling multivariate analysis of vector and scalar attributes with projection.
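A minimal serial sketch of the LASP idea: advect each sample, record an attribute along its pathline, use distances between those attribute sequences as the sample-to-sample distance, and project to 2D with classical MDS. The flow field, attribute, integrator, and distance below are toy assumptions, not the paper's MapReduce-style tracer or its scalable projection algorithm.

```python
# Illustrative sketch of Lagrangian-based attribute-space projection.
import numpy as np

def velocity(p, t):                      # toy unsteady 2D flow
    return np.array([-p[1], p[0]]) + 0.3 * np.sin(t)

def attribute(p, t):                     # toy scalar field sampled along the pathline
    return np.sin(3 * p[0]) * np.cos(2 * p[1] + t)

def lagrangian_series(p0, t0=0.0, dt=0.05, steps=40):
    p, t, series = np.array(p0, float), t0, []
    for _ in range(steps):               # forward-Euler advection
        series.append(attribute(p, t))
        p, t = p + dt * velocity(p, t), t + dt
    return np.array(series)

def classical_mds(dist, dim=2):
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dist ** 2) @ J       # double centering
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    seeds = rng.uniform(-1, 1, (200, 2))
    series = np.array([lagrangian_series(s) for s in seeds])
    dist = np.linalg.norm(series[:, None, :] - series[None, :, :], axis=-1)
    proj = classical_mds(dist)           # 2D layout for feature identification
    print(proj.shape)
```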
ISBN (Print): 9781479936069
Performance analysis through visualization techniques usually suffers from semantic limitations due to the size of parallel applications. Most performance visualization tools rely on data aggregation to work at scale, without any attempt to evaluate the loss of information caused by such aggregations. This paper proposes a technique to evaluate the quality of aggregated representations, using measures from information theory, and to optimize such measures in order to build consistent multiresolution representations of large execution traces.
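One common instantiation of such information-theoretic measures is to compare the original data distribution with the distribution implied by the aggregate, for example via Kullback-Leibler divergence. The sketch below groups per-process values, replaces them with group means, and reports the resulting information loss; it is only meant to make the idea concrete, and the paper defines its own quality measures and optimization.

```python
# Sketch of an information-theoretic loss measure for an aggregated representation.
import numpy as np

def kl_divergence(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def aggregation_loss(values, group_size):
    values = np.asarray(values, float)
    n = len(values) - len(values) % group_size           # drop the ragged tail for simplicity
    v = values[:n]
    grouped = v.reshape(-1, group_size).mean(axis=1)     # aggregated representation
    approx = np.repeat(grouped, group_size)               # what the aggregate implies per process
    p, q = v / v.sum(), approx / approx.sum()             # normalize to distributions
    return kl_divergence(p, q)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    exec_times = rng.gamma(2.0, 1.0, 4096)                # synthetic per-process durations
    for g in (2, 16, 128, 1024):
        print(f"group size {g:5d}: loss = {aggregation_loss(exec_times, g):.4f} bits")
```

Coarser groupings generally lose more information, which is exactly the trade-off a multiresolution representation must balance against readability.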
ISBN (Print): 9781479928736
The development of supercomputers has made it possible to carry out complex simulations that produce datasets of exploding size. For visualizing such large-scale datasets, reducing the data size with compression methods is one of the most useful approaches. Moreover, parallelizing the compression algorithm can greatly improve efficiency and overcome memory-size limitations. However, in parallel compression algorithms, inter-processor communication is indispensable and becomes a bottleneck, especially in the common case where the number of processors is not a power of two. The parallel POD (proper orthogonal decomposition) compression algorithm [2] is one such example: the number of time steps must be a power of two for the binary-swap scheme. A method that fully resolves this problem at low computational cost is therefore highly desirable. In this paper, we propose such an approach, called the 2-3-4 combination approach, which is simple to implement and achieves high parallel performance. Furthermore, our method obtains good load balance across all processors. This is achieved by transforming the non-power-of-two problem into a power-of-two problem, so that the balanced structure of the binary-swap method can be fully exploited. We evaluate our approach by applying it to the parallel POD compression algorithm on the K computer.
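A minimal sketch of the grouping step behind such a 2-3-4 combination: partition the n items (time steps or processors) into groups of size 2, 3, or 4 so that the number of groups is a power of two, after which an ordinary binary-swap exchange applies. The exact grouping rule used in the paper may differ; combine_234 below only demonstrates that such a partition always exists and is cheap to compute.

```python
# Illustrative grouping: make the group count a power of two using group sizes 2, 3 or 4.
def combine_234(n):
    """Return group sizes (each 2, 3 or 4) for n items; the number of groups is a power of two."""
    assert n >= 2, "need at least two items to combine"
    k = 1 << ((n // 2).bit_length() - 1)   # largest power of two <= n/2, so 2k <= n < 4k
    base, extra = divmod(n, k)             # base is 2 or 3; 'extra' groups get one more item
    sizes = [base + 1] * extra + [base] * (k - extra)
    assert all(2 <= s <= 4 for s in sizes) and sum(sizes) == n
    return sizes

if __name__ == "__main__":
    for n in (6, 7, 11, 23, 100):
        sizes = combine_234(n)
        print(n, "->", len(sizes), "groups:", sizes)
```

For example, 23 items become 8 groups (seven of size 3 and one of size 2), and the subsequent binary swap then runs on a power-of-two number of participants with well-balanced group sizes.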
ISBN (Print): 9781479976300
Recognition is a hard problem due to the high dimensionality of input image data. Principal component analysis (PCA) is one of the most popular algorithms for reducing that dimensionality. The main constraint of PCA is the execution time required to update the model when new data are included; therefore, parallel computation is needed. Opening GPU architectures to general-purpose computation makes a powerful platform for parallel computation available. In this paper, a modified version of the fast PCA (MFPCA) algorithm is presented on the GPU architecture, and the suitability of the algorithm for the face recognition task is discussed. The performance and efficiency of the MFPCA algorithm are studied on large-scale datasets. Experimental results show a decrease in the MFPCA execution time while preserving the quality of the results.
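A common way to make PCA fast on image data, often used for eigenfaces, is to eigendecompose the small n x n Gram matrix instead of the huge d x d covariance when the number of images n is far below the pixel count d. The CPU sketch below illustrates that reduction only; the paper's MFPCA algorithm and its GPU mapping may differ.

```python
# Sketch of the "small Gram matrix" PCA trick for image data (n samples, d pixels, n << d).
import numpy as np

def fast_pca(images, n_components):
    """images: (n, d) flattened images. Returns (mean, components (k, d), projections (n, k))."""
    mean = images.mean(axis=0)
    A = images - mean                                        # centered data, shape (n, d)
    gram = A @ A.T                                           # small n x n matrix
    w, v = np.linalg.eigh(gram)                              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]
    w, v = w[idx], v[:, idx]
    components = (A.T @ v) / np.sqrt(np.maximum(w, 1e-12))   # lift to pixel space, unit norm
    return mean, components.T, A @ components

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    faces = rng.random((40, 64 * 64))                        # 40 synthetic 64x64 "face" images
    mean, comps, proj = fast_pca(faces, n_components=10)
    print(comps.shape, proj.shape)                           # (10, 4096) and (40, 10)
```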