Haptic feedback provides a sensory stimulus crucial for interacting with and analyzing three-dimensional spatio-temporal phenomena on surface visualizations. Given its ability to provide enhanced spatial perception and scene maneuverability, virtual reality (VR) catalyzes haptic interactions on surface visualizations. Various interaction modes, encompassing both mid-air and on-surface interactions, with or without assisting force stimuli, have been explored using haptic force-feedback devices. In this paper, we evaluate on-surface and assisted on-surface haptic interaction modes against a no-haptic interaction mode. A force-based haptic stylus is used for all three modalities; the on-surface mode uses collision-based forces, whereas the assisted on-surface mode adds a snapping force. We conducted a within-subjects user study involving fundamental interaction tasks performed on surface visualizations. Keeping a consistent visual design across all three modes, our study incorporates tasks that require localizing the highest, lowest, and random points on surfaces, and tasks that focus on brushing curves on surfaces with varying complexity and occlusion levels. Our findings show that participants took roughly the same time to brush curves across all interaction modes. They drew smoother curves with the on-surface interaction modes than with the no-haptic mode, and the assisted on-surface mode provided better accuracy than the on-surface mode. The on-surface mode was slower for point localization, but accuracy depended on the visual cues and occlusions associated with the tasks. Finally, we discuss participant feedback on using haptic force feedback as a tangible input modality and share takeaways to aid the design of haptics-based tangible interactions for surface visualizations.
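The abstract does not specify the force model, but a penalty-style contact force plus a proximity-based snapping term is a common way to realize such modes. Below is a minimal sketch, assuming a spring-based contact response and a simple spring-toward-surface snap; the gains, function names, and snap radius are illustrative placeholders, not the study's actual parameters.

```python
import numpy as np

def haptic_force(tip, surface_point, surface_normal, mode,
                 k_contact=600.0, k_snap=80.0, snap_radius=0.01):
    """Sketch of a per-frame force for a haptic stylus (forces in N, distances in m).

    tip            -- 3D stylus tip position
    surface_point  -- closest point on the surface to the tip
    surface_normal -- unit surface normal at that point
    mode           -- 'none', 'on_surface', or 'assisted'
    """
    force = np.zeros(3)
    if mode == 'none':
        return force  # no-haptic baseline: visual feedback only

    offset = tip - surface_point
    penetration = -np.dot(offset, surface_normal)
    if penetration > 0.0:
        # Collision response: penalty spring force pushing the tip
        # back out along the normal (the on-surface mode).
        force += k_contact * penetration * surface_normal

    if mode == 'assisted':
        # Additional snapping force: within a small radius, gently
        # pull the tip toward the surface so it stays attached.
        dist = np.linalg.norm(offset)
        if 0.0 < dist < snap_radius:
            force += -k_snap * offset  # spring toward the closest point
    return force
```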
Cryo-electron tomography (cryo-ET) is a new 3D imaging technique with unprecedented potential for resolving submicron structural details. Existing volume visualization methods, however, are not able to reveal details of interest due to the low signal-to-noise ratio. In order to design more powerful transfer functions, we propose leveraging soft segmentation as an explicit component of visualization for noisy volumes. Our technical realization is based on semi-supervised learning, where we combine the advantages of two segmentation algorithms. First, a weak segmentation algorithm provides good results for propagating sparse user-provided labels to other voxels in the same volume and is used to generate dense pseudo-labels. Second, a powerful deep-learning-based segmentation algorithm learns from these pseudo-labels to generalize the segmentation to other, unseen volumes, a task at which the weak segmentation algorithm fails completely. The proposed volume visualization uses the deep-learning-based segmentation as a component of segmentation-aware transfer function design. Appropriate ramp parameters can be suggested automatically through frequency distribution analysis. Furthermore, our visualization uses gradient-free ambient occlusion shading to further suppress the visual presence of noise and to give structural detail the desired prominence. The cryo-ET data studied in our technical experiments are based on the highest-quality tilt series of intact SARS-CoV-2 virions. Our technique demonstrates high impact in the target sciences, enabling visual data analysis of very noisy volumes that cannot be visualized with existing techniques.
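As a concrete illustration of the segmentation-aware transfer function idea, the sketch below maps per-voxel soft-segmentation probabilities to opacity through a linear ramp and suggests ramp endpoints from the probability distribution. The percentile heuristic stands in for the paper's frequency distribution analysis and is purely an assumption, as are all function names and defaults.

```python
import numpy as np

def suggest_ramp(seg_prob, lo_pct=60.0, hi_pct=95.0):
    """Suggest ramp start/end for a segmentation-aware opacity transfer
    function from the frequency distribution of per-voxel probabilities.
    The percentile choice is a placeholder for the paper's own
    distribution analysis."""
    p = seg_prob.ravel()
    return np.percentile(p, lo_pct), np.percentile(p, hi_pct)

def opacity(seg_prob, ramp_lo, ramp_hi, max_alpha=0.8):
    """Linear ramp: voxels below ramp_lo are transparent, voxels above
    ramp_hi get full weight, and values in between are interpolated."""
    t = np.clip((seg_prob - ramp_lo) / (ramp_hi - ramp_lo), 0.0, 1.0)
    return max_alpha * t

# Usage on a dummy soft-segmentation volume:
prob = np.random.rand(64, 64, 64).astype(np.float32)
lo, hi = suggest_ramp(prob)
alpha = opacity(prob, lo, hi)
```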
ISBN (print): 9798350348156
Image-based rendering (IBR) enables presenting real scenes interactively to viewers and is hence a key component for implementing VR telepresence. The quality of IBR results depends on the set of pre-captured views, the rendering algorithm used, and the camera parameters of the novel view to be synthesized. Numerous methods have been proposed for optimizing the set of captured images and enhancing the rendering algorithms. However, the regions from which IBR methods can synthesize satisfactory results have not yet been well studied. In this work, we introduce the concept of renderability, which predicts the quality of IBR results at any given viewpoint and view direction. The renderability values evaluated over the 5D camera parameter space thus form a field, which effectively guides viewpoint/trajectory selection for IBR, especially in challenging large-scale 3D scenes. To demonstrate this capability, we designed two VR applications: a path planner that allows users to navigate through sparsely captured scenes with controllable rendering quality, and a view selector that provides an overview of a scene from diverse, high-quality perspectives. We believe the renderability concept, the proposed evaluation method, and the suggested applications will motivate and facilitate the use of IBR in various interactive settings.
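A hedged sketch of how such a renderability field might be sampled: the paper's actual quality predictor is not given here, so the proxy score below (distance and direction agreement with the nearest captured camera) is a placeholder illustrating only the structure of evaluating a 5D field (3D position times 2D view direction).

```python
import numpy as np

def renderability(viewpoint, view_dir, captured_cams):
    """Placeholder renderability score in [0, 1] for a camera pose.
    Proxy heuristic only: closer to and better aligned with some
    pre-captured camera means a higher predicted quality."""
    best = 0.0
    for cam_pos, cam_dir in captured_cams:
        d = np.linalg.norm(viewpoint - cam_pos)
        ang = np.dot(view_dir, cam_dir)      # cosine of direction difference
        best = max(best, max(ang, 0.0) / (1.0 + d))
    return best

def renderability_field(positions, directions, captured_cams):
    """Evaluate the score over a sampled 5D grid (3D position x 2D
    direction), yielding the field that guides viewpoint/trajectory
    selection."""
    field = np.empty((len(positions), len(directions)))
    for i, p in enumerate(positions):
        for j, d in enumerate(directions):
            field[i, j] = renderability(p, d, captured_cams)
    return field
```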
We present an atmospheric model tailored for the interactive visualization of planetary surfaces. As the exploration of the solar system progresses with increasingly accurate missions and instruments, the faithful visualization of planetary environments is gaining interest in space research, mission planning, and science communication and education. Atmospheric effects are crucial in data analysis and provide contextual information for planetary data. Our model correctly accounts for the non-linear path of light inside the atmosphere (in Earth's case), light absorption by molecules and dust particles, such as the ozone layer and Martian dust, and a wavelength-dependent phase function for Mie scattering. The model focuses on interactivity, versatility, and customization, and a comprehensive set of interactive controls makes it possible to adapt its appearance dynamically. We demonstrate our results using Earth and Mars as examples; however, the model can be readily adapted for the exploration of other atmospheres, such as those of exoplanets. For Earth's atmosphere, we visually compare our results with pictures taken from the International Space Station and against the CIE clear sky model. The Martian atmosphere is reproduced based on available scientific data and feedback from domain experts, and is compared to images taken by the Curiosity rover. The work presented here has been implemented in the OpenSpace system, which enables interactive parameter setting and real-time feedback visualization, targeting presentations in a wide range of environments, from immersive dome theaters to virtual reality headsets.
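For the wavelength-dependent Mie phase function, a common analytic choice in real-time atmosphere rendering is the Cornette-Shanks approximation with a per-channel asymmetry parameter. The sketch below illustrates that pattern; the per-channel g values are illustrative placeholders, not the paper's fitted parameters.

```python
import numpy as np

def cornette_shanks(cos_theta, g):
    """Cornette-Shanks phase function, a common analytic stand-in for
    Mie scattering (omitting the 1/(4*pi) normalization)."""
    g2 = g * g
    num = 3.0 * (1.0 - g2) * (1.0 + cos_theta**2)
    den = 2.0 * (2.0 + g2) * (1.0 + g2 - 2.0 * g * cos_theta)**1.5
    return num / den

# Wavelength dependence via one asymmetry value per RGB channel.
# These numbers are illustrative placeholders only; e.g., dust may
# scatter longer wavelengths more strongly forward.
g_rgb = np.array([0.86, 0.80, 0.74])

def mie_phase_rgb(cos_theta):
    """Per-channel phase values for a given scattering angle."""
    return cornette_shanks(cos_theta, g_rgb)
```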
We present an efficient algorithm for visualizing the effect of a black hole on its distant surroundings as seen by an observer in a nearby orbit. Our solution is GPU-based and builds upon a two-step approach: we first derive an adaptive grid that maps the 360° view around the observer to the distorted celestial sky, which can be directly reused for different camera orientations. Using this grid, we can rapidly trace rays back to the observer through the distorted spacetime, avoiding the heavy workload of standard ray-tracing solutions at real-time rates. Using a novel interpolation technique, we can also simulate an observer path by smoothly transitioning between multiple grids. Our approach accepts real star catalogues and environment maps of the celestial sky and generates the resulting black-hole deformations in real time.
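The two-step structure can be sketched as follows: an expensive offline step fills a grid mapping observer view angles to directions on the distorted celestial sphere, and rendering then reduces to interpolated lookups, with a blend between grids standing in for observer motion. This is a simplified stand-in (plain bilinear interpolation and a linear blend, ignoring angle wraparound), not the paper's adaptive grid or its novel interpolation technique.

```python
import numpy as np

def lookup(grid, u, v):
    """Bilinear interpolation into the precomputed direction map.
    grid: (H, W, 2) array mapping observer view coordinates (u, v) in
    [0, 1] to (theta, phi) on the distorted celestial sphere, filled
    once by the expensive geodesic-tracing step."""
    H, W, _ = grid.shape
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * grid[y0, x0] + fx * grid[y0, x1]
    bot = (1 - fx) * grid[y1, x0] + fx * grid[y1, x1]
    return (1 - fy) * top + fy * bot

def along_path(grid_a, grid_b, t, u, v):
    """Blend two precomputed grids to approximate an observer moving
    between their positions (t in [0, 1]); a plain lerp is the simplest
    stand-in for the paper's smooth grid transition."""
    return (1 - t) * lookup(grid_a, u, v) + t * lookup(grid_b, u, v)
```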
ISBN (print): 9781728103006
The Individual Nystagmus Training Simulation Experience, or INSITE™, is a virtual human simulation program that helps train police officers to identify one of the strongest clues of alcohol impairment in drivers: nystagmus, an involuntary rapid movement of the eyeball. In this paper, we discuss the design-based research principles that helped us iteratively design, develop, test, and implement this police training simulation as part of Texas' Advanced Roadside Impaired Driving Enforcement (ARIDE) program. We describe the simulation, the educational implementation context, the learning activities, and the identified needs of the various users, including trainees, trainers, researchers, and program administrators. Most importantly, we discuss the evolution of the user experience over time in response to feedback. This paper focuses on: 1) design considerations for modeling physiologic symptoms of nystagmus in a virtual human; 2) the strategy for implementing INSITE™ into ARIDE police training sessions; 3) detail on the numerous iterations of the multi-leveled user interface and experience, based on qualitative and quantitative feedback from trainees, Standard Field Sobriety Test (SFST) instructors, and subject matter experts; and 4) an overall summary of our experience with this design to date.
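To make the modeling challenge in point 1) concrete, the toy sketch below animates horizontal gaze nystagmus as a sawtooth: a slow-phase drift with a fast corrective beat that appears beyond an onset angle tied to BAC. The onset relation is a rough SFST training heuristic, and every constant here is illustrative; this is not INSITE's validated physiologic model.

```python
import numpy as np

def nystagmus_offset(t, gaze_angle, bac, beat_hz=4.0, max_amp_deg=2.0):
    """Toy sawtooth model of horizontal gaze nystagmus for animating a
    virtual eye (all constants illustrative).

    t          -- time in seconds
    gaze_angle -- horizontal gaze eccentricity in degrees
    bac        -- blood alcohol concentration (e.g., 0.08)
    """
    # Rough SFST rule of thumb: onset angle ~ 50 - 100 * BAC,
    # i.e., the onset angle decreases as impairment rises.
    onset_deg = max(50.0 - bac * 100.0, 20.0)
    if abs(gaze_angle) < onset_deg:
        return 0.0  # no observable jerk before the onset angle
    # Jerk amplitude grows with eccentricity beyond onset; the sawtooth
    # is the slow-phase drift, and its instant reset stands in for the
    # fast corrective saccade.
    amp = max_amp_deg * min((abs(gaze_angle) - onset_deg) / 15.0, 1.0)
    phase = (t * beat_hz) % 1.0
    return np.sign(gaze_angle) * amp * (phase - 0.5)
```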