ISBN:
(Print) 9798350374025; 9798350374032
Virtual reality (VR) affords great freedom in how one represents oneself in virtual interactions through the selection of different avatars. However, it remains unclear which avatar should be chosen for a given social scenario. Social interaction often relies on the establishment of trust. Are people more likely to trust you if you select a highly realistic avatar, or is there flexibility in representation? This work presents a study exploring this question using a high-stakes medical scenario. Participants met three different doctors rendered at three stylization levels: realistic, caricatured, and an in-between "Mid" level. Trust ratings were largely consistent across the style levels, but participants were more likely to select doctors with the "Mid" level of stylization for a second opinion. There was a clear preference against one of the three doctor identities, with evidence that this may be related to movement features.
ISBN:
(Print) 9798350374025; 9798350374032
Foveated rendering (FR) improves the rendering performance of virtual reality (VR) by allocating less computational load to the peripheral field of view (FOV). Existing FR techniques are built on a radially symmetric regression model of human visual acuity. However, horizontal-vertical asymmetry (HVA) and vertical meridian asymmetry (VMA) in the cortical magnification factor (CMF) of the human visual system have been evidenced by retinotopy research in neuroscience, suggesting a radially asymmetric regression of visual acuity. In this paper, we begin with functional magnetic resonance imaging (fMRI) data, construct an anisotropic CMF model of the human visual system, and then introduce the first radially asymmetric regression model of rendering precision for FR applications. We conducted a pilot experiment to adapt the proposed model to VR head-mounted displays (HMDs). A user study demonstrates that retinotopic foveated rendering (RFR) provides participants with perceptually equal image quality compared to typical FR methods while reducing fragment shading by 27.2% on average, yielding roughly a 1/6 speedup in graphics rendering. We anticipate that our study will enhance the rendering performance of VR by bridging the gap between retinotopy research in neuroscience and computer graphics in VR.
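As a rough illustration of the idea, a radially asymmetric acuity falloff can be modeled so that acuity drops faster along the vertical meridian than the horizontal one (HVA), and faster still in the upper visual field (VMA). All constants below are hypothetical placeholders, not the fMRI-derived values from the paper.

```python
import math

def anisotropic_acuity(ecc_deg, polar_angle_rad,
                       k_h=1.0, k_v=1.2, k_upper=1.1, e2=2.3):
    """Illustrative radially asymmetric acuity falloff.

    ecc_deg:          eccentricity from the gaze point, in degrees.
    polar_angle_rad:  0 = horizontal meridian, +pi/2 = upper vertical.
    k_h, k_v:         hypothetical falloff gains for the horizontal and
                      vertical meridians (HVA: vertical falls off faster).
    k_upper:          hypothetical extra penalty for the upper field (VMA).
    e2:               hypothetical half-acuity eccentricity.
    """
    # Blend the horizontal and vertical falloff gains by polar angle.
    w_v = abs(math.sin(polar_angle_rad))
    k = (1.0 - w_v) * k_h + w_v * k_v
    # Penalise the upper visual field slightly more than the lower one.
    if math.sin(polar_angle_rad) > 0.0:
        k *= k_upper
    # Standard hyperbolic acuity model: acuity ~ 1 / (1 + k * ecc / e2).
    return 1.0 / (1.0 + k * ecc_deg / e2)
```

A foveated renderer could then map this per-direction acuity value to a shading rate per screen tile, rather than using a single radially symmetric falloff.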
ISBN:
(Print) 9798350374025; 9798350374032
Large-scale fluid simulation is widely useful in various virtual reality (VR) applications. While physics-based fluid animation holds the promise of generating highly realistic fluid details, it often imposes significant computational demands, particularly when simulating high-resolution fluid for VR. In this paper, we propose a novel foveated fluid simulation method that enhances both the visual quality and computational efficiency of physics-based fluid simulation in VR. To leverage the natural foveation of human vision, we divide the visible domain of the fluid simulation into foveal, peripheral, and boundary regions. Our foveated fluid system dynamically allocates computational resources, striking a balance between simulation accuracy and computational efficiency. We implement this approach using a multi-scale method. To evaluate its effectiveness, we conducted subjective studies. Our findings show a significant reduction in computational resource requirements, resulting in a speedup of up to 2.27 times. Crucially, our method preserves the visual quality of fluid animations at a level that is perceptually identical to full-resolution outcomes. Additionally, we investigate the impact of various factors, including particle radius and viewing distance, on the visual effects of fluid animations. Our work provides new techniques and evaluations tailored to real-time foveated fluid simulation in VR, which can enhance the efficiency and realism of fluids in VR applications.
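The region split can be sketched as classifying each fluid particle by its angular distance from the gaze direction. The angular thresholds and per-region substep counts below are illustrative assumptions, not the paper's values.

```python
import math

def classify_region(particle_dir, gaze_dir, foveal_deg=10.0, boundary_deg=20.0):
    """Classify a particle as foveal / boundary / peripheral by the angle
    between its view direction and the gaze direction (thresholds are
    hypothetical)."""
    dot = sum(p * g for p, g in zip(particle_dir, gaze_dir))
    norm = (math.sqrt(sum(p * p for p in particle_dir))
            * math.sqrt(sum(g * g for g in gaze_dir)))
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    if ang <= foveal_deg:
        return "foveal"
    if ang <= boundary_deg:
        return "boundary"
    return "peripheral"

# Hypothetical resource allocation: a coarser solve farther from the fovea.
SUBSTEPS = {"foveal": 4, "boundary": 2, "peripheral": 1}
```

A multi-scale solver would then run more (or finer) simulation substeps for foveal particles and fewer for peripheral ones, which is where the computational savings come from.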
ISBN:
(Print) 9798350374025; 9798350374032
Virtual reality (VR) technologies elicit strong emotions compared to traditional media, stimulating the brain in ways comparable to real-life interactions. This makes VR systems promising for research and for applications in training or rehabilitation that imitate realistic situations. Nonetheless, evaluating the user experience in immersive environments is daunting: the richness of the medium makes it challenging to synchronise context with behavioural metrics in order to provide fine-grained, personalised feedback or performance evaluation. The variety of scenarios and interaction modalities multiplies this difficulty of understanding users in the face of lifelike training scenarios, complex interactions, and rich context. We propose a task-based methodology that provides fine-grained descriptions and analyses of the experiential user experience (UX) in VR, which (1) aligns low-level tasks (e.g., take an object, go somewhere) with multivariate behaviour metrics: gaze, motion, and skin conductance; (2) defines performance components (i.e., attention, decision, and efficiency) with baseline values to evaluate task performance; and (3) characterises task performance with multivariate user behaviour data. To illustrate our approach, we apply the task-based methodology to an existing dataset from a road-crossing study in VR. We find that the task-based methodology allows us to better observe the experiential UX by highlighting fine-grained relations between behaviour profiles and task performance, opening pathways to personalised feedback and experiences in future VR applications.
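The baseline comparison in step (2) can be sketched minimally as scoring each performance component of a low-level task against its baseline value. The component names and the ratio-based scoring here are illustrative assumptions, not the paper's definitions.

```python
def task_performance(metrics, baselines):
    """Score one low-level task (e.g. "take an object") by comparing its
    behaviour-derived metrics against baseline values. Returns a
    per-component ratio; a value above 1.0 exceeds the baseline.
    Component names and scoring are hypothetical."""
    return {name: metrics[name] / baselines[name] for name in baselines}

# Hypothetical mapping: gaze dwell contributes to "attention",
# completion speed to "efficiency", choice latency to "decision".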
ISBN:
(Print) 9798350374025; 9798350374032
Gaze input is a popular hands-free input method that allows for intuitive and rapid pointing but lacks a confirmation mechanism. This study introduces GazePuffer, an interaction method that combines cheek puffing with gaze. We explored the design space of mouth gestures, proposed a set of candidate gestures, filtered them through subjective user evaluation, and selected five basic gestures and four variations. We determined the corresponding virtual reality (VR) actions for these gestures through brainstorming. We achieved an accuracy of 93.8% in recognizing the five basic mouth gestures using the built-in sensors of the head-mounted display. We compared GazePuffer with two baseline methods in target-selection tasks, demonstrating that GazePuffer is on par with Gaze&Pinch in throughput and speed, while slightly outperforming Gaze&Dwell. Finally, we showcased the applicability of GazePuffer in real VR interaction tasks, with users generally finding it usable and effortless.
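The point-then-confirm pattern can be sketched as a tiny state machine: gaze continuously updates the pointed-at target, and a detected puff confirms the current one. The event format and function are hypothetical, not GazePuffer's actual API.

```python
def gaze_puffer_select(events):
    """Minimal gaze-point + puff-confirm selection sketch.

    events: ordered list of ("gaze", target_id) or ("puff",) tuples.
    Returns the targets confirmed by a puff while being gazed at.
    """
    current = None   # target currently under the gaze pointer
    selected = []
    for ev in events:
        if ev[0] == "gaze":
            current = ev[1]          # gaze retargets the pointer
        elif ev[0] == "puff" and current is not None:
            selected.append(current)  # puff confirms the pointed target
    return selected
```

Separating pointing (gaze) from confirmation (puff) is what avoids the "Midas touch" problem of dwell-only selection.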
ISBN:
(Print) 9798350348392
We propose an automatic method to transfer the UI binding from a rigged model to a new target mesh. We use feed-forward neural networks to find the mapping functions between the bones and controllers of the source model. The learned mapping networks then become the initial weights of an auto-encoder, which is retrained using target controller-bone pairs obtained by the mesh-transfer and bone-decoupling method. Our system requires only a neutral expression of the target person but allows artists to customize other basic expressions; it is evaluated by the semantic reproducibility of basic expressions and by semantic similarity.
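As a toy stand-in for the controller-to-bone mapping idea, the sketch below fits a one-dimensional linear map bone = w * controller + b by gradient descent; the real system learns this mapping with feed-forward networks over full controller and bone vectors, so everything here is illustrative.

```python
def fit_linear_map(controllers, bones, lr=0.1, epochs=500):
    """Fit bone = w * controller + b by gradient descent on mean squared
    error. A deliberately tiny stand-in for the paper's learned mapping
    networks (hypothetical, 1-D only)."""
    w, b = 0.0, 0.0
    n = len(controllers)
    for _ in range(epochs):
        gw = gb = 0.0
        for c, y in zip(controllers, bones):
            err = (w * c + b) - y
            gw += 2.0 * err * c / n   # d(MSE)/dw
            gb += 2.0 * err / n       # d(MSE)/db
        w -= lr * gw
        b -= lr * gb
    return w, b
```

The fitted parameters would then play the role of the "initial weights" that a larger auto-encoder refines on target controller-bone pairs.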
ISBN:
(Digital) 9781665453653
ISBN:
(Print) 9781665453653
Today, virtual reality (VR) technologies have become increasingly widespread and have found strong applications in various domains. However, the fear of experiencing motion sickness is still an important barrier for new VR users. Instead of moving physically, VR users experience virtual locomotion, but their vestibular systems do not sense the self-motion that is visually induced by immersive displays. This mismatch between the visual and vestibular senses causes sickness. Previous solutions actively reduce the user's field of view, introduce an intruder into the view, or alter the user's navigation. In this paper we propose a passive approach that partially simplifies the virtual environment according to user navigation. A manual simplification approach has been proposed and prototyped to simplify the scene seen in the peripheral field of view. The optic flow is analyzed on the rendered images seen by users. The results show that the simplification reduces the perceived optic flow, which is the main cause of visually induced motion sickness (VIMS). This pilot study confirms the potential of reducing cybersickness through geometric simplification.
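A crude proxy for the optic-flow measurement idea is the mean absolute intensity change between consecutive rendered frames; the study analyses true optic flow, so this sketch (and its frame format, 2-D lists of grayscale values) is only illustrative.

```python
def mean_flow_proxy(frame_a, frame_b):
    """Mean absolute per-pixel intensity change between two frames,
    used here as a crude, hypothetical proxy for perceived optic flow.
    frame_a, frame_b: equally sized 2-D lists of grayscale values."""
    total = count = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for a, b in zip(row_a, row_b):
            total += abs(a - b)
            count += 1
    return total / count
```

Under this proxy, a successful peripheral simplification should lower the measured value for the same camera motion, matching the paper's finding that simplification reduces perceived optic flow.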
ISBN:
(Digital) 9781665453653
ISBN:
(Print) 9781665453653
SurfaceBrush and Brush2Model are two systems that enable users to create 3D objects intuitively using a hand-held controller in virtual reality (VR). These state-of-the-art methods start modeling either from dense collections of stroke ribbons drawn by professional artists, or from the most basic point, line, and polygon skeletons. Thus, it is very challenging for novices and amateurs to design complex models efficiently. We propose 3D-BrushVR, a novel VR modeling tool based on volume-skeleton convolution surfaces. It enables the user to draw with arbitrarily shaped brushes and generate 3D manifold objects by fusing the brushed primitives. Unlike existing VR drawing and modeling tools, our approach can directly take common but complex objects as primitives and assemble them using implicit surfaces, providing more flexible and powerful modeling ability. To achieve real-time performance, we introduce a new GPU-based method to calculate the volume fields of the resulting convolution surfaces. We also introduce several specially designed time-varying shaders to render the designed model for a better and more appealing modeling experience. We demonstrate the usability and modeling ability of our 3D-BrushVR interface by comparing it with state-of-the-art methods in an observational study. Experimental results further validate the effectiveness and flexibility of our approach.
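The implicit-surface fusion idea can be sketched as a scalar field that sums compactly supported kernel contributions from skeleton samples; the surface is the iso-contour of that field, so overlapping primitives blend smoothly. The kernel, radius, and iso-value below are illustrative, not 3D-BrushVR's actual formulation.

```python
def field_value(p, skeleton_points, radius=1.0):
    """Summed-kernel scalar field: a simplified stand-in for a
    volume-skeleton convolution surface. Each skeleton sample adds a
    compactly supported (Wyvill-style) kernel contribution."""
    v = 0.0
    r2 = radius * radius
    for s in skeleton_points:
        d2 = sum((a - b) ** 2 for a, b in zip(p, s))
        if d2 < r2:
            v += (1.0 - d2 / r2) ** 2  # falls smoothly to 0 at the radius
    return v

def inside(p, skeleton_points, iso=0.5, radius=1.0):
    """A point is inside the blended shape when the field exceeds iso."""
    return field_value(p, skeleton_points, radius) >= iso
```

Because contributions simply add, placing two skeletons near each other thickens and fuses the shapes automatically, which is the property that makes assembling complex primitives with implicit surfaces attractive.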
ISBN:
(Print) 9781665412988
Head-mounted displays (HMDs) can provide users with an immersive virtual reality (VR) experience, but are often limited to viewing a single environment or data set at a time. In this paper, we describe a system of networked applications whereby co-located users in the real world can use a large-scale display wall to collaborate and share data with immersed users wearing HMDs. Our work focuses on the sharing of 360-degree surround-view panoramic images and contextual annotations. The large-scale display wall affords non-immersed users the ability to view a multitude of contextual information, while the HMDs afford users the ability to immerse themselves in a virtual scene. The asymmetric virtual reality collaboration between immersed and non-immersed individuals can lead to deeper understanding and the feeling of a shared experience. We highlight a series of use cases: two digital humanities projects that capture real locations using a 360-degree camera, and one scientific discovery project that uses computer-generated 360-degree surround-view panoramas. In all cases, groups can benefit from both the immersive capabilities of HMDs and the collaborative affordances of large-scale display walls, and a unified experience is created for all users.
ISBN:
(Print) 9781665418386
We envision a convenient telepresence system available to users anywhere, anytime. Such a system requires displays and sensors embedded in commonly worn items such as eyeglasses, wristwatches, and shoes. To that end, we present a standalone real-time system for the dynamic 3D capture of a person, relying only on cameras embedded in a head-worn device and on Inertial Measurement Units (IMUs) worn on the wrists and ankles. Our prototype system egocentrically reconstructs the wearer's motion via learning-based pose estimation, which fuses inputs from visual and inertial sensors that complement each other, overcoming challenges such as inconsistent limb visibility in head-worn views, as well as pose ambiguity from sparse IMUs. The estimated pose is continuously re-targeted to a pre-scanned surface model, resulting in a high-fidelity 3D reconstruction. We demonstrate our system by reconstructing various human body movements and show that our visual-inertial learning-based method, which runs in real time, outperforms both visual-only and inertial-only approaches. We captured an egocentric visual-inertial 3D human pose dataset, publicly available at https://***/site/youngwooncha/egovip, for training and evaluating similar methods.
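The complementary nature of the two sensor streams can be sketched as a confidence-weighted blend: when a limb is occluded in the head-worn view, the visual confidence drops and the IMU estimate dominates. The paper's fusion is learned, so this fixed blend and its interface are purely illustrative.

```python
def fuse(visual_est, inertial_est, visual_conf):
    """Confidence-weighted blend of visual and inertial pose estimates
    (e.g. per-joint angles, radians). A hand-written, hypothetical
    stand-in for the paper's learned visual-inertial fusion.

    visual_conf: 1.0 = fully trust the camera, 0.0 = fully trust IMUs.
    """
    return [visual_conf * v + (1.0 - visual_conf) * i
            for v, i in zip(visual_est, inertial_est)]
```

A learned fusion network effectively makes this weighting adaptive per joint and per frame, which is why it outperforms both visual-only and inertial-only baselines.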