Details
ISBN: (Print) 9781450390958
The WebXR Device API allows the creation of web browser-based Extended Reality (XR) applications, i.e. Virtual Reality (VR) and Augmented Reality (AR) applications, by providing access to input and output capabilities from AR and VR devices. WebXR applications can be experienced through the WebXR-supported browsers of standalone and PC-connected VR headsets, AR headsets, mobile devices with or without headsets, and personal computers. WebXR has been growing in popularity since its introduction due to the several benefits it promises, such as allowing creators to utilise WebGL's rich development ecosystem; create cross-platform and future-proof XR applications; and create applications that can be experienced in VR or AR with minimal code changes. However, research has not been conducted to understand the experiences of WebXR creators. To address this gap, we conducted a qualitative study that involved interviews with 11 WebXR creators with diverse backgrounds aimed at understanding their experiences, and in this paper, we present 8 key challenges reported by these creators.
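The cross-platform promise described above rests on the WebXR Device API's feature-detection and session model: the same page can probe for VR or AR support and request whichever immersive mode the device offers. A minimal sketch (helper names are ours, not from the paper; `navigator.xr` exists only in WebXR-capable browsers):

```javascript
// Hedged sketch of WebXR feature detection and session startup.
// In a browser, pass navigator.xr; elsewhere it is undefined and
// the helpers degrade gracefully.

async function detectXRModes(xr) {
  // Probe which immersive modes the current device/browser supports.
  if (!xr) return { vr: false, ar: false };
  const [vr, ar] = await Promise.all([
    xr.isSessionSupported("immersive-vr"),
    xr.isSessionSupported("immersive-ar"),
  ]);
  return { vr, ar };
}

async function startSession(xr, mode) {
  // mode is "immersive-vr" or "immersive-ar"; browsers require this
  // call to originate from a user gesture (e.g. a button click).
  return xr.requestSession(mode, { optionalFeatures: ["local-floor"] });
}
```

Because both modes go through the same `requestSession` call, switching an application between VR and AR largely amounts to changing the mode string, which is the "minimal code changes" benefit the abstract refers to.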
The Extensible 3D (X3D) Graphics Architecture ISO/IEC 19775-1 International Standard version 4.0 is a major release that includes extensive support for version 2.0 of glTF (glTF2), a standard file format for 3D scenes and models supporting geometry, appearance, a scene-graph hierarchical structure, and model animation. X3D version 4 (X3D4) authors have the option to Inline (i.e. load) glTF models directly or else utilize native X3D nodes to create corresponding 3D models. Physically-based rendering (PBR) and non-photorealistic rendering (NPR), with corresponding lighting techniques, are each important additions to the X3D4 rendering model. These lighting models are compatible and can coexist in real time with legacy X3D3 and VRML97 models (which utilize classic Phong shading and texturing), either independently or composed together into a single scene. The X3D4 approach to representing glTF2 characteristics is defined in complete detail in order to be functionally identical, thus having the potential to achieve the greatest possible interoperability of 3D models across the World Wide Web. Nevertheless, a persistent problem remains in that glTF renderers do not always appear to produce visually identical results. Best practices for mapping glTF structures to X3D4 nodes are emerging to facilitate consistent conversion. This paper describes a variety of techniques to help confirm rendering consistency of X3D4 players, both when loading glTF assets and when representing native X3D4 conversions. Development of an X3D4 conformance archive corresponding to the publicly available glTF examples archive is expected to reinforce the development of visually correct software renderers capable of identical X3D4 and glTF presentation.
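The Inline option mentioned above lets an X3D4 scene reference a glTF2 asset directly by URL, the same way it references other X3D scenes. A minimal sketch in the XML encoding (the model file name is a placeholder, not from the paper):

```xml
<!-- Hedged sketch: an X3D4 scene inlining a glTF 2.0 model directly.
     "model.gltf" is a placeholder asset name. -->
<X3D profile="Interchange" version="4.0">
  <Scene>
    <Inline url='"model.gltf"'/>
  </Scene>
</X3D>
```

The alternative the abstract names is to convert the same content into native X3D4 nodes (e.g. PBR materials on standard geometry), which is where consistent glTF-to-X3D4 mapping practices matter.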
Upper limb segmentation in egocentric vision is a challenging and nearly unexplored task that extends the well-known hand localization problem and can be crucial for a realistic representation of users' limbs in immersive and interactive environments, such as VR/MR applications designed for web browsers that are a general-purpose solution suitable for any device. Existing hand and arm segmentation approaches require a large amount of well-annotated data. Hence, different annotation techniques were designed, and several datasets were created. Such datasets are often limited to synthetic and semi-synthetic data that do not include the whole limb and differ significantly from real data, leading to poor performance in many realistic cases. To overcome the limitations of previous methods and the challenges inherent in both egocentric vision and segmentation, we trained several segmentation networks based on the state-of-the-art DeepLabv3+ model, collecting a large-scale comprehensive dataset. It consists of 46 thousand real-life and well-labeled RGB images with a great variety of skin colors, clothes, occlusions, and lighting conditions. In particular, we carefully selected the best data from existing datasets and added our EgoCam dataset, which includes new images with accurate labels. Finally, we extensively evaluated the trained networks in unconstrained real-world environments to find the best model configuration for this task, achieving promising and remarkable results in diverse scenarios. The code, the collected egocentric upper limb segmentation dataset, and a video demo of our work will be available on the project page(1).
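Evaluating segmentation networks like those above typically means comparing predicted binary masks against ground-truth labels. A common metric for this is per-pixel intersection-over-union (IoU); the abstract does not name its metric, so this is an illustrative sketch of our own choosing:

```javascript
// Hedged sketch: per-pixel intersection-over-union (IoU) between a
// predicted binary limb mask and a ground-truth mask. A standard
// segmentation metric, shown for illustration; the paper's exact
// evaluation protocol is not stated in the abstract.

function maskIoU(pred, gt) {
  // pred, gt: same-shape 2D arrays of 0/1 pixel labels.
  let inter = 0;
  let union = 0;
  for (let r = 0; r < pred.length; r++) {
    for (let c = 0; c < pred[r].length; c++) {
      if (pred[r][c] && gt[r][c]) inter++;
      if (pred[r][c] || gt[r][c]) union++;
    }
  }
  // Two empty masks agree perfectly by convention.
  return union ? inter / union : 1.0;
}
```

For example, a prediction covering two pixels of which one overlaps a one-pixel ground truth scores an IoU of 0.5.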
Extended reality (XR) collaboration enables collaboration between physical and virtual spaces. Recent XR collaboration studies have focused on sharing and understanding the overall situation of the objects of interest (OOIs) and their surrounding ambient objects (AOs), rather than simply recognizing the existence of OOIs. The sharing of the overall situation is achieved using three-dimensional (3D) models that replicate objects existing in the physical workspace. There are two approaches for creating the models: pre-reconstruction and real-time reconstruction. The pre-reconstruction approach takes considerable time to create polygon meshes precisely, and the real-time reconstruction approach requires considerable time to install numerous sensors to perform accurate 3D scanning. In addition, these approaches are difficult to use for collaboration in a location beyond the reconstructed space, making them impractical for actual XR collaboration. The approach proposed in this study separates the objects that form the physical workspace into OOIs and AOs, models only the OOIs as polygon meshes in advance, and reconstructs the AOs into a point cloud using light detection and ranging (LiDAR) technology for collaboration. The reconstructed point cloud is shared with remote collaborators through WebRTC, a web-based peer-to-peer networking technology with low latency. Each remote collaborator collects the delivered point cloud to form a virtual space, so that they can intuitively understand the situation at a local site. Because our approach does not create polygon meshes for all objects existing at the local site, we can save time to prepare
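Sharing a LiDAR point cloud over WebRTC amounts to serializing point coordinates into a binary buffer that an `RTCDataChannel` can send. A minimal sketch of that packing step (channel setup is omitted and all helper names are ours; the paper's actual wire format is not described in the abstract):

```javascript
// Hedged sketch: packing point-cloud data into a transferable
// Float32Array for transmission over a WebRTC data channel, e.g.
// channel.send(packPoints(points).buffer). Helper names and the
// xyz-triplet layout are illustrative assumptions.

function packPoints(points) {
  // points: array of { x, y, z } coordinates (meters).
  const out = new Float32Array(points.length * 3);
  points.forEach((p, i) => {
    out[i * 3] = p.x;
    out[i * 3 + 1] = p.y;
    out[i * 3 + 2] = p.z;
  });
  return out;
}

function unpackPoints(buffer) {
  // Inverse of packPoints, run by the remote collaborator before
  // merging the points into the shared virtual space.
  const f = new Float32Array(buffer);
  const points = [];
  for (let i = 0; i < f.length; i += 3) {
    points.push({ x: f[i], y: f[i + 1], z: f[i + 2] });
  }
  return points;
}
```

Binary data channels avoid the per-point overhead of text encodings such as JSON, which matters at the update rates a live LiDAR stream produces.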