ISBN (print): 9781450349819
The predominant interaction paradigm of current audio spatialization tools, which are primarily geared towards expert users, imposes a design process in which users are characterized as stationary, limiting the application domain of these tools. Navigable 3D sonic virtual realities, on the other hand, can support many applications ranging from soundscape prototyping to spatial data representation. Although modern game engines provide a limited set of audio features to create such sonic environments, their interaction methods are inherited from the graphical design features of those systems and are not specific to the auditory modality. To address these limitations, we introduce INVISO, a novel web-based user interface for designing and experiencing rich and dynamic sonic virtual realities. Our interface enables both novice and expert users to construct complex immersive sonic environments with 3D dynamic sound components. INVISO is platform-independent and facilitates a variety of mixed reality applications, such as those where users can simultaneously experience and manipulate a virtual sonic environment. In this paper, we detail the interface design considerations for our audio-specific VR tool. To evaluate the usability of INVISO, we conduct two user studies: the first demonstrates that our visual interface effectively facilitates the generation of creative audio environments; the second demonstrates that both expert and non-expert users are able to use our software to accurately recreate complex 3D audio scenes.
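At the core of any browser-based spatializer such as the one INVISO relies on is a per-source computation of distance attenuation and directional panning from the listener's and the source's positions. The abstract does not detail INVISO's rendering pipeline, so the sketch below only illustrates that kind of computation; the function name and parameters are hypothetical.

```python
import math

def spatialize_gains(source_xyz, listener_xyz, ref_distance=1.0, rolloff=1.0):
    """Inverse-distance attenuation plus equal-power stereo panning.

    A simplified stand-in for the per-source computation a 3D audio renderer
    performs (e.g. a Web Audio PannerNode); INVISO's actual pipeline is not
    described at this level in the abstract. The listener is assumed to face
    the -z direction.
    """
    dx = source_xyz[0] - listener_xyz[0]
    dy = source_xyz[1] - listener_xyz[1]
    dz = source_xyz[2] - listener_xyz[2]
    dist = max(math.sqrt(dx * dx + dy * dy + dz * dz), ref_distance)

    # Inverse-distance rolloff: unity gain at the reference distance.
    distance_gain = ref_distance / (ref_distance + rolloff * (dist - ref_distance))

    # Azimuth of the source in the listener's horizontal plane (0 = straight ahead).
    azimuth = math.atan2(dx, -dz)
    pan = max(-1.0, min(1.0, azimuth / (math.pi / 2)))  # clamp to hard left/right

    # Equal-power panning law.
    left = distance_gain * math.cos((pan + 1.0) * math.pi / 4.0)
    right = distance_gain * math.sin((pan + 1.0) * math.pi / 4.0)
    return left, right

if __name__ == "__main__":
    # A source two metres ahead of and one metre to the right of the listener.
    print(spatialize_gains((1.0, 0.0, -2.0), (0.0, 0.0, 0.0)))
```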
ISBN (print): 9781450349819
The rise of Maker communities and open-source electronic prototyping platforms has made electronic circuit projects increasingly popular around the world. Although there are software tools that support the debugging and sharing of circuits, they require users to manually create the virtual circuits in software, which can be time-consuming and error-prone. We present CircuitSense, a system that automatically recognizes the wires and electronic components placed on breadboards. It uses a combination of passive sensing and active probing to detect and generate the corresponding circuit representation in software in real time. CircuitSense bridges the gap between the physical and virtual representations of circuits. It enables users to interactively construct and experiment with physical circuits while gaining the benefits of software tools. It also dramatically simplifies the sharing of circuit designs with online communities.
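The "corresponding circuit representation" mentioned above is naturally a netlist: a mapping from detected components to the electrical nets formed by breadboard rows and the wires joining them. The sketch below shows one plausible way to derive such a netlist from detected parts; the data structures and names are hypothetical and are not taken from CircuitSense.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    """One detected part and the breadboard nodes its pins touch."""
    kind: str    # e.g. "resistor", "led", "wire"
    pins: tuple  # breadboard node ids, e.g. ("row12", "row15")

def build_netlist(components):
    """Group breadboard nodes into electrical nets via union-find.

    A wire merges the nets of its two endpoints; every other component is
    attached to whichever nets its pins end up in.
    """
    parent = {}

    def find(n):
        parent.setdefault(n, n)
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path compression
            n = parent[n]
        return n

    def union(a, b):
        parent[find(a)] = find(b)

    for comp in components:
        if comp.kind == "wire":
            union(comp.pins[0], comp.pins[1])

    nets = defaultdict(list)
    for comp in components:
        if comp.kind != "wire":
            net_ids = tuple(sorted(find(p) for p in comp.pins))
            nets[net_ids].append(comp.kind)
    return dict(nets)

if __name__ == "__main__":
    parts = [
        Component("wire", ("row3", "row10")),
        Component("resistor", ("row3", "row7")),
        Component("led", ("row7", "row10")),
    ]
    # The resistor and LED end up on the same pair of nets via the wire.
    print(build_netlist(parts))
```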
ISBN (print): 9781450349819
We present iSphere, a flying spherical display that can show bright, high-resolution images in all directions from anywhere in 3D space. Our goal is to build a new platform on which arbitrary bodies can physically and directly emerge in the real world. iSphere flies by itself using a built-in drone and creates a spherical display by rotating arc-shaped light-emitting diode (LED) tapes around the drone. Because of the persistence of human vision, the rotating tapes appear as a spherical display flying in the sky. The proposed method yields large display surfaces, high resolution, drone mobility, high visibility, and a 360-degree field of view. Previous approaches fail to match these characteristics because of problems with aerodynamics and payload. We construct a prototype to validate the proposed method, discuss the unique characteristics and benefits of flying spherical display surfaces, and describe application scenarios for iSphere such as guidance, signage, and telepresence.
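A persistence-of-vision display of this kind works by synchronizing which image column the rotating LED arc lights with the arc's current angle. The abstract does not give iSphere's refresh scheme, so the following is only an illustrative sketch of that angle-to-column mapping and the per-column time budget; the function names and figures are hypothetical.

```python
import math

def column_for_angle(angle_rad, image_columns):
    """Pick the image column an LED arc should light at its current angle.

    Persistence-of-vision displays refresh a rotating strip fast enough that
    the eye fuses the columns into a continuous surface; iSphere's exact
    scheme is not described in the abstract, so this mapping is illustrative.
    """
    angle = angle_rad % (2.0 * math.pi)
    return int(angle / (2.0 * math.pi) * image_columns) % image_columns

def seconds_per_column(rpm, image_columns):
    """Time available to light each column during one revolution."""
    seconds_per_rev = 60.0 / rpm
    return seconds_per_rev / image_columns

if __name__ == "__main__":
    # A hypothetical arc spinning at 1800 rpm driving a 256-column panorama.
    print(column_for_angle(math.pi / 2, 256))                      # column at 90 degrees
    print(f"{seconds_per_column(1800, 256) * 1e6:.1f} us per column")
```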
ISBN (print): 9781450355483
Graphical user interfaces (GUIs) on mobile devices rely on touch input on a 2D surface. With advances in augmented and virtual reality, possibilities for 3D GUIs will emerge; however, few design heuristics exist for them. This paper reports an experiment that collates quantitative and qualitative responses from 15 users to explore the usability problems likely to be encountered when a 2D interface element, such as a number keypad, is replaced with a 3D interface element in virtual reality. Whether an interface with 3D elements can perform better than existing 2D GUIs remains an open research question. The results indicate user motivation towards using an interface built from 3D elements. The paper discusses issues of interaction in 2D and 3D virtual spaces and their possible implications for upcoming 3D VR environments.
ISBN (print): 9781450349819
Ungrounded haptic devices for virtual reality (VR) applications lack the ability to convincingly render the sensations of a grasped virtual object's rigidity and weight. We present Grabity, a wearable haptic device designed to simulate kinesthetic pad-opposition grip forces and weight for grasping virtual objects in VR. The device is mounted on the index finger and thumb and enables precision grasps with a wide range of motion. A unidirectional brake creates rigid grasping force feedback. Two voice coil actuators create virtual force tangential to each finger pad through asymmetric skin deformation; these forces can be perceived as the gravitational and inertial forces of virtual objects. The rotational orientation of the voice coil actuators is passively aligned with the real direction of gravity through a revolute joint, so the virtual forces always point downward. This paper evaluates the performance of Grabity through two user studies, which show a promising ability to simulate different levels of weight with convincing object rigidity. The first user study shows that Grabity can convey various magnitudes of weight and force sensations to users by manipulating the amplitude of the asymmetric vibration. The second user study shows that users can differentiate different weights in a virtual environment using Grabity.
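The weight illusion described above comes from asymmetric vibration: a brief, strong displacement in one direction followed by a slower return, with the amplitude controlling the perceived force magnitude. Grabity's actual actuator drive signal is not given in the abstract, so the waveform generator below is only an illustrative sketch; the function and its parameters are hypothetical.

```python
import math

def asymmetric_vibration(duration_s, freq_hz, amplitude, sample_rate=8000,
                         sharpness=4.0):
    """Generate an asymmetric oscillation: a short strong pull in one
    direction followed by a slower, weaker return.

    Skin-deformation devices exploit the fact that the fast phase is felt
    more strongly than the slow phase, so the net percept is a directional
    force. This is not Grabity's actual waveform; `amplitude` merely stands
    in for the virtual weight being rendered.
    """
    samples = []
    n = int(duration_s * sample_rate)
    for i in range(n):
        phase = (i * freq_hz / sample_rate) % 1.0
        # Compress the "pull" into the first 1/sharpness of each cycle.
        if phase < 1.0 / sharpness:
            value = amplitude * math.sin(phase * sharpness * math.pi)
        else:
            value = -amplitude / (sharpness - 1.0) * math.sin(
                (phase - 1.0 / sharpness) / (1.0 - 1.0 / sharpness) * math.pi)
        samples.append(value)
    return samples

if __name__ == "__main__":
    light = asymmetric_vibration(0.05, 40, amplitude=0.3)   # "lighter" object
    heavy = asymmetric_vibration(0.05, 40, amplitude=1.0)   # "heavier" object
    print(max(light), max(heavy))
```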
ISBN (print): 9781450349819
Eye contact is an important non-verbal cue in social signal processing and is promising as a measure of overt attention in human-object interactions and attentive user interfaces. However, robust detection of eye contact across different users, gaze targets, camera positions, and illumination conditions is notoriously challenging. We present a novel method for eye contact detection that combines a state-of-the-art appearance-based gaze estimator with a novel approach for unsupervised gaze target discovery, i.e., one that does not require tedious and time-consuming manual data annotation. We evaluate our method in two real-world scenarios: detecting eye contact at the workplace from cameras mounted on target objects, including the main work display, and detecting eye contact during everyday social interactions with the wearer of a head-mounted egocentric camera. We empirically evaluate the performance of our method in both scenarios and demonstrate its effectiveness for detecting eye contact independent of target object type and size, camera position, and user and recording environment.
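The key idea behind unsupervised gaze target discovery is that, over time, frames with eye contact produce gaze estimates that cluster tightly around the target, while other frames scatter. The toy sketch below illustrates that intuition with a naive density-based cluster search and a distance threshold; it is not the paper's actual algorithm, and all names and thresholds are hypothetical.

```python
import math
import random

def discover_gaze_cluster(gaze_points, radius=0.1, min_fraction=0.2):
    """Find a dense cluster of 2D gaze estimates without manual labels.

    Picks the point with the most neighbours within `radius` and returns the
    mean of that neighbourhood as the discovered target location. Purely an
    illustration of the idea, not the paper's method.
    """
    best_center, best_members = None, []
    for candidate in gaze_points:
        members = [p for p in gaze_points if math.dist(p, candidate) <= radius]
        if len(members) > len(best_members):
            best_center, best_members = candidate, members
    if len(best_members) < min_fraction * len(gaze_points):
        return None, []
    cx = sum(p[0] for p in best_members) / len(best_members)
    cy = sum(p[1] for p in best_members) / len(best_members)
    return (cx, cy), best_members

def is_eye_contact(gaze_point, cluster_center, radius=0.1):
    """Label a new frame by its distance to the discovered target cluster."""
    return cluster_center is not None and math.dist(gaze_point, cluster_center) <= radius

if __name__ == "__main__":
    random.seed(0)
    # Simulated gaze estimates: 60 on-target frames near the camera, 40 elsewhere.
    on_target = [(random.gauss(0.0, 0.03), random.gauss(0.0, 0.03)) for _ in range(60)]
    off_target = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(40)]
    center, _ = discover_gaze_cluster(on_target + off_target)
    print(center, is_eye_contact((0.02, -0.01), center))
```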
ISBN (print): 9781450349819
We propose a novel interactive system to simplify the process of indoor 3D CAD room modeling. Traditional room modeling methods require users to measure room and furniture dimensions and manually select models that match the scene from large catalogs. Users then employ a mouse-and-keyboard interface to construct walls and place the objects in their appropriate locations. In contrast, our system leverages the sensing capabilities of a 3D-aware mobile device, recent advances in object recognition, and a novel augmented reality user interface to capture indoor 3D room models in situ. With a few taps, a user marks the surface of an object and takes a photo; the system then retrieves a matching 3D model from a large online database and places it into the scene. User studies indicate that this modality is significantly quicker, more accurate, and requires less effort than traditional desktop tools.
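Placing a retrieved model "into the scene" ultimately means constructing a pose from the tapped surface: a position at the tap point and an orientation whose up axis follows the surface normal. The abstract does not describe the system's placement logic, so the sketch below is only an illustrative construction with hypothetical names.

```python
import math

def placement_pose(tap_point, surface_normal, facing_hint=(0.0, 0.0, 1.0)):
    """Build a simple pose (origin plus orthonormal axes) for dropping a
    retrieved 3D model onto a tapped surface.

    Illustration only: the model's up axis is aligned with the surface normal
    and its position is the tap point; the paper's placement logic is not
    described in the abstract.
    """
    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    up = normalize(surface_normal)
    # Fall back to the world x axis if the facing hint is parallel to the normal.
    if abs(sum(a * b for a, b in zip(facing_hint, up))) > 0.99:
        facing_hint = (1.0, 0.0, 0.0)
    right = normalize(cross(up, facing_hint))
    forward = cross(right, up)
    return {"origin": tap_point, "right": right, "up": up, "forward": forward}

if __name__ == "__main__":
    # Tap on a tabletop: a horizontal surface whose normal points up.
    print(placement_pose((0.4, 0.75, -1.2), (0.0, 1.0, 0.0)))
```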
ISBN (print): 9781450349819
Tutorials are vital for helping people perform complex software-based tasks in domains such as programming, data science, system administration, and computational research. However, it is tedious to create detailed step-by-step tutorials for tasks that span multiple interrelated GUI and command-line applications. To address this challenge, we created Torta, an end-to-end system that automatically generates step-by-step GUI and command-line app tutorials by demonstration, provides an editor to trim, organize, and add validation criteria to these tutorials, and provides a web-based viewer that can validate step-level progress and automatically run certain steps. The core technical insight that underpins Torta is that combining operating-system-wide activity tracing with screencast recording makes it easier to generate mixed-media (text + video) tutorials that span multiple GUI and command-line apps. An exploratory study with 10 computer science teaching assistants (TAs) found that they all preferred the experience and results of using Torta to record programming and sysadmin tutorials relevant to the classes they teach over writing tutorials manually. A follow-up study with 6 students found that they all preferred following the Torta tutorials created by those TAs over the manually written versions.
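Combining OS-wide activity tracing with screencast recording suggests a data model in which trace events carry timestamps that index into the video and steps are contiguous groups of events. The sketch below shows one plausible such data model and a simple segmentation heuristic; it is not Torta's actual schema, and all names and thresholds are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TraceEvent:
    """One logged action from OS-wide activity tracing."""
    timestamp: float   # seconds since recording start
    app: str           # e.g. "terminal", "editor", "browser"
    description: str   # e.g. the command run or the button clicked

@dataclass
class TutorialStep:
    """A mixed-media step: prose derived from trace events plus the matching
    slice of the screencast. Field names are hypothetical, not Torta's schema."""
    events: List[TraceEvent] = field(default_factory=list)

    @property
    def video_span(self):
        return (self.events[0].timestamp, self.events[-1].timestamp)

def segment_into_steps(events, gap_threshold=5.0):
    """Group consecutive trace events into steps whenever the user pauses
    longer than `gap_threshold` seconds or switches applications."""
    steps, current = [], []
    for ev in sorted(events, key=lambda e: e.timestamp):
        if current and (ev.timestamp - current[-1].timestamp > gap_threshold
                        or ev.app != current[-1].app):
            steps.append(TutorialStep(current))
            current = []
        current.append(ev)
    if current:
        steps.append(TutorialStep(current))
    return steps

if __name__ == "__main__":
    log = [
        TraceEvent(1.0, "terminal", "git clone <repo>"),
        TraceEvent(3.0, "terminal", "cd repo && make"),
        TraceEvent(20.0, "editor", "edit config.yaml"),
    ]
    for step in segment_into_steps(log):
        print(step.video_span, [e.description for e in step.events])
```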
ISBN (print): 9781450349819
Most of our daily activities take place in the physical world, which inherently imposes physical constraints. In contrast, the digital world is very flexible but usually isolated from its physical counterpart. To combine these two realms, many Mixed Reality (MR) techniques have been explored at different levels of the continuum. In this work we present an integrated Mixed Reality ecosystem that allows users to incrementally transition from purely physical to purely virtual experiences within a single reality. The system builds on a conceptual framework composed of six levels; this paper presents these levels as well as the related interaction techniques.
We present EDITalk, a novel voice-based, eyes-free word processing interface. We used a Wizard-of-Oz elicitation study to investigate the viability of eyes-free word processing in the mobile context and to elicit user...