ISBN (print): 9781665484022
Agents in VR have wide applications, such as guidance. Most current agents are passive, so users must suspend their current task and make an explicit request to the agent. It is necessary to make agents open interactions more proactively and naturally, without being bothersome. We propose a virtual guidance agent that provides voice explanations at appropriate times, using gaze tracking, attention-amount estimation, and an attention-driven state machine. Attention is estimated with a time-decayed moving average of the angle between the gaze direction and the face-front direction. We implemented the method in VR and experimentally evaluated its effectiveness in a virtual guided tour.
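The attention measure described above can be sketched as follows. This is a minimal illustration assuming per-frame gaze and face-front vectors are available; the decay constant and class interface are hypothetical, as the abstract does not specify them.

```python
import numpy as np

def angle_between(gaze_dir, face_front):
    """Angle (radians) between the gaze direction and the face-front direction."""
    g = gaze_dir / np.linalg.norm(gaze_dir)
    f = face_front / np.linalg.norm(face_front)
    return np.arccos(np.clip(np.dot(g, f), -1.0, 1.0))

class AttentionEstimator:
    """Time-decayed moving average of the gaze/face-front angle.

    A small average means the user keeps looking where the head points,
    which is treated here as sustained attention on the object in front.
    The decay time constant (tau) is an illustrative value, not from the paper.
    """
    def __init__(self, tau=2.0):
        self.tau = tau      # decay time constant in seconds (assumed)
        self.avg = 0.0

    def update(self, gaze_dir, face_front, dt):
        # Exponential time decay: recent frames weigh more than old ones.
        alpha = 1.0 - np.exp(-dt / self.tau)
        self.avg = (1.0 - alpha) * self.avg + alpha * angle_between(gaze_dir, face_front)
        return self.avg
```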
ISBN (print): 9781665484022
Increased availability of cancer genomics mutation data provides researchers the opportunity to discover associations among genetic mutation patterns within the same organ as well as similar mutation patterns among different organs. However, the complexity, variety, and scale of the multi-dimensional data involved in analyzing mutations across organs pose challenges for clinicians and researchers seeking to draw such relationships. We present a prototype application that leverages multiple-coordinated views in mixed reality (MR) to enable investigation of genetic mutation patterns and the organs affected by cancer. We believe our prototype has the potential to enhance data and association discovery within and across different organs.
ISBN (print): 9781665484022
In this paper, focusing on whether a person has visually recognized a target (visual cognition, VC) in iterative visual-search tasks, we propose an efficient assistance method based on VC. In the proposed method, we first estimate the participant's VC of the target in the previous task. We then determine the target for the next task based on that VC and begin guiding the participant's attention to the next target at the moment of VC. By initiating guidance at the timing of the previous target's VC, we can direct attention earlier and achieve efficient attention guidance. Preliminary experimental results showed that VC-based assistance improves task performance.
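A rough sketch of the control flow described in the abstract, assuming a per-trial VC estimate is available; the callback names (estimate_vc, select_next_target, start_guidance) are hypothetical placeholders, not the authors' implementation.

```python
def run_trials(trials, estimate_vc, select_next_target, start_guidance):
    """Iterative visual-search assistance driven by visual cognition (VC).

    estimate_vc(trial)        -> (vc_detected: bool, vc_time: float)  # hypothetical
    select_next_target(vc)    -> id of the next target                # hypothetical
    start_guidance(target, t) -> begins attention guidance at time t  # hypothetical
    """
    for i, trial in enumerate(trials):
        # Estimate whether (and when) the user visually recognized the current target.
        vc_detected, vc_time = estimate_vc(trial)
        if i + 1 < len(trials):
            # Choose the next target based on the VC outcome.
            next_target = select_next_target(vc_detected)
            if vc_detected:
                # Start guiding attention as soon as the previous target is recognized,
                # rather than waiting for the next trial to begin.
                start_guidance(next_target, vc_time)
```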
ISBN (digital): 9781665476683
ISBN (print): 9781665476683
Linking and visualizing multiple types of entities in a DH knowledge graph generates the need to deal with multiple types of data and media modalities on both the designer and the user side. The InTaVia project develops synoptic visual representations for a multimodal historical knowledge graph that draws together transnational data about cultural objects and historical actors. In this paper we reflect on the question of how to integrate and mediate the informational and visual affordances of both kinds of cultural data with hybrid designs, and we show how a user-centered design process can help to ground the required selections and design choices in an empirical procedure.
For over a quarter century, GROUP has offered a premier yet intimate and welcoming venue for agenda-setting, diverse research. Although the traditional focus of the conference is on supporting group work, it has expanded to include research from computer-supported cooperative work, sociotechnical studies, practice-centered computing, human-computer interaction, computer-supported collaborative learning, participatory technology design, and other related areas. The work presented in this issue embodies that interdisciplinary ethos. Papers in this issue cover a wide range of topics, from human-AI collaboration, to collaboration in virtual reality, to perceptions of privacy and security, to the myriad impacts of the COVID-19 pandemic. The application domains are similarly wide ranging, from health data, to civic engagement, to educational settings, to government provision of social services. Similar to the 2021 issue, this issue also continues the tradition of design fiction at GROUP. This issue of PACM:HCI brings you papers from the planned 2022 ACM Conference on Supporting Group Work (GROUP 2022). Typically, the GROUP conference occurs every two years. However, research developments do not necessarily follow conference deadline cycles. Thus, the GROUP conference offers authors the opportunity to submit in multiple waves. The first wave of papers for this conference was published in July 2021 in Volume 5 of PACM:HCI, and papers from this current issue were first submitted in May 2021. Both of these sets of papers published as part of the planned GROUP 2022 conference were authored and reviewed during the COVID-19 pandemic. These papers represent commendable volumes of hard work and resilience, not just from the authors, but also from the reviewers, the program committee, and the conference organizers. Additionally, the pandemic forced a major change to the conference at which these papers will be presented.
ISBN (print): 9781665484022
In this work, we propose a multi-modal approach to manipulate smart home devices in a smart home environment simulated in virtual reality (VR). We determine the user's target device and the desired action by their utterance, spatial information (gestures, positions, etc.), or a combination of the two. Since the information contained in the user's utterance and the spatial information can be disjoint or complementary to each other, we process the two sources of information in parallel using our array of machine learning models. We use ensemble modeling to aggregate the results of these models and enhance the quality of our final prediction results. We present our preliminary architecture, models, and findings.
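One plausible way to aggregate the parallel speech and spatial predictions is a weighted ensemble over per-device probabilities, sketched below; the weights and model interfaces are illustrative assumptions, since the abstract does not specify the aggregation rule.

```python
def ensemble_predict(utterance_probs, spatial_probs, w_speech=0.5, w_spatial=0.5):
    """Combine per-device probabilities from a speech model and a spatial model.

    utterance_probs / spatial_probs: dict mapping device id -> probability.
    Missing entries count as 0, so the two sources can be disjoint or complementary.
    The weights are placeholders; the paper's actual ensemble may differ.
    """
    devices = set(utterance_probs) | set(spatial_probs)
    scores = {
        d: w_speech * utterance_probs.get(d, 0.0) + w_spatial * spatial_probs.get(d, 0.0)
        for d in devices
    }
    target = max(scores, key=scores.get)
    return target, scores

# Example: the utterance alone is ambiguous; pointing direction disambiguates it.
speech = {"lamp_living_room": 0.45, "lamp_bedroom": 0.45}
spatial = {"lamp_living_room": 0.9}
print(ensemble_predict(speech, spatial))
```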
ISBN (print): 9781450391566
This paper investigates the use of AI features - intelligent attributes in products - in the workplace with enterprise users who engage with AI-enabled systems through a variety of touchpoints. Oftentimes, product teams developing AI features face a siloed view of AI experiences, and this work aims to present an end-to-end understanding of the range of enterprise users and their experiences when interacting with AI in the workplace. The purpose is to identify the phases in the AI feature journey for enterprise users across their spectrum of experiences, perceptions, and technical acumen. This paper presents this journey of enterprise users working with AI features, analyzes existing challenges and opportunities within this journey, and proposes recommendations to address these areas when planning, designing, and developing AI features for business applications.
ISBN (print): 9781665496179
Light-based adversarial attacks use spatial augmented reality (SAR) techniques to fool image classifiers by altering the physical light condition with a controllable light source, e.g., a projector. Compared with physical attacks that place hand-crafted adversarial objects, projector-based attacks obviate the need to modify physical entities and can be performed transiently and dynamically by altering the projection pattern. However, subtle light perturbations are insufficient to fool image classifiers, due to the complex environment and the project-and-capture process. Thus, existing approaches focus on projecting clearly perceptible adversarial patterns, while the more interesting yet challenging goal, a stealthy projector-based attack, remains open. In this paper, for the first time, we formulate this problem as an end-to-end differentiable process and propose a Stealthy Projector-based Adversarial Attack (SPAA) solution. In SPAA, we approximate the real project-and-capture process using a deep neural network named PCNet, and we include PCNet in the optimization of projector-based attacks so that the generated adversarial projection is physically plausible. Finally, to generate adversarial projections that are both robust and stealthy, we propose an algorithm that uses minimum perturbation and adversarial confidence thresholds to alternate between optimizing the adversarial loss and the stealthiness loss. Our experimental evaluations show that SPAA clearly outperforms other methods by achieving higher attack success rates while being stealthier, for both targeted and untargeted attacks.
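A high-level sketch of the alternating optimization described above, assuming a differentiable PCNet surrogate and PyTorch-style tensors; the loss formulation, thresholds, and interfaces are illustrative placeholders, not the authors' released code.

```python
import torch

def spaa_attack(pcnet, classifier, projection, target_class,
                steps=200, lr=0.01, min_pert=0.05, adv_conf=0.9):
    """Sketch of a stealthy projector-based attack (SPAA-style alternation).

    pcnet(projection) -> simulated camera capture of the projected scene (differentiable)
    classifier(image) -> class logits
    Alternates between the adversarial loss (until the target confidence exceeds
    adv_conf) and a stealthiness loss that shrinks the projection perturbation.
    All hyperparameters and interfaces here are assumptions.
    """
    base = projection.clone().detach()
    pert = torch.zeros_like(base, requires_grad=True)     # projection perturbation
    opt = torch.optim.Adam([pert], lr=lr)

    for _ in range(steps):
        captured = pcnet(base + pert)                      # simulate project-and-capture
        probs = torch.softmax(classifier(captured), dim=-1)
        conf = probs[..., target_class].mean()

        if conf < adv_conf:
            loss = -torch.log(conf + 1e-8)                 # adversarial loss
        elif pert.abs().mean() > min_pert:
            loss = pert.pow(2).mean()                      # stealthiness loss
        else:
            break                                          # both thresholds satisfied

        opt.zero_grad()
        loss.backward()
        opt.step()

    return (base + pert).detach()
```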
ISBN (digital): 9781665453257
ISBN (print): 9781665453257
Immersive virtual reality (VR) simulations are becoming increasingly popular for basic nursing skills training in a realistic VR environment. However, there is still a gap in research that specifically focuses on pediatric nursing interventions, where nurse trainees should be able to recognize and deal appropriately with the emotional responses of the patient's parent. In this paper, we propose a novel nursing intervention analysis framework that evaluates the user's nursing performance by analyzing not only their momentary multimodal (verbal and nonverbal) behaviors, but also accumulated intervention behaviors that capture the overall nursing context. Based on the proposed framework, we developed an immersive VR-based nursing education system with an emotionally responsive virtual parent, and designed a realistic pediatric nursing intervention scenario in collaboration with a subject-matter expert (a nursing faculty member at our university). An expert-evaluation study with professional nurses was conducted to assess the potential of the developed system, in particular the effects of the emotionally responsive virtual parent, as an effective education tool, by comparing it with an emotionally static/neutral virtual parent. Several factors of the learning experience, e.g., immersion, realism, learning efficacy, and usefulness, were examined through a subjective questionnaire, and the results support the effectiveness of our system in clinical practice and educational settings. We discuss the findings and implications of our research together with the nurse participants' qualitative feedback, while also addressing possible limitations and future research directions.
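A toy sketch of how momentary and accumulated behavior scores might be combined into an overall intervention assessment; the features, weights, and scoring rule are hypothetical, since the abstract does not detail the framework's internals.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InterventionAnalyzer:
    """Toy scorer mixing momentary multimodal cues with accumulated context.

    Each observation carries a verbal score and a nonverbal score in [0, 1]
    (e.g., appropriateness of speech and of gesture/gaze toward the parent).
    The accumulated term stands in for the overall nursing context so far.
    Weights and features are illustrative placeholders.
    """
    w_momentary: float = 0.6
    w_accumulated: float = 0.4
    history: List[float] = field(default_factory=list)

    def observe(self, verbal: float, nonverbal: float) -> float:
        momentary = 0.5 * (verbal + nonverbal)     # current multimodal behavior
        self.history.append(momentary)
        accumulated = sum(self.history) / len(self.history)  # running context
        return self.w_momentary * momentary + self.w_accumulated * accumulated
```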
ISBN (print): 9781665484022
Networked musical collaboration requires near-instantaneous network transmission for successful real-time collaboration. We studied how changes in network latency affect participants' auditory and visual perception in latency detection, as well as their latency tolerance in AR. Twenty-four participants were asked to play a hand drum with a prerecorded remote musician rendered as an avatar in AR at different levels of audio-visual latency. We analyzed the participants' subjective responses from each session. Results suggest a minimum noticeable delay between 160 milliseconds (ms) and 320 ms, as well as no upper limit to audio-visual delay tolerance.