ISBN (print): 9798400703300
Personal informatics (PI) systems are widely used in various domains such as mental health to provide insights from self-tracking data for behavior change. Users are highly interested in examining relationships from the self-tracking data, but identifying causality is still considered challenging. In this study, we design DeepStress, a PI system that helps users analyze contextual factors causally related to stress. DeepStress leverages a quasi-experimental approach to address potential biases related to confounding factors. To explore the user experience of DeepStress, we conducted a user study and a follow-up diary study using participants' own self-tracking data collected for 6 weeks. Our results show that DeepStress helps users consider multiple contexts when investigating causalities and use the results to manage their stress in everyday life. We discuss design implications for causality support in PI systems.
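The abstract does not specify which quasi-experimental estimator DeepStress applies; as a hedged illustration of the general idea (not the paper's implementation), the sketch below adjusts an observed context-stress relationship for confounders using inverse propensity weighting. All column names (worked_late, stressed, sleep_hours, steps) are made up for the example.

```python
# Illustrative sketch only: one standard quasi-experimental adjustment
# (inverse propensity weighting) for observational self-tracking data.
# Column names and data layout are assumptions, not DeepStress internals.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_effect(df: pd.DataFrame, treatment: str, outcome: str, confounders: list) -> float:
    """Estimate the average effect of a binary context factor (treatment)
    on an outcome, reweighting observations by propensity scores."""
    X = df[confounders].to_numpy()
    t = df[treatment].to_numpy()
    y = df[outcome].to_numpy()

    # Propensity: probability of being exposed to the context, given confounders.
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.05, 0.95)  # trim extreme weights for stability

    w_treated = t / ps
    w_control = (1 - t) / (1 - ps)
    # Normalized (Hajek) IPW estimate of the average treatment effect.
    return np.average(y, weights=w_treated) - np.average(y, weights=w_control)

# Hypothetical usage: does "worked late" relate to reported stress,
# adjusting for sleep and step count as confounders?
# effect = ipw_effect(logs, "worked_late", "stressed", ["sleep_hours", "steps"])
```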
Increased levels of user control in learning systems are commonly cited as good AI development practice. However, the evidence on how perceived control affects trust in these systems is mixed. This study inve...
ISBN (print): 9798400703300
The proceedings contain 1056 papers. The topics discussed include: unlock life with a Chat(GPT): integrating conversational AI with large language models into everyday lives of autistic individuals; design with rural-to-urban migrant women: opportunities and challenges in designing within a precarious marriage context in South China; FLUID-IoT: flexible and granular access control in shared IoT environments via UI-level control distribution; was it real or virtual? confirming the occurrence and explaining causes of memory source confusion between reality and virtual reality; investigating contextual notifications to drive self-monitoring in mHealth apps for weight maintenance; charting ethical tensions in multispecies technology research through beneficiary-epistemology space; and odds and insights: decision quality in exploratory data analysis under uncertainty.
This paper describes an interface that enables experts to communicate with a virtual robot in a simulated environment via natural language, and to visualize the robot's knowledge representation for them for inspec...
ISBN (print): 9798400703300
Accountable use of AI systems in high-stakes settings relies on making systems contestable. In this paper we study efforts to contest AI systems in practice by studying how public defenders scrutinize AI in court. We present findings from interviews with 17 people in the U.S. public defense community to understand their perceptions of and experiences scrutinizing computational forensic software (CFS) - automated decision systems that the government uses to convict and incarcerate, such as facial recognition, gunshot detection, and probabilistic genotyping tools. We find that our participants faced challenges assessing and contesting CFS reliability due to difficulties (a) navigating how CFS is developed and used, (b) overcoming judges' and jurors' non-critical perceptions of CFS, and (c) gathering CFS expertise. To conclude, we provide recommendations that center the technical, social, and institutional context to better position interventions such as performance evaluations to support contestability in practice.
ISBN (print): 9781450394215
Artificial intelligence (AI) presents new challenges for the user experience (UX) of products and services. Recently, practitioner-facing resources and design guidelines have become available to ease some of these challenges. However, little research has investigated if and how these guidelines are used, and how they impact practice. In this paper, we investigated how industry practitioners use the People + AI Guidebook. We conducted interviews with 31 practitioners (i.e., designers, product managers) to understand how they use human-AI guidelines when designing AI-enabled products. Our findings revealed that practitioners use the guidebook not only for addressing AI's design challenges, but also for education, cross-functional communication, and for developing internal resources. We uncovered that practitioners desire more support for early phase ideation and problem formulation to avoid AI product failures. We discuss the implications for future resources aiming to help practitioners in designing AI products.
ISBN (print): 9781450394215
Editing (e.g., editing conceptual diagrams) is a typical office task that requires numerous tedious GUI operations, resulting in poor interaction efficiency and user experience, especially on mobile devices. In this paper, we present a new type of human-computer collaborative editing tool (CET) that enables accurate and efficient editing with little interaction effort. CET divides the task into two parts, and the human and the computer focus on their respective specialties: the human describes high-level editing goals with multimodal commands, while the computer calculates, recommends, and performs detailed operations. We conducted a formative study (N = 16) to determine the concrete task division and implemented the tool on Android devices for the specific tasks of editing concept diagrams. The user study (N = 24 + 20) showed that it increased diagram editing speed by 32.75% compared with existing state-of-the-art commercial tools and led to better editing results and user experience.
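The abstract describes the human-computer division of labor only at a high level. The following sketch is purely hypothetical (none of the names or structures come from the paper) and just illustrates that shape: the human issues a high-level command, the computer enumerates concrete candidate operations, and the human confirms one.

```python
# Hypothetical sketch of the human-computer split described for CET:
# the human states a high-level goal, the computer proposes concrete
# operations to confirm. All names here are illustrative.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EditOp:
    description: str           # summary shown to the user for confirmation
    apply: Callable[[], None]  # performs the detailed GUI edits

def propose_ops(goal: str, selected_nodes: List[str]) -> List[EditOp]:
    """Map a high-level (here text-only) command to candidate operations."""
    candidates: List[EditOp] = []
    if "align" in goal.lower():
        candidates.append(EditOp(
            description=f"Align {len(selected_nodes)} nodes horizontally",
            apply=lambda: print("aligning", selected_nodes),
        ))
    if "connect" in goal.lower() and len(selected_nodes) >= 2:
        candidates.append(EditOp(
            description=f"Connect {selected_nodes[0]} -> {selected_nodes[1]}",
            apply=lambda: print("connecting", selected_nodes[:2]),
        ))
    return candidates

# The user confirms a recommendation instead of performing every
# low-level GUI operation manually.
for op in propose_ops("align these nodes, then connect them", ["A", "B", "C"]):
    print(op.description)
```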
Despite the proliferation of research on how people engage with and experience algorithmic systems, the materiality and physicality of these experiences is often overlooked. We tend to forget about bodies. The Embodyi...
ISBN (print): 9798400703300
Algorithmic decision-making is increasingly being adopted across public higher education. The expansion of data-driven practices by post-secondary institutions has occurred in parallel with the adoption of New Public Management approaches by neoliberal administrations. In this study, we conduct a qualitative analysis of an in-depth ethnographic case study of data and algorithms in use at a public college in Ontario, Canada. We identify the data, algorithms, and outcomes in use at the college. We assess how the college's processes and relationships support those outcomes and the different stakeholders' perceptions of the college's data-driven systems. In addition, we find that the growing reliance on algorithmic decisions leads to increased student surveillance, exacerbation of existing inequities, and the automation of the faculty-student relationship. Finally, we identify a cycle of increased institutional power perpetuated by algorithmic decision-making, and driven by a push towards financial sustainability.
ISBN (print): 9781450394215
We propose a conceptual perspective on prompts for Large Language Models (LLMs) that distinguishes between (1) diegetic prompts (part of the narrative, e.g. "Once upon a time, I saw a fox..."), and (2) non-diegetic prompts (external, e.g. "Write about the adventures of the fox."). With this lens, we study how 129 crowd workers on Prolific write short texts with different user interfaces (1 vs 3 suggestions, with/without non-diegetic prompts; implemented with GPT-3): When the interface offered multiple suggestions and provided an option for non-diegetic prompting, participants preferred choosing from multiple suggestions over controlling them via non-diegetic prompts. When participants provided non-diegetic prompts, it was to ask for inspiration, topics, or facts. Single suggestions in particular were guided both with diegetic and non-diegetic information. This work informs human-AI interaction with generative models by revealing that (1) writing non-diegetic prompts requires effort, (2) people combine diegetic and non-diegetic prompting, and (3) they use their draft (i.e. diegetic information) and suggestion timing to strategically guide LLMs.
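To make the diegetic/non-diegetic distinction concrete, the sketch below contrasts the two prompt types using the abstract's own examples; complete() is a placeholder for any text-completion call, not the study's actual GPT-3 setup.

```python
# Illustrative contrast of the two prompt types from the paper's lens.
# `complete` is a placeholder for any text-completion model call; it is
# not the study's actual GPT-3 integration.
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your own language-model call here")

draft = "Once upon a time, I saw a fox slip under the garden fence."

# (1) Diegetic prompting: the draft itself is the prompt, and the model
#     simply continues the narrative.
diegetic_prompt = draft

# (2) Non-diegetic prompting: an external instruction steers the model,
#     kept separate from the story text.
instruction = "Write about the adventures of the fox."
non_diegetic_prompt = f"{instruction}\n\nStory so far:\n{draft}\n\nContinue:"

# The study's interfaces combined both: the draft (diegetic information)
# plus optional instructions, shown as one or three suggestions.
# suggestion = complete(non_diegetic_prompt)
```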