ISBN: 9798400704505 (print)
A large body of research has attempted to ensure that algorithmic systems adhere to notions of fairness and transparency. Increasingly, researchers have highlighted that mitigating algorithmic harms requires explicitly taking power structures into account. Those with power over algorithmic systems often fail to sufficiently address algorithmic harms and rarely consult those directly harmed by algorithmic systems. Left to their own devices, people respond to algorithmic harms they encounter in a wide variety of ways, but we lack broader, overarching understandings of these responses. In this work, we synthesize documented, historical cases into a taxonomy of responses "from below" to algorithmic harm. Our taxonomy connects different types of responses to existing theorizations of power from fields including anthropology, human-computer interaction, and communication, centering how people employ, shift, and build power in their responses to algorithmic harm. Based on our taxonomy, we highlight an opportunity space for the FAccT community to engage with and support such action from below.
Power and information asymmetries between people and digital technology companies are further legitimized through contractual agreements that fail to provide meaningful consent and contestability. In particular, the Terms-of-Service (ToS) agreement is a contract of adhesion in which companies effectively set the terms and conditions of the contract. Whereas ToS reinforce existing structural inequalities, we seek to enable an intersectional accountability mechanism grounded in the practice of algorithmic reparation. Building on existing critiques of ToS in the context of algorithmic systems, we return to the roots of contract theory by recentering notions of agency and mutual assent. We develop a multipronged intervention that we frame as the Terms-we-Serve-with (TwSw) social, computational, and legal framework. The TwSw is a new social imaginary centered on: (1) co-constitution of user agreements, through participatory mechanisms; (2) addressing friction, leveraging the fields of design justice and critical design in the production and resolution of conflict; (3) enabling refusal mechanisms, reflecting the need for a sufficient level of human oversight and agency, including opting out; (4) complaint and algorithmic harms reporting, through a feminist studies lens and open-sourced computational tools; and (5) disclosure-centered mediation, to disclose, acknowledge, and take responsibility for harm, drawing on the field of medical law. We further inform our analysis through an exploratory design workshop with a South African gender-based violence reporting AI startup. We derive practical strategies for communities, technologists, and policy-makers to leverage a relational approach to algorithmic reparation, and we propose that a radical restructuring of the "take-it-or-leave-it" ToS agreement is needed.
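The abstract's fourth prong mentions open-sourced computational tools for complaint and algorithmic harms reporting. Purely as an illustration of what such a tool's data model might involve (the class, field names, and function below are assumptions made for this sketch, not artifacts of the TwSw framework), a minimal harm-report record supporting the reporting and disclosure-centered mediation prongs could look like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Illustrative sketch only: field names are assumptions, not part of the TwSw proposal.
@dataclass
class HarmReport:
    """A single user-submitted report of algorithmic harm."""
    reporter_id: str                 # pseudonymous identifier chosen by the reporter
    system: str                      # which algorithmic system the report concerns
    description: str                 # the harm as experienced, in the reporter's own words
    requested_remedy: str            # what the reporter asks for (e.g., opt-out, correction)
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    acknowledged: bool = False       # has the operator disclosed/acknowledged the harm?
    resolution_note: Optional[str] = None  # outcome of disclosure-centered mediation

def acknowledge(report: HarmReport, note: str) -> HarmReport:
    """Record that the operator has acknowledged the harm and how it was addressed."""
    report.acknowledged = True
    report.resolution_note = note
    return report

if __name__ == "__main__":
    r = HarmReport(
        reporter_id="user-042",
        system="content-moderation-model",
        description="My post about a support group was repeatedly flagged as violating policy.",
        requested_remedy="Restore the post and explain which signal triggered the flag.",
    )
    acknowledge(r, "Post restored; flagging rule reviewed with the affected community.")
    print(r)
```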
ISBN: 9781450394215 (print)
Algorithmic systems have infiltrated many aspects of our society, from the mundane to the high-stakes, and can lead to algorithmic harms, both representational and allocative. In this paper, we consider what stigma theory illuminates about the mechanisms leading to algorithmic harms in algorithmic assemblages. We apply the four stigma elements (i.e., labeling, stereotyping, separation, and status loss/discrimination) outlined in sociological stigma theories to algorithmic assemblages in two contexts: 1) "risk prediction" algorithms in higher education, and 2) suicidal expression and ideation detection on social media. We contribute the novel theoretical conceptualization of algorithmic stigmatization as a sociotechnical mechanism that leads to a unique kind of algorithmic harm: algorithmic stigma. Theorizing algorithmic stigmatization aids in identifying theoretically driven points of intervention to mitigate and/or repair algorithmic stigma. While prior theorizations reveal how stigma governs socially and spatially, this work illustrates how stigma governs sociotechnically.
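One way to make the four-element lens operational is to structure audit notes on an algorithmic assemblage around those elements. The sketch below is only an illustration of that idea; the class and the example entries are our own assumptions, not instruments or findings from the paper:

```python
from dataclasses import dataclass

# Illustrative note-taking structure organized around the four stigma elements
# the abstract names: labeling, stereotyping, separation, status loss/discrimination.
@dataclass
class StigmaAudit:
    assemblage: str
    labeling: str                    # how the system assigns marks/labels to people
    stereotyping: str                # which group-level assumptions the label carries
    separation: str                  # how labeled people are set apart from others
    status_loss_discrimination: str  # material or status consequences of the label

    def summary(self) -> str:
        return (
            f"{self.assemblage}\n"
            f"  labeling: {self.labeling}\n"
            f"  stereotyping: {self.stereotyping}\n"
            f"  separation: {self.separation}\n"
            f"  status loss/discrimination: {self.status_loss_discrimination}"
        )

if __name__ == "__main__":
    # Hypothetical entries for a higher-education "risk prediction" setting.
    audit = StigmaAudit(
        assemblage="higher-education risk prediction",
        labeling="students scored and flagged as 'at risk'",
        stereotyping="flag read as signalling low ability or motivation",
        separation="flagged students routed into separate advising tracks",
        status_loss_discrimination="lowered expectations and fewer opportunities offered",
    )
    print(audit.summary())
```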
This article discusses the opportunities and costs of AI in behavioural science, with particular reference to consumer welfare. We argue that, because of its pattern-detection capabilities, modern AI will be able to identify (1) new biases in consumer behaviour and (2) known biases in novel situations in which consumers find themselves. AI will also allow behavioural interventions to be personalised and contextualised, and thus produce significant benefits for consumers. Finally, AI can help behavioural scientists to "see the system" by enabling the creation of more complex and dynamic models of consumer behaviour. While these opportunities will significantly advance behavioural science and offer great promise to improve consumer outcomes, we highlight several costs of using AI. We focus on some important environmental, social, and economic costs that are relevant to behavioural science and its application. For consumers, some of those costs involve privacy; others involve manipulation of choices.
ISBN: 9798400710315 (print)
AI-based writing assistants are being integrated into a range of products and platforms. While direct users, who use AI writing assistants for various writing tasks, often get to decide how they integrate these systems into their writing process, the final outcomes (e.g., the writing artifact) will be influenced by the users' understanding of how these systems work, and by whether the generated output (e.g., suggestions) is accurate and matches their expectations. In this write-up, we examine the types of controls users believe they have, based on their understanding of the system, when using AI-based writing assistants (e.g., getting personalized suggestions), and whether they can effectively use these controls to minimize possible negative outcomes (e.g., poor writing quality). To do so, we examine users' mental models of writing assistants and how these models affect users' ability to intervene appropriately. We argue that more work is needed to examine the connection between different mental models and a user's ability to control these systems to minimize potential negative outcomes. To this end, we discuss an illustrative case study design in which participants are asked to use an AI-based writing assistant to write a cover letter. We discuss how the results from this study could help us understand the impact that different mental models have on user reliance on AI-based writing assistants.
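One simple way to operationalize reliance in a case study like the cover-letter task is to log each assistant suggestion together with whether the participant accepted, edited, or rejected it. The sketch below is an assumption about how such instrumentation might look, not the authors' study apparatus:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class Action(Enum):
    ACCEPTED = "accepted"   # suggestion inserted verbatim
    EDITED = "edited"       # suggestion inserted, then modified by the participant
    REJECTED = "rejected"   # suggestion dismissed

@dataclass
class SuggestionEvent:
    participant_id: str
    suggestion: str
    action: Action

def reliance_rate(events: List[SuggestionEvent]) -> float:
    """Fraction of suggestions taken up (accepted or edited) rather than rejected."""
    if not events:
        return 0.0
    taken = sum(e.action in (Action.ACCEPTED, Action.EDITED) for e in events)
    return taken / len(events)

if __name__ == "__main__":
    log = [
        SuggestionEvent("p01", "I am excited to apply for...", Action.ACCEPTED),
        SuggestionEvent("p01", "My five years of experience...", Action.EDITED),
        SuggestionEvent("p01", "I am a passionate team player.", Action.REJECTED),
    ]
    print(f"reliance rate: {reliance_rate(log):.2f}")  # 0.67
```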
ISBN: 9781450392785 (print)
Despite increasing reliance on personalization in digital platforms, many algorithms that curate content or information for users have been met with resistance. When users feel dissatisfied or harmed by recommendations, they can come to hate, or feel negatively towards, these personalized systems. Algorithmic hate detrimentally impacts both users and the system, can result in various forms of algorithmic harm, and, in extreme cases, can lead to public protests against "the algorithm" in question. In this work, we summarize some of the most common causes of algorithmic hate and their negative consequences through various case studies of personalized recommender systems. We explore promising future directions for the RecSys research community that could help alleviate algorithmic hate and improve the relationship between recommender systems and their users.
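As a toy illustration of one direction such work often points towards, giving users more direct control over personalization, the sketch below re-ranks recommendations by down-weighting items tagged with topics a user has explicitly marked as "not interested". The function, tags, and penalty scheme are our own assumptions, not proposals from the paper:

```python
from typing import Dict, List, Set

def rerank(scores: Dict[str, float],
           item_topics: Dict[str, Set[str]],
           muted_topics: Set[str],
           penalty: float = 0.5) -> List[str]:
    """Order items by score, penalizing any item tagged with a topic the user has muted."""
    def adjusted(item: str) -> float:
        s = scores[item]
        if item_topics.get(item, set()) & muted_topics:
            s *= penalty  # down-weight rather than hard-remove, so the user stays in control
        return s
    return sorted(scores, key=adjusted, reverse=True)

if __name__ == "__main__":
    scores = {"diet-video": 0.9, "cooking-video": 0.7, "music-video": 0.6}
    topics = {"diet-video": {"weight-loss"}, "cooking-video": {"food"}, "music-video": {"music"}}
    print(rerank(scores, topics, muted_topics={"weight-loss"}))
    # ['cooking-video', 'music-video', 'diet-video']
```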
ISBN: 9781450391573 (print)
Recent work in HCI suggests that users can be powerful in surfacing harmful algorithmic behaviors that formal auditing approaches fail to detect. However, it is not well understood how users manage to be so effective, nor how we might support more effective user-driven auditing. To investigate, we conducted a series of think-aloud interviews, diary studies, and workshops, exploring how users find and make sense of harmful behaviors in algorithmic systems, both individually and collectively. Based on our findings, we present a process model capturing the dynamics of and influences on users' search and sensemaking behaviors. We find that 1) users' search strategies and interpretations are heavily guided by their personal experiences with and exposures to societal bias; and 2) collective sensemaking amongst multiple users is invaluable in user-driven algorithm audits. We offer directions for the design of future methods and tools that can better support user-driven auditing.
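Collective sensemaking of this kind depends on pooling many individual observations. As a minimal illustration only (not tooling described in the study), the sketch below groups user-submitted reports of harmful behavior by the tags users attach, so that recurring patterns become visible across reports:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def group_reports(reports: List[Tuple[str, List[str]]]) -> Dict[str, List[str]]:
    """Group free-text reports by the tags users attached, so recurring patterns
    of harmful behavior surface across many individual observations."""
    grouped: Dict[str, List[str]] = defaultdict(list)
    for text, tags in reports:
        for tag in tags:
            grouped[tag].append(text)
    return dict(grouped)

if __name__ == "__main__":
    # Hypothetical user-submitted observations, each with user-chosen tags.
    reports = [
        ("Search autocompletes a stereotype for my dialect.", ["language", "stereotyping"]),
        ("Image crop consistently cuts out darker-skinned faces.", ["images", "skin-tone"]),
        ("Autocomplete suggests slurs for a minority group.", ["language", "stereotyping"]),
    ]
    for tag, texts in sorted(group_reports(reports).items()):
        print(f"{tag}: {len(texts)} report(s)")
```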