Advances in artificial intelligence are increasingly leading to the automation and augmentation of decision processes in work contexts. Although research originally focused on decision-makers, the perspectives of those targeted by automated or augmented decisions (whom we call "second parties") and of parties who observe the effects of such decisions (whom we call "third parties") are now attracting growing attention. We review the expanding literature investigating reactions to automated and augmented decision-making by second and third parties. Specifically, we explore the attitude (e.g., evaluations of trustworthiness), perception (e.g., fairness perceptions), and behavior (e.g., reverse engineering of automated decision processes) outcomes of second and third parties. Additionally, we explore how characteristics of the a) decision-making process, b) system, c) second and third party, d) task, and e) outputs and outcomes moderate these effects, and provide recommendations for future research. Our review summarizes the state of the literature in these domains, concluding a) that reactions to automated decisions differ across situations in which some human decision control remains (i.e., augmentation contexts), b) that system design choices (e.g., transparency) are important but under-researched, and c) that the generalizability of findings might suffer from excessive reliance on specific research methodologies (e.g., vignette studies).
The article discusses the human rights implications of algorithmic decision-making in the social welfare sphere. It does so against the background of the 2020 judgment of the District Court of The Hague in a case challenging the Dutch government's use of System Risk Indication, an algorithm designed to identify potential social welfare fraud. Digital welfare state initiatives are likely to fall short of meeting basic requirements of legality and of protecting against arbitrariness. Moreover, the intentional opacity surrounding the implementation of algorithms in the public sector not only hampers the effective exercise of human rights but also undermines proper judicial oversight. The analysis unpacks the relevance and complementarity of three legal/regulatory frameworks governing algorithmic systems: data protection, human rights law, and algorithmic accountability. Notwithstanding these frameworks' invaluable contribution, the discussion casts doubt on whether they are well suited to address the legal challenges pertaining to the discriminatory effects of the use of algorithmic systems.
There is an increasing body of research on algorithmic management (AM), but the field lacks measurement tools to capture workers' experiences of this phenomenon. Based on existing literature, we developed and validated the algorithmic management questionnaire (AMQ) to measure the perceptions of workers regarding their level of exposure to AM. Across three samples (overall n = 1332 gig workers), we show the content, factorial, discriminant, convergent, and predictive validity of the scale. The final 20-item scale assesses workers' perceived level of exposure to algorithmic: monitoring, goal setting, scheduling, performance rating, and compensation. These dimensions formed a higher-order construct assessing overall exposure to algorithmic management, which was found to be, as expected, negatively related to the work characteristics of job autonomy and job complexity and, indirectly, to work engagement. Supplementary analyses revealed that perceptions of exposure to AM reflect the objective presence of AM dimensions beyond individual variations in exposure. Overall, the results suggest the suitability of the AMQ to assess workers' perceived exposure to algorithmic management, which paves the way for further research on the impacts of these rapidly proliferating systems.
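To make the scale's structure concrete, below is a minimal scoring sketch in Python. The item-to-dimension mapping, column names, and aggregation are illustrative assumptions rather than the published AMQ key; the sketch only shows how five subscale means can roll up into an overall exposure score, mirroring the higher-order construct described above.

```python
# Minimal sketch of scoring a five-dimension AM-exposure questionnaire.
# The item assignments below are hypothetical, not the published AMQ key.
import pandas as pd

# Hypothetical mapping of 20 items (columns q1..q20) to the five dimensions.
DIMENSIONS = {
    "monitoring":         ["q1", "q2", "q3", "q4"],
    "goal_setting":       ["q5", "q6", "q7", "q8"],
    "scheduling":         ["q9", "q10", "q11", "q12"],
    "performance_rating": ["q13", "q14", "q15", "q16"],
    "compensation":       ["q17", "q18", "q19", "q20"],
}

def score_amq(responses: pd.DataFrame) -> pd.DataFrame:
    """Compute per-dimension means and an overall exposure score
    (the mean of the five dimension scores, echoing the higher-order factor)."""
    scores = pd.DataFrame(
        {dim: responses[items].mean(axis=1) for dim, items in DIMENSIONS.items()}
    )
    scores["overall_exposure"] = scores.mean(axis=1)
    return scores
```

In practice, a confirmatory factor analysis with a higher-order factor, rather than simple averaging, would be used to validate such a structure; the sketch only illustrates the intended aggregation.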
Algorithms are now routinely used in decision-making; they are potent components in decisions that affect the lives of individuals and the activities of public and private institutions. Although use of algorithms has many benefits, a number of problems have been identified with their use in certain domains, most notably in domains where safety and fairness are important. Awareness of these problems has generated public discourse calling for algorithmic accountability. However, the current discourse focuses largely on algorithms and their opacity. I argue that this reflects a narrow and inadequate understanding of accountability. I sketch an account of accountability that takes accountability to be a social practice constituted by actors, forums, shared beliefs and norms, performativity, and sanctions, and aimed at putting constraints on the exercise of power. On this account, algorithmic accountability is not yet constituted; it is in the making. The account brings to light a set of questions that must be addressed to establish it.
Points of Interest: Computer-based tools, or algorithms, are increasingly being used to assess who is eligible for public disability supports. This research investigates disabled people's concerns about the use of these tools. It uses a case study from Australia, where algorithmic assessments were recently proposed and trialled by the government. Concerns about technology's impact on resource allocation in disability support regimes include insensitivity to individual lived realities and the reduction of disability to a score of bodily function. To avoid harm to disabled people, the research recommends an approach to assessment that better addresses contextual factors and lived experiences of disability. This article examines the impact of algorithmic systems on disabled people's interactions with social services, focusing on a case study of algorithmic decision-making in Australia's National Disability Insurance Scheme (NDIS). Through interviews and document analysis, we explore future visions and concerns related to increased automation in NDIS planning and eligibility determinations. We show that while individuals may not fully comprehend the inner workings of algorithmic systems, they develop their own understandings and judgments, shedding light on how power operates through the datafication of disability. The article highlights the significance of addressing epistemic justice concerns, urging a reevaluation of dominant modes of understanding and assessing disability through algorithmic categorisation, while advocating for more nuanced approaches that acknowledge disability's embeddedness in social relations. The findings have implications for the future use of algorithmic decision-making in the NDIS and disability welfare provision more broadly.
This study explores how government-adopted audit data analytic tools promote the abuse of power by auditors, enabling politically sensitive processes that encourage industry-wide normalization of behavior. In an audit setting, we investigate how a governmental organization enables algorithmic decision-making to alter power relationships and effect organizational and industry-wide change. While prior research has identified discriminatory threats emanating from the deployment of algorithmic decision-making, the effects of algorithmic decision-making on inherently imbalanced power relationships have received scant attention. Our results provide empirical evidence of how systemic and episodic power relationships strengthen each other, thereby enabling the governmental organization to effect social change that might be too politically prohibitive to enact directly. Overall, the results suggest that there are potentially negative effects caused by the use of algorithmic decision-making and the resulting power shifts, and that these effects create a different view of the level of purported success attained through auditor use of data analytics.
Computer algorithms, the logic and code that power automated decision-making programs, increasingly dominate many aspects of modern society. There are already many examples of institutional biases - including ideological bias, racism, sexism, and ableism - being solidified in algorithms, causing harm to already underprivileged populations. This article explores library-specific and society-wide examples, as well as efforts to prevent the implementation of these biases in the future.
Over the past decade, artificial intelligence (AI) technologies have transformed numerous facets of our lives. In this article, we summarize key themes in emerging AI research in behavioral science. In doing so, we aim to unravel the multifaceted impacts of AI on people's emotions, cognition, and behaviors, offering nuanced insights into this rapidly evolving landscape. The article concludes by proposing promising avenues for future research, outlining areas for further exploration and methodological approaches to consider.
Big data and data science transform organizational decision-making. We increasingly defer decisions to algorithms because machines have earned a reputation of outperforming us. As algorithms become embedded within organizations, they become more influential and increasingly opaque. Those who create algorithms may make arbitrary decisions in all stages of the 'data value chain', yet these subjectivities are obscured from view. Algorithms come to reflect the biases of their creators, can reinforce established ways of thinking, and may favour some political orientations over others. This is a cause for concern and calls for more transparency in the development, implementation, and use of algorithms in public- and private-sector organizations. We argue that one elementary - yet key - question remains largely undiscussed. If transparency is a primary concern, then to whom should algorithms be transparent? We consider algorithms as socio-technical assemblages and conclude that without a critical audience, algorithms cannot be held accountable.
Research suggests that people prefer human over algorithmic decision-makers at work. Most of these studies, however, use hypothetical scenarios, and it is unclear whether such results replicate in more realistic contexts. We conducted two between-subjects studies (N = 270; N = 183) in which the decision-maker (human vs. algorithmic, Studies 1 and 2), explanations regarding the decision process (yes vs. no, Studies 1 and 2), and the type of selection test (requiring human vs. mechanical skills for evaluation, Study 2) were manipulated. While Study 1 was based on a hypothetical scenario, participants in pre-registered Study 2 volunteered to participate in a qualifying session for an attractively remunerated product test, thus competing for real incentives. In both studies, participants in the human condition reported higher levels of trust and acceptance. Providing explanations also positively influenced trust, acceptance, and perceived transparency in Study 1, while it did not exert any effect in Study 2. The type of selection test affected fairness ratings, with higher ratings for tests requiring human vs. mechanical skills for evaluation. Results show that algorithmic decision-making in personnel selection can negatively impact trust and acceptance both in studies with hypothetical scenarios and in studies with real incentives.
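As a rough illustration of how such a 2 x 2 between-subjects design can be analyzed, the sketch below fits a two-way ANOVA on simulated trust ratings. The variable names, cell sizes, and effect sizes are assumptions for demonstration only and do not reproduce the studies' actual data or analyses.

```python
# Hedged sketch: two-way ANOVA for a 2x2 between-subjects design like Study 1
# (decision-maker: human vs. algorithm; explanation: yes vs. no).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_per_cell = 68  # roughly 270 participants across four cells (assumed)

df = pd.DataFrame({
    "decision_maker": np.repeat(["human", "algorithm"], 2 * n_per_cell),
    "explanation": np.tile(np.repeat(["yes", "no"], n_per_cell), 2),
})
# Simulated trust ratings with an assumed main effect of decision-maker
# (higher trust in the human condition, as the studies report).
df["trust"] = rng.normal(3.0, 1.0, len(df)) + (df["decision_maker"] == "human") * 0.5

# Fit trust ~ decision_maker * explanation and report both main effects
# and the interaction in a Type II ANOVA table.
model = smf.ols("trust ~ C(decision_maker) * C(explanation)", data=df).fit()
print(anova_lm(model, typ=2))
```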