While algorithmic decision-making (ADM) is projected to increase exponentially in the coming decades, the academic debate on whether people are ready to accept, trust, and use ADM as opposed to human decision-making is ongoing. The current research aims to reconcile conflicting findings on 'algorithmic aversion' in the literature. It does so by investigating algorithmic aversion while controlling for two important characteristics that are often associated with ADM: increased benefits (monetary and accuracy) and decreased user control. Across three high-powered (total N = 1192), preregistered 2 (agent: algorithm/human) × 2 (benefits: high/low) × 2 (control: user control/no control) between-subjects experiments spanning two domains (finance and dating), the results were consistent: there is little evidence for a default aversion to algorithms in favor of human decision makers. Instead, users accept or reject decisions and decisional agents based on their predicted benefits and the ability to exercise control over the decision.
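To make the reported design concrete, here is a minimal, hypothetical sketch of how such a 2 × 2 × 2 between-subjects experiment could be analyzed in Python; the data are simulated, and all variable names, cell means, and effect sizes are illustrative assumptions rather than the authors' actual analysis.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1192  # total sample size reported in the abstract

# Randomly assign participants to the eight between-subjects cells.
df = pd.DataFrame({
    "agent":    rng.choice(["algorithm", "human"], n),
    "benefits": rng.choice(["high", "low"], n),
    "control":  rng.choice(["user_control", "no_control"], n),
})

# Simulated acceptance ratings: main effects of benefits and control,
# no effect of agent (mirroring the reported absence of default aversion).
df["acceptance"] = (
    4.0
    + 0.8 * (df["benefits"] == "high")
    + 0.6 * (df["control"] == "user_control")
    + rng.normal(0, 1, n)
)

# Full-factorial ANOVA over the three manipulated factors.
model = smf.ols("acceptance ~ agent * benefits * control", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

Under this simulation, the benefits and control terms come out significant while the agent term does not, which is the pattern of results the abstract describes.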
AI can make mistakes and cause unfavorable consequences. It is important to know how people react to such AI-driven negative consequences and subsequently evaluate the fairness of AI's decisions. This study theorizes and empirically tests two psychological mechanisms that explain this process: (a) heuristic expectations of AI's consistent performance (automation bias) and the subsequent frustration of unfulfilled expectations (algorithmic aversion), and (b) heuristic perceptions of AI's controllability over negative results. Our findings from two experimental studies reveal that these two mechanisms work in opposite directions. First, participants tend to display more sensitive responses to AI's inconsistent performance and thus make more punitive assessments of AI's decision fairness, compared to their responses to human experts. Second, because participants perceive that AI has less control over unfavorable outcomes than human experts do, they are more tolerant in their assessments of AI.

Lay Summary: As artificial intelligence (AI) increasingly makes important decisions that used to be made by human experts, it is important to study how people react to undesirable outcomes caused by AI-made decisions. This study identifies two critical psychological processes that explain how people evaluate AI-driven failures. The first mechanism is that people have high expectations of AI's consistent performance (called automation bias) and are then frustrated by unsatisfactory outcomes (called algorithmic aversion). The second mechanism is that people perceive that AI has less control over negative outcomes than humans do, which in turn reduces negative evaluations of AI. To test these two ideas, we conducted two online experiments in which participants were exposed to several scenarios where they experienced undesirable outcomes from either AI or human experts.
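A hypothetical way to make the two opposing mechanisms concrete is a simple simulation in which fairness judgments are pulled down by violated performance expectations and softened by lower perceived controllability; all coefficients, values, and variable names below are illustrative assumptions, not the study's model estimates.

import numpy as np

rng = np.random.default_rng(2)
n = 200  # assumed participants per agent condition

def simulate_fairness(agent: str) -> np.ndarray:
    # Mechanism (a): AI carries higher consistency expectations, so the same
    # failure produces a larger expectancy violation and a bigger penalty.
    expectancy_violation = 1.0 if agent == "ai" else 0.5
    # Mechanism (b): AI is seen as having less control over bad outcomes,
    # which reduces blame relative to a human expert.
    perceived_control = 0.4 if agent == "ai" else 0.9
    baseline = 5.0
    return (baseline
            - 1.2 * expectancy_violation   # punitive path
            - 0.8 * perceived_control      # blame path
            + rng.normal(0, 1, n))

print(f"AI mean fairness:    {simulate_fairness('ai').mean():.2f}")
print(f"Human mean fairness: {simulate_fairness('human').mean():.2f}")

Because the two paths push in opposite directions, the net fairness gap between agents depends on their relative strengths, which is what the pair of experiments is designed to tease apart.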
The human resource management (HRM) function has witnessed the rapid integration of algorithms into incumbent processes; however, significant employee resistance and aversion to algorithmic decision-making have also been reported. Research on algorithmic HRM practices indicates an underlying duality in HRM professionals' perceptual responses to this technology. We seek to understand how HRM professionals experience algorithmic HRM use and determine whether there are bright sides to its organizational integration. We undertake a qualitative study based on written responses to open-ended questions from 58 respondents in the United Kingdom and the United States of America. The data were thematically analyzed using grounded theory, which revealed four themes representing HRM professionals' overarching perspectives on why algorithmic HRM precipitates aversion or appreciation. The first two themes highlight HRM professionals' perceived subjective uncertainty regarding algorithmic HRM and its perceived negative effects on the organization. The third theme acknowledges the positive effects of algorithmic HRM, and the final theme covers three critical coping strategies (embrace, avoid, and collaborate) that HRM professionals adopt to counteract their experienced fears. Our findings suggest that HRM professionals adopt a cautiously fearful rather than a wholly averse outlook towards algorithmic HRM, wherein aversion and appreciation appear to emerge simultaneously. We contend that this duality of perceptual responses to algorithmic HRM may be a precursor to a harmonious collaboration between humans and algorithms in the HRM domain, contingent on appropriate levels of oversight and governance. Implications for theory and managerial practice are also discussed.
This study investigates people's perceptions of AI decision-making as compared to human decision-making within the job application context. It takes into account both favorable and unfavorable outcomes, employing a 2 × 2 experimental design (decision-making agent: AI algorithm vs. human; outcome: favorable vs. unfavorable). Upon evaluating a job seeker's suitability for a position, participants viewed algorithmic decisions as fairer, more competent, more trustworthy, and more useful than those made by humans. Interestingly, when a candidate was deemed unsuitable for hiring, people reacted more negatively to the verdict given by a human than to the same judgment offered by AI. Moreover, participants credited algorithmic decisions with greater sensitivity to both quantitative and qualitative qualifications, indicating algorithmic appreciation. Our findings shed light on the psychological basis of perceptions surrounding algorithmic decision-making (ADM) and the responses to the decisions rendered by ADM systems.
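As an illustration of the key contrast in this 2 × 2 design, the sketch below simulates fairness ratings within the unfavorable-outcome condition and compares the two agents; the cell means, sample sizes, and variable names are invented for illustration and do not come from the study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_cell = 50  # assumed number of participants per cell

# Simulate the reported pattern: within unfavorable outcomes, decisions made
# by a human are rated less fair than the same decisions made by an AI.
fairness_ai_unfavorable = rng.normal(4.2, 1.0, n_cell)
fairness_human_unfavorable = rng.normal(3.5, 1.0, n_cell)

t_stat, p_value = stats.ttest_ind(fairness_ai_unfavorable, fairness_human_unfavorable)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

In a full analysis, this simple-effects comparison would follow up a significant agent × outcome interaction rather than stand alone.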
ISBN (print): 9781733632546
We propose a 2 × 2 experiment that investigates how humans respond to recommendations depending on the difficulty of the task and the source of the recommendation. The two types of information sources that will provide recommendations in our experiment are algorithms and human crowds. We contribute to the burgeoning discourse on algorithmic appreciation by focusing on crowd counting, a task in which effort is a strong predictor of accuracy.
ISBN (print): 9781450392785
Despite increasing reliance on personalization in digital platforms, many algorithms that curate content or information for users have been met with resistance. When users feel dissatisfied or harmed by recommendations, this can lead them to hate, or feel negatively towards, these personalized systems. Algorithmic hate detrimentally impacts both users and the system, can result in various forms of algorithmic harm, and in extreme cases can lead to public protests against "the algorithm" in question. In this work, we summarize some of the most common causes of algorithmic hate and their negative consequences through case studies of personalized recommender systems. We explore promising future directions for the RecSys research community that could help alleviate algorithmic hate and improve the relationship between recommender systems and their users.
With the increased sophistication of technology, humans have the possibility to offload a variety of tasks to algorithms. Here, we investigated whether the extent to which people are willing to offload an attentionally demanding task to an algorithm is modulated by the availability of a bonus task and by knowledge about the algorithm's capacity. Participants performed a multiple object tracking (MOT) task which required them to visually track targets on a screen. Participants could offload an unlimited number of targets to a "computer partner". If participants decided to offload the entire task to the computer, they could instead perform a bonus task which resulted in additional financial gain; however, this gain was conditional on high accuracy in the MOT task. Thus, participants should only offload the entire task if they trusted the computer to perform accurately. We found that participants were significantly more willing to completely offload the task if they were informed beforehand that the computer's accuracy was flawless (Experiment 1 vs. 2). Participants' offloading behavior was not significantly affected by whether the bonus task was incentivized (Experiment 2 vs. 3). These results, combined with those from our previous study (Wahn et al. in PLoS ONE 18:e0286102, 2023), which did not include a bonus task but was otherwise identical, show that the human willingness to offload an attentionally demanding task to an algorithm is considerably boosted by the availability of a bonus task, even if it is not incentivized, and by knowledge about the algorithm's capacity.
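The incentive structure described above can be summarized as a small expected-value calculation; the sketch below is a hypothetical reading of the trade-off, with the payoff values and the accuracy criterion as illustrative assumptions rather than the study's actual parameters.

def expected_gain_from_offloading(p_meets_criterion: float, bonus: float = 1.0) -> float:
    # The bonus is paid only if the offloaded tracking still meets the
    # accuracy criterion, so the expected extra gain scales with the
    # participant's trust in the computer partner.
    return bonus * p_meets_criterion

# A participant told the computer is flawless should expect the full bonus:
print(expected_gain_from_offloading(1.0))  # 1.0
# With uncertain computer accuracy, full offloading becomes a gamble:
print(expected_gain_from_offloading(0.5))  # 0.5

This makes explicit why knowing beforehand that the computer is flawless should rationally encourage complete offloading, consistent with the Experiment 1 vs. 2 comparison.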
ISBN (print): 9781733632546
Algorithms have long outperformed humans in tasks with objective answers. In medicine, finance, chess, and other objective fields, AI has been shown to consistently outperform human cognition. However, algorithms currently underperform human cognition in creative tasks, such as writing fiction or brainstorming ideas. We propose a study that investigates how humans rely on algorithmic and human recommendations differently in creative tasks, and whether that effect changes with task difficulty. We synthesize the current theoretical landscape and propose the likely effects of algorithmic versus human recommendations on cognitive effort, belief change, and confidence in output.