Purpose: Asynchronous Video Interviews (AVIs) incorporating Artificial Intelligence (AI)-assisted assessment have become popular as a pre-employment screening method. The extent to which applicants engage in deceptive impression management (IM) behaviors during these interviews remains uncertain. Furthermore, the accuracy of human detection in identifying such deceptive IM behaviors is limited. This study explores differences in applicants' deceptive IM behaviors across video interview modes (AVIs vs Synchronous Video Interviews (SVIs)) and the use of AI-assisted assessment (AI vs non-AI). The study also investigates whether video interview modes affect human interviewers' ability to detect deceptive IM behaviors. Design/methodology/approach: The authors conducted a field study with four conditions based on two critical factors, the synchrony of video interviews (AVI vs SVI) and the presence of AI-assisted assessment (AI vs non-AI): non-AI-assisted AVIs, AI-assisted AVIs, non-AI-assisted SVIs and AI-assisted SVIs. The study involved 144 pairs of interviewees and interviewers/assessors. To assess applicants' deceptive IM behaviors, the authors employed a combination of interviewee self-reports and interviewer perceptions. Findings: The results indicate that AVIs elicited fewer instances of deceptive IM behaviors across all dimensions than SVIs. Furthermore, using AI-assisted assessment in both video interview modes resulted in less extensive image creation than non-AI settings. However, the study revealed that human interviewers had difficulty detecting deceptive IM behaviors regardless of the mode used, except for extensive faking in AVIs. Originality/value: The study is the first to address the call for research on the impact of video interview modes and AI on interviewee faking and interviewer accuracy. This research enhances the authors' understanding of the practical implications associated with the use of different video interview modes and AI.
According to a recent survey by the HR Research Institute, as the presence of artificial intelligence (AI) becomes increasingly common in the workplace, HR professionals are worried that the use of recruitment algorithms will lead to a "dehumanization" of the hiring process. Our main goals in this paper are threefold: (i) to bring attention to this neglected issue, (ii) to clarify what exactly this concern about dehumanization might amount to, and (iii) to sketch an argument for why dehumanizing the hiring process is ethically suspect. After distinguishing the use of the term "dehumanization" in this context (i.e. removing the human presence) from its more common meaning in the interdisciplinary field of dehumanization studies (i.e. conceiving of other humans as subhuman), we argue that the use of hiring algorithms may negatively impact the employee-employer relationship. We argue that there are good independent reasons to accept a substantive employee-employer relationship, as well as an applicant-employer relationship, both of which are consistent with a stakeholder theory of corporate obligations. We further argue that dehumanizing the hiring process may negatively impact these relationships because of the difference between the values of human recruiters and the values embedded in recruitment algorithms. Drawing on Nguyen's (in: Lackey, Applied Epistemology, Oxford University Press, 2021) critique of how Twitter "gamifies communication", we argue that replacing human recruiters with algorithms imports artificial values into the hiring process. We close by briefly considering some ways to potentially mitigate the problems posed by recruitment algorithms, along with the possibility that some difficult trade-offs will need to be made.
ISBN: (Print) 9781728112206
Interest in smart cities is rapidly rising due to the global rise in urbanization and the wide-scale instrumentation of modern cities. Due to the considerable infrastructural cost of setting up smart cities and smart communities, researchers are exploring the use of existing vehicles on the roads as "message ferries" for transporting data for smart community applications, avoiding the cost of installing new communication infrastructure. In this paper, we propose an opportunistic data ferry selection algorithm that strives to select vehicles that can minimize the overall delay for data delivery from a source to a given destination. Our proposed opportunistic algorithm utilizes an ensemble of online hiring algorithms, which are run together in passive mode, to select the online hiring algorithm that has performed the best in recent history. The proposed ensemble-based algorithm is evaluated empirically using real-world traces from taxis plying routes in Shanghai, China, and its performance is compared against a baseline of four state-of-the-art online hiring algorithms. A number of experiments are conducted and our results indicate that the proposed algorithm can reduce the overall delay compared to the baseline by an impressive 13% to 258%.
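The passive-ensemble idea in this abstract, running several online hiring (secretary-style) rules side by side on recent history and deferring to whichever performed best, could be sketched roughly as follows. This is a hypothetical illustration under our own assumptions: the observe-then-commit rule, the function names, and the quality measure are not taken from the paper, which applies the ensemble to vehicle selection for delay minimization.

```python
import random

def observe_then_commit(ratio):
    """A classic online hiring rule: observe the first `ratio` fraction of
    candidates, then hire the first one better than everything observed."""
    def policy(scores):
        n = len(scores)
        cutoff = max(1, int(n * ratio))
        best_seen = max(scores[:cutoff])
        for i in range(cutoff, n):
            if scores[i] > best_seen:
                return i          # index of the hired candidate
        return n - 1              # forced to hire the last candidate
    return policy

def ensemble_select(policies, history_windows):
    """Passive ensemble: replay each policy on recent historical windows and
    return the policy whose past hires had the highest average quality."""
    def avg_quality(policy):
        picks = [window[policy(window)] for window in history_windows]
        return sum(picks) / len(picks)
    return max(policies, key=avg_quality)

# Synthetic "recent history": 50 past windows of 20 candidate scores each.
random.seed(42)
history = [[random.random() for _ in range(20)] for _ in range(50)]
policies = [observe_then_commit(r) for r in (0.25, 0.37, 0.5)]

best = ensemble_select(policies, history)   # best performer in recent history
current = [random.random() for _ in range(20)]
hired = best(current)                       # apply the winner to a new window
```

The key design point mirrored here is that all policies run passively on shared history; only the recent winner is trusted with the live decision.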
ISBN: (Print) 9781450360128
Artificial Intelligence (AI) has been used extensively in automatic decision making in a broad variety of scenarios, ranging from credit ratings for loans to recommendations of movies. Traditional design guidelines for AI models focus essentially on accuracy maximization, but recent work has shown that economically irrational and socially unacceptable scenarios of discrimination and unfairness are likely to arise unless these issues are explicitly addressed. This undesirable behavior has several possible sources, such as biased training datasets whose effects may go undetected in black-box models. After pointing out connections between such bias in AI and the problem of induction, we focus on Popper's contributions after Hume's, which offer a logical theory of preferences. An AI model can be preferred over others on purely rational grounds after one or more attempts at refutation based on accuracy and fairness. Inspired by such epistemological principles, this paper proposes a structured approach to mitigate discrimination and unfairness caused by bias in AI systems. In the proposed computational framework, models are selected and enhanced after attempts at refutation. To illustrate our discussion, we focus on hiring decision scenarios where an AI system filters which job applicants should advance to the interview phase.
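The refutation-driven selection loop described in this abstract might be sketched as follows. This is an illustrative sketch, not the paper's implementation: the demographic-parity criterion, the thresholds, and all names are our assumptions; the core idea shown is that a model is preferred only after withstanding refutation attempts on both accuracy and fairness.

```python
def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    def rate(g):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(members) / len(members)
    return abs(rate(0) - rate(1))

def refute_and_select(models, X, y, groups, acc_min=0.7, gap_max=0.1):
    """Each candidate model faces two refutation attempts, one on accuracy and
    one on fairness; survivors are ranked by accuracy. The thresholds are
    illustrative assumptions, not values from the paper."""
    survivors = []
    for model in models:
        preds = [model(x) for x in X]
        acc = sum(p == t for p, t in zip(preds, y)) / len(y)
        if acc >= acc_min and demographic_parity_gap(preds, groups) <= gap_max:
            survivors.append((acc, model))  # model withstood both refutations
    return max(survivors, key=lambda s: s[0])[1] if survivors else None

# Toy screening data: feature x, true label y, protected group membership.
X = [0, 1, 2, 3, 4, 5, 6, 7]
y = [0, 0, 0, 0, 1, 1, 1, 1]
groups = [0, 1, 0, 1, 0, 1, 0, 1]
model_a = lambda x: 1 if x >= 4 else 0   # accurate and group-balanced
model_b = lambda x: x % 2                # inaccurate proxy rule, gets refuted
selected = refute_and_select([model_a, model_b], X, y, groups)
```

Here `model_b` fails the accuracy refutation and is eliminated, so `model_a` is the rationally preferred survivor.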