Authors:
Del-Bosque-Trevino, Jorge (Education Lab., Computational Linguistics Lab., Cognitive Science Research Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, United Kingdom)
Hough, Julian (Human Interaction Lab., Computational Linguistics Lab., Cognitive Science Research Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, United Kingdom)
Purver, Matthew (Computational Linguistics Lab., Cognitive Science Research Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, United Kingdom; Jožef Stefan Institute)
We present a conversational management act (CMA) annotation schema for one-to-one tutorial dialogue sessions where a tutor uses an analogy to teach a student a concept. CMAs are more fine-grained sub-utterance acts co...
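Since the abstract describes an annotation schema, a minimal Python sketch of what a single sub-utterance CMA record could look like may help; the field names and the example label are illustrative assumptions, not the schema defined in the paper:

    from dataclasses import dataclass

    @dataclass
    class CMAAnnotation:
        # All field names here are hypothetical, not the paper's schema.
        utterance_id: str  # utterance the act belongs to
        speaker: str       # "tutor" or "student"
        span: tuple        # (start, end) token offsets of the sub-utterance segment
        act: str           # conversational management act label

    # Example: a tutor introducing an analogy within one utterance.
    record = CMAAnnotation(utterance_id="u42", speaker="tutor",
                           span=(0, 7), act="introduce-analogy")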
Machine learning has shown great promise in a variety of applications, but the deployment of these systems is hindered by the "opaque" nature of machine learning algorithms. This has led to the development of explainable AI methods, which aim to provide insights into complex algorithms through explanations that are comprehensible to humans. However, many of the explanations currently available are technically focused and reflect what machine learning researchers believe constitutes a good explanation, rather than what users actually want. This paper highlights the need to develop human-centred explanations for machine learning-based clinical decision support systems, as clinicians who typically have limited knowledge of machine learning techniques are the users of these systems. The authors define the requirements for human-centred explanations, then briefly discuss the current state of available explainable AI methods, and finally analyse the gaps between human-centred explanations and current explainable AI methods. A clinical use case is presented to demonstrate the vision for human-centred explanations.
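To make the contrast concrete, here is a minimal sketch of the kind of technically focused explanation the paper critiques: raw per-feature attributions from a linear risk model. The feature names, synthetic data, and attribution rule (coefficient times feature value) are illustrative assumptions:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic clinical features; names and values are hypothetical.
    features = ["age", "systolic_bp", "heart_rate"]
    X = np.array([[65, 140, 90], [40, 120, 70], [70, 160, 95], [35, 110, 65]])
    y = np.array([1, 0, 1, 0])  # 1 = high risk (synthetic labels)

    model = LogisticRegression().fit(X, y)
    patient = np.array([68, 150, 88])
    # A "technical" explanation: coefficient * value per feature, which a
    # clinician without ML training may find hard to act on.
    for name, w, x in zip(features, model.coef_[0], patient):
        print(f"{name}: {w * x:+.2f}")

A human-centred explanation would instead phrase such attributions in clinically meaningful terms, which is the gap the paper analyses.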
This study investigated the differences between human and robot gaze in influencing preference formation, and examined the role of Theory of Mind (ToM) abilities in this process. Human eye gaze is one of the most impo...
In healthcare, legible prescription information is crucial but often compromised by hurried, illegible handwriting. This can lead to misinterpretation and errors in medication dispensing, posing risks to patient safet...
Protecting privacy in contemporary NLP models is gaining in importance. So does the need to mitigate social biases of such models. But can we have both at the same time? Existing research suggests that privacy preserv...
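For context, most privacy-preserving training in NLP follows the DP-SGD recipe: clip each per-example gradient, then add Gaussian noise, which is also where disparate effects on underrepresented groups can arise. A minimal NumPy sketch of that update step, with illustrative hyperparameters:

    import numpy as np

    def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_mult=1.1, seed=0):
        # Clip each example's gradient to bound any individual's influence.
        rng = np.random.default_rng(seed)
        clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
                   for g in per_example_grads]
        summed = np.sum(clipped, axis=0)
        # Gaussian noise calibrated to the clipping bound provides the privacy guarantee.
        noise = rng.normal(0.0, noise_mult * clip_norm, size=summed.shape)
        return (summed + noise) / len(per_example_grads)

    grads = [np.array([3.0, 4.0]), np.array([0.1, -0.2])]  # toy per-example gradients
    print(dp_sgd_step(grads))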
The optical vortex has recently attracted researchers to apply it in optical tweezers, microscopy, optical communications, quantum information processing, optical trapping, and laser machining. Optical vortex beam ap...
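As background (standard textbook form, not drawn from the paper itself), what distinguishes an optical vortex beam is its helical phase front:

    E(r, \phi, z) \propto A(r, z)\, e^{i \ell \phi}, \qquad \ell \in \mathbb{Z},

where the azimuthal phase term e^{iℓφ} carries an orbital angular momentum of ℓℏ per photon and forces an on-axis intensity null, the "vortex" core exploited in the applications listed above.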
We explore the efficacy of multimodal behavioral cues for explainable prediction of personality and interview-specific traits. We utilize elementary head-motion units named kinemes, atomic facial movements termed acti...
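A hypothetical sketch of the feature-level fusion such a pipeline might use, combining the three cue streams the abstract names; the dimensionalities, random data, and ridge regressor are illustrative assumptions, not the paper's method:

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n = 50                         # synthetic interview clips
    kinemes = rng.random((n, 16))  # head-motion unit (kineme) histogram per clip
    aus = rng.random((n, 17))      # facial action unit activations
    speech = rng.random((n, 24))   # low-level speech descriptors
    trait = rng.random(n)          # e.g., one personality trait score

    X = np.hstack([kinemes, aus, speech])  # early fusion by concatenation
    model = Ridge().fit(X, trait)
    print("training R^2:", model.score(X, trait))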
This paper presents the results of an online questionnaire study (N = 97) which examined participants' anticipated acceptance of crash-control-algorithms (CCAs, i.e., algorithms aimed at effecting certain ethical ...
Massive Open Online Courses (MOOCs) are a type of Learning Management System (LMS), but the influence of the instructor in these systems appears to be minimal or simply lacking. These systems present the learning...
While the a-wave of the mouse electroretinogram (ERG) occurs within 50 milliseconds of exposure to light, an optoretinogram (ORG) acquired at a sampling rate below 20 Hz could face limitations in observing immediate morpholo...
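The timing constraint follows directly from the sampling interval:

    \Delta t = \frac{1}{f} = \frac{1}{20\ \text{Hz}} = 50\ \text{ms},

so even at exactly 20 Hz the gap between consecutive samples already spans the entire a-wave window, and any slower acquisition can miss the immediate response outright.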