Details
ISBN (electronic): 9798350371499
ISBN (print): 9798350371505
This paper addresses the growing integration of Augmented Reality (AR) in biomedical sciences, emphasizing collaborative learning experiences. We present MultiAR, a versatile, domain-specific platform enabling multi-user interactions in AR for biomedical education. Unlike platform-specific solutions, MultiAR supports various AR devices, including handheld and head-mounted options. The framework extends across domains, augmenting biomedical education applications with collaborative capabilities. We define essential requirements for a multi-user AR framework in education, detail MultiAR’s design and implementation, and comprehensively evaluate it using anatomy education examples. Quantitative and qualitative analyses, covering system performance, accuracy metrics, and a user study with 20 participants, highlight the urgent need for a tailored collaborative AR platform in biomedical education. Results underscore enthusiasm for collaborative AR technology, endorsing MultiAR as an accessible, versatile solution for developers and end-users in biomedical education.
Deep learning belongs to the field of artificial intelligence, in which machines perform tasks that typically require some form of human intelligence. Deep learning tries to achieve this by drawing inspiration from how the human brain learns. Similar to the basic structure of a brain, which consists of (billions of) neurons and the connections between them, a deep learning algorithm consists of an artificial neural network that resembles this biological structure. Mimicking the way humans learn through their senses, deep learning networks are fed with (sensory) data such as texts, images, videos, or sounds. These networks outperform state-of-the-art methods on a range of tasks and, as a result, the whole field has seen exponential growth in recent years, with well over 10,000 publications per year. For example, the search engine PubMed alone, which covers only a subset of all publications in the medical field, already returns over 11,000 results for the search term 'deep learning' as of Q3 2020, and around 90% of these results are from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain, and in the near future it may become difficult to obtain an overview of even a subfield. However, there are several review articles on deep learning that focus on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks such as object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of selected reviews on deep learning across different scientific disciplines and to outline the research impact they have already had in a short period of time. The categories (computer vision, language processing, medical informatics, and additional works) have been chosen according to the underlying data sources (image,
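The abstract's description of an artificial neural network as layers of simple units connected by weighted links can be sketched in a few lines. The network size, weights, and input below are purely illustrative, not taken from any surveyed work:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied by each artificial neuron after its weighted sum.
    return np.maximum(0.0, x)

# A toy two-layer network: 4 inputs -> 3 hidden neurons -> 1 output.
# The weight matrices play the role of the "connections" between neurons.
W1 = rng.normal(size=(4, 3))
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1))
b2 = np.zeros(1)

def forward(x):
    h = relu(x @ W1 + b1)   # hidden-layer activations
    return h @ W2 + b2      # network output

x = np.array([0.5, -1.0, 0.3, 2.0])  # one "sensory" input sample
y = forward(x)
```

A real deep network differs mainly in scale (more layers, more neurons) and in having its weights learned from data rather than drawn at random.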
We present the self-paced 3-class Graz brain-computer interface (BCI), which is based on the detection of sensorimotor electroencephalogram (EEG) rhythms induced by motor imagery. Self-paced operation means that the BCI is able to determine whether the ongoing brain activity is intended as a control signal (intentional control) or not (non-control state). The presented system is able to automatically reduce electrooculogram (EOG) artifacts, to detect electromyographic (EMG) activity, and uses only three bipolar EEG channels. Two applications are presented: the freeSpace virtual environment (VE) and the Brainloop interface. The freeSpace is a computer-game-like application in which subjects have to navigate through the environment and collect coins by autonomously selecting navigation commands. Three subjects participated in these feedback experiments, and each learned to navigate through the VE and collect coins. Two of the three succeeded in collecting all three coins. The Brainloop interface provides an interface between the Graz-BCI and Google Earth.
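One generic way to quantify the sensorimotor EEG rhythms mentioned above is the band power of a channel in the mu band (roughly 8-12 Hz), which is attenuated during motor imagery. The sketch below is not the Graz-BCI's actual pipeline (which includes EOG reduction, EMG detection, and a trained classifier); the sampling rate and the synthetic signals are assumptions for illustration:

```python
import numpy as np

FS = 250  # assumed EEG sampling rate in Hz

def mu_band_power(x, fs=FS):
    """Mean spectral power of one EEG channel in the 8-12 Hz mu band."""
    spec = np.abs(np.fft.rfft(x)) ** 2        # power spectrum via FFT
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)  # frequency of each bin
    band = (freqs >= 8) & (freqs <= 12)
    return spec[band].mean()

# Synthetic 2-second signals: a 10 Hz oscillation plus noise stands in
# for a strong mu rhythm; pure noise stands in for a suppressed one.
t = np.arange(2 * FS) / FS
rng = np.random.default_rng(1)
strong_mu = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)
no_mu = 0.1 * rng.normal(size=t.size)

# The channel with the 10 Hz rhythm shows much higher mu-band power.
assert mu_band_power(strong_mu) > mu_band_power(no_mu)
```

A self-paced BCI would compare such features against thresholds (or a classifier's output) continuously, to separate intentional-control periods from the non-control state.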