Over the past decade, deep learning has had a revolutionary impact on a broad range of fields such as computer vision and image processing, computational photography, medical imaging, and speech and language analysis and synthesis. Deep learning technologies are estimated to have added billions in business value, created new markets, and transformed entire industrial segments. Most of today's successful deep learning methods, such as Convolutional Neural Networks (CNNs), rely on classical signal processing models that limit their applicability to data with an underlying Euclidean, grid-like structure, e.g., images or acoustic signals. Yet many applications deal with non-Euclidean (graph- or manifold-structured) data. For example, in social network analysis, users and their attributes are generally modeled as signals on the vertices of graphs; in biology, protein-to-protein interactions are modeled as graphs; and in computer vision and graphics, 3D objects are modeled as meshes or point clouds. Furthermore, a graph representation is a very natural way to describe interactions between objects or signals. The classical deep learning paradigm on Euclidean domains falls short in providing appropriate tools for such kinds of data, and until recently the lack of deep learning models capable of correctly dealing with non-Euclidean data has been a major obstacle in these fields. This special section addresses the need to bring together leading efforts in non-Euclidean deep learning across all communities. Of the papers the special section received, twelve were selected for publication. The selected papers fall naturally into three distinct categories: (a) methodologies that advance machine learning on data represented as graphs, (b) methodologies that advance machine learning on manifold-valued data, and (c) applications of machine learning methodologies on non-Euclidean spaces in computer vision and medical imaging. We briefly review the accepted papers in each of these groups.
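As context for category (a): most deep learning methods on graphs replace the grid convolution of CNNs with aggregation over a vertex's neighborhood. The sketch below is a minimal NumPy rendering of the widely used symmetrically normalized propagation rule H' = ReLU(D^-1/2 (A + I) D^-1/2 H W) popularized by Kipf and Welling's GCN; it is illustrative background only, not a method from any of the accepted papers.

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    adjacency : (n, n) binary adjacency matrix of the graph
    features  : (n, d_in) signal on the vertices
    weights   : (d_in, d_out) learnable projection
    """
    n = adjacency.shape[0]
    a_hat = adjacency + np.eye(n)                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))    # diagonal of D^-1/2
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ features @ weights, 0.0)  # aggregate, project, ReLU

# Toy social graph: 4 users with 2 attributes each, projected to 3 features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]])
H = rng.normal(size=(4, 2))
W = rng.normal(size=(2, 3))
print(gcn_layer(A, H, W).shape)  # (4, 3)
```

The self-loops keep each vertex's own signal in the aggregation, and the symmetric normalization prevents high-degree vertices from dominating their neighbors.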
Objective: Deep brain stimulation (DBS) is a surgical technique that alleviates motor symptoms in Parkinson's disease. Surgically implanted microelectrodes stimulate the basal ganglia to improve patients' symptoms. One of the training challenges for neurophysiologists is to identify during surgery the target area of the brain in which the electrodes must be implanted. Identification is based on both visual and auditory inspection of the microelectrode recordings (MERs) as the electrodes are lowered through the basal ganglia. We present the preliminary evaluation of DBSTrainer, a novel desktop application to train neurophysiologists in the identification of signals corresponding to different basal structures. Methods: A pilot study was conducted with neurologists and neurophysiologists at the Hospital Universitario La Paz (Madrid, Spain). After completing a series of tasks with the application, they were asked to complete an evaluation questionnaire. Usability was assessed using the System Usability Scale (SUS). Functionality, contents, and perceived usefulness were assessed using an ad-hoc Likert questionnaire following the e-MIS framework for surgical learning platforms. Results: Fifteen volunteers participated in the study. The obtained SUS score was 86.7 ± 0.47. The most positively rated aspects of functionality were platform design and interactivity. Contents were found to be realistic and aligned with the learning outcomes. Minor problems were identified with signal loading times. Conclusions: This study provides preliminary evidence of the usefulness of DBSTrainer. It is, to our knowledge, the first Technology Enhanced Learning application to train neurophysiologists outside the operating room, and its introduction can thus have a real impact on patient safety and surgical outcomes.
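For readers unfamiliar with the metric, SUS scores such as the 86.7 reported above are conventionally computed with Brooke's scoring scheme: each of the ten items is answered on a 1-5 scale, odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the 0-40 sum is scaled by 2.5 to a 0-100 range. A minimal sketch of the standard formula (not code from the study):

```python
def sus_score(responses):
    """SUS score from the 10 item responses, each on a 1-5 Likert scale."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly 10 responses in the range 1-5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # 0-based i: even index = odd-numbered item
                for i, r in enumerate(responses))
    return total * 2.5

# A fully positive respondent (5 on odd items, 1 on even items) scores 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```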
As large, high-dimensional data have become more common, software development is playing an increasingly important role in research across many different fields. This creates a need to adopt software engineering practices in research settings. Code review is the engineering practice of giving and receiving detailed feedback on a computer program. Consistent and continuous examination of the evolution of computer programs by others has been shown to be beneficial, especially when reviewers are familiar with the technical aspects of the software and when they possess relevant domain expertise. The rules described in the present article provide information about the why, when, and how of code review. They provide the motivation for continual code reviews as a natural part of a rigorous research program. They provide practical guidelines for conducting review of code both in person, as a "lab meeting for code," and asynchronously, using industry-standard online tools. A set of guidelines is provided for the nitty-gritty details of code review, as well as for the etiquette of such a review. Both the technical and the social aspects of code review are covered to give the reader a comprehensive treatment that facilitates an effective, enjoyable, and educational approach to code review.

Scientists are increasingly writing code as part of their research. Code review is a common practice in software engineering, which entails detailed and continual examination of additions and changes to a software code-base. This article explains why and how this practice is applied to the software that researchers write as part of their work. It provides a set of rules that motivates, explicates, and details the process of using code review in a didactic, effective, and enjoyable manner.
Monolithic 3D integration technology has emerged as an alternative candidate to conventional transistor scaling. Unlike conventional processes, where multiple metal layers are fabricated above a single transistor layer, monolithic 3D technology enables multiple transistor layers above a single substrate. By providing vertical interconnects with physical dimensions similar to conventional metal vias, monolithic 3D technology enables unprecedented integration density and high-bandwidth communication, which play a critical role in various data-centric applications. Despite a growing number of research efforts on various aspects of monolithic 3D integration, commercial monolithic 3D ICs do not yet exist. This tutorial brief provides a concise overview of monolithic 3D technology, highlighting important results and future prospects. Several applications that can potentially benefit from this technology are also discussed.
In the last decade, there has been an explosion in the progress and applications of artificial intelligence (AI) in our society. For the first time, the applications of AI have left the laboratory to reach society in a broad, visible, and relevant way. This fact has raised numerous questions about the potential of AI in the future and its implications for our lives. The benefits of AI do not come alone; they also bring with them responsibilities that, if not considered properly, can turn into misuses, intentional or not.
Analysis pipelines commonly use high-level technologies that are popular when created but are unlikely to be readable, executable, or sustainable in the long term. A set of criteria is introduced to address this problem: completeness (no execution requirement beyond a minimal Unix-like operating system, no administrator privileges, no network connection, and storage primarily in plain text); modular design; minimal complexity; scalability; verifiable inputs and outputs; version control; linking analysis with narrative; and free and open-source software. As a proof of concept, we introduce "Maneage" (managing data lineage), enabling cheap archiving, provenance extraction, and peer verification, which has been tested in several research publications. We show that longevity is a realistic requirement that does not sacrifice immediate or short-term reproducibility. The caveats (with proposed solutions) are then discussed, and we conclude with the benefits for the various stakeholders. This article is itself a Maneage'd project (project commit 313db0b). Appendices: Two comprehensive appendices that review the longevity of existing solutions are available as supplementary "Web extras" in the IEEE Computer Society Digital Library at http://***/10.1109/MCSE.2021.3072860. Reproducibility: All products are available in zenodo.4913277; the Git history of this paper's source is at ***/***, which is also archived in Software Heritage: swh:1:dir:33fea87068c1612daf011f161b97787b9a0df39f. Clicking on the SWHIDs in the digital format will provide more "context" for the same content.
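Maneage itself builds on Make and shell scripts, so the fragment below is only a hypothetical Python illustration of one of the criteria, verifiable inputs and outputs: an analysis step runs only after each input matches a checksum recorded under version control, which is one ingredient of cheap provenance extraction and peer verification. File names and contents are placeholders for illustration.

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Hex SHA-256 digest of a file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_inputs(manifest):
    """Raise if any input file differs from its recorded checksum, so every
    run is traceable to exact input versions (the data lineage)."""
    for path, expected in manifest.items():
        digest = sha256_of(path)
        if digest != expected:
            raise RuntimeError(f"{path}: got {digest}, expected {expected}")

# Self-contained demo with a hypothetical input file:
Path("input.txt").write_text("raw observations\n")
manifest = {"input.txt": sha256_of("input.txt")}   # normally committed to git
verify_inputs(manifest)                            # a modified file would raise
print("inputs verified; analysis steps may proceed")
```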
Strategies aimed at keeping the user's interest in computer applications are being studied to provide greater user engagement and can influence how people interact with computers. One approach that can promote user engagement is Affective Computing (AC), based on the premise of recognizing the user's emotional state and adjusting the computer application to respond to that state in real time. Although it is a relatively new area, over the past few years many research works have investigated the use of AC in various activities and with various objectives. To provide an overview of the use of AC in computer applications, this article presents a systematic literature review based on articles available in the main scientific databases of the Computer Science field. The main contribution of this review is the analysis of different types of applications. Based on the 58 articles analyzed, we compiled the main emotion-recognition techniques, the approaches to adapting computer applications, and the limitations and challenges still to be overcome in the automatic adaptation of computer applications by means of AC.
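The recognize-then-adapt loop these systems share can be summarized in a short, purely hypothetical skeleton: classify the user's emotional state, then adjust the application in response. Both functions below are placeholders; the surveyed works use recognizers ranging from facial-expression analysis to physiological sensors, and many different adaptation strategies.

```python
import random  # stands in for a real emotion-recognition model

EMOTIONS = ("frustrated", "bored", "engaged")

def recognize_emotion(observation):
    """Placeholder: a real recognizer would classify facial expressions,
    speech, physiological signals, or interaction logs."""
    return random.choice(EMOTIONS)

def adapt(app_state, emotion):
    """Toy policy: ease off when the user is frustrated, raise the
    challenge when bored, change nothing while engaged."""
    if emotion == "frustrated":
        app_state["difficulty"] = max(1, app_state["difficulty"] - 1)
    elif emotion == "bored":
        app_state["difficulty"] += 1
    return app_state

state = {"difficulty": 3}
for _ in range(5):                      # deployed systems run this continuously
    emotion = recognize_emotion(observation=None)
    state = adapt(state, emotion)
    print(f"{emotion:>10} -> difficulty {state['difficulty']}")
```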
ISBN (digital): 9783031431296
ISBN (print): 9783031431289
This book constitutes the proceedings of the 16th International Conference on Social, Cultural, and Behavioral Modeling, SBP-BRiMS 2023, which was held in Pittsburgh, PA, USA, in September 2023. The 31 full papers presented in this volume were carefully reviewed and selected from 73 submissions. The papers are organized in the following topical sections: detecting malign influence; human behavior modeling; and social-cyber behavior modeling.