Building a successful deep learning model requires a large amount of labeled data. However, labeled data are hard to collect in many use cases. To tackle this problem, a range of data augmentation methods have ...
Background: Despite major advances in artificial intelligence (AI) research for healthcare, the deployment and adoption of AI technologies remain limited in clinical practice. In recent years, concerns have been raise...
The latest Deep Learning (DL) models for detection and classification have achieved unprecedented performance over classical machine learning algorithms. However, DL models are black-box methods that are hard to debug, interpret, and certify. DL alone cannot provide explanations that can be validated by a non-technical audience such as end-users or domain experts. In contrast, symbolic AI systems that convert concepts into rules or symbols, such as knowledge graphs, are easier to explain. However, they present lower generalisation and scaling capabilities. A very important challenge is to fuse DL representations with expert knowledge. One way to address this challenge, as well as the performance-explainability trade-off, is to leverage the best of both streams without obviating domain expert knowledge. In this paper, we tackle this problem by considering symbolic knowledge expressed in the form of a domain-expert knowledge graph. We present the eXplainable Neural-symbolic learning (X-NeSyL) methodology, designed to learn both symbolic and deep representations, together with an explainability metric to assess the level of alignment of machine and human expert explanations. The ultimate objective is to fuse DL representations with expert domain knowledge during the learning process so it serves as a sound basis for explainability. In particular, the X-NeSyL methodology involves the concrete use of two notions of explanation, at inference and training time respectively: 1) EXPLANet: Expert-aligned eXplainable Part-based cLAssifier NETwork Architecture, a compositional convolutional neural network that makes use of symbolic representations, and 2) SHAP-Backprop, an explainable AI-informed training procedure that corrects and guides the DL process to align with such symbolic representations in the form of knowledge graphs. We showcase the X-NeSyL methodology using the MonuMAI dataset for monument facade image classification, and demonstrate that with our approach, it is possible to i...
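The general idea behind an XAI-informed training correction like the one this abstract names can be illustrated loosely: penalize feature attributions that disagree with expert knowledge. In this sketch the knowledge graph is reduced to a dict mapping a class to its expected parts, and the attribution vector stands in for SHAP values; the class and part names are hypothetical, and this is only an illustration of the general notion, not the paper's actual SHAP-Backprop procedure.

```python
import numpy as np

# Hypothetical expert knowledge: which architectural parts are evidence for a
# class. The names ("gothic", "pointed_arch", ...) are illustrative only.
knowledge = {"gothic": {"pointed_arch", "rose_window"}}
parts = ["pointed_arch", "rose_window", "horseshoe_arch"]

def misattribution_penalty(pred_class, shap_values):
    """Sum the positive attribution mass on parts the expert does not
    associate with the predicted class."""
    expected = knowledge[pred_class]
    return sum(max(v, 0.0) for p, v in zip(parts, shap_values)
               if p not in expected)

# 0.3 of positive attribution falls on "horseshoe_arch", which the expert
# does not link to "gothic", so that mass is penalized.
penalty = misattribution_penalty("gothic", np.array([0.6, 0.2, 0.3]))
```

Such a penalty could be added to the training loss so that gradients push the model's explanations toward the expert's part-class associations.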
Data-driven anomaly detection methods typically build a model for the normal behavior of the target system, and score each data instance with respect to this model. A threshold is invariably needed to identify data in...
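The score-then-threshold pattern this abstract describes can be sketched minimally, assuming a Gaussian model of normal behavior; the helper names (fit_normal_model, anomaly_score) are illustrative, not from any specific method.

```python
import numpy as np

def fit_normal_model(train):
    """Fit a model of 'normal' behavior: here just mean and std of 1-D data."""
    return train.mean(), train.std()

def anomaly_score(x, mu, sigma):
    """Score each instance by its absolute z-score w.r.t. the normal model."""
    return np.abs(x - mu) / sigma

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=1000)   # training data: normal behavior
mu, sigma = fit_normal_model(normal)

test = np.array([0.1, 3.5, -4.0])
scores = anomaly_score(test, mu, sigma)
threshold = 3.0            # choosing this value is exactly the hard part
flags = scores > threshold
```

The abstract's point is that the threshold is an unavoidable extra knob: the same scores yield very different detectors depending on where it is set.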
A foundational set of findable, accessible, interoperable, and reusable (FAIR) principles were proposed in 2016 as prerequisites for proper data management and stewardship, with the goal of enabling the reusability of...
Radiation therapy plays a crucial role in cancer treatment, necessitating precise delivery of radiation to tumors while sparing healthy tissues over multiple days. Computed tomography (CT) is integral for treatment pl...
This work exploits Riemannian manifolds to build a sequential-clustering framework able to address a wide variety of clustering tasks in dynamic multilayer (brain) networks via the information extracted from their nod...
Convolutional neural networks (CNNs) have recently achieved impressive performance in image processing tasks such as image classification and object recognition. However, CNNs only process images in the spatial domain, whereas spectral analysis operates in the frequency domain. In this paper, we propose the 2D wavelet transform layer. The learned features from images are viewed as two-directional sequential data, and we use two LSTM layers that sweep both horizontally and vertically across the image to compress feature matrices. Based on this, the 2D wavelet transform decomposes the above feature matrices as a learned mixing of different harmonic functions, thereby integrating spectral analysis into CNNs. We also select a 3 × 3 convolutional mixing style with a Gaussian+LSM filter to mix the output of the 2D wavelet transform to generate new output features. Finally, we combine the sequential and spectral features to build our CNN-RNN architecture with skip layers and apply it to image classification. Our proposed network is evaluated on three widely used benchmark datasets: CIFAR-10, CIFAR-100 and Tiny ImageNet. Experiments show that our CNN-RNN hybrid model achieves better accuracy in image classification tasks.
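The kind of frequency-domain view of a feature matrix that this abstract describes can be illustrated with a single-level 2D Haar wavelet decomposition; this is a generic, fixed Haar transform in NumPy, not the paper's learned 2D wavelet layer.

```python
import numpy as np

def haar2d(x):
    """Single-level 2D Haar decomposition of an even-sized matrix into the
    LL (low-low), LH, HL, and HH (high-high) sub-bands."""
    a, b = x[0::2, :], x[1::2, :]            # adjacent row pairs
    lo_r, hi_r = (a + b) / 2, (a - b) / 2    # 1-D Haar along rows
    c, d = lo_r[:, 0::2], lo_r[:, 1::2]
    e, f = hi_r[:, 0::2], hi_r[:, 1::2]
    ll, lh = (c + d) / 2, (c - d) / 2        # then along columns
    hl, hh = (e + f) / 2, (e - f) / 2
    return ll, lh, hl, hh

# A smooth ramp has all its energy in the low-frequency bands:
x = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar2d(x)
```

The LL band is a coarse half-resolution summary while LH/HL/HH capture horizontal, vertical, and diagonal detail, which is the basic decomposition a wavelet layer exposes to later network stages.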
In social networks, interaction patterns typically change over time. We study opinion dynamics on tie-decay networks in which tie strength increases instantaneously when there is an interaction and decays exponentiall...
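The tie-decay mechanism described here, strength jumps at each interaction and decays exponentially between interactions, can be sketched in a few lines; the decay rate lam and jump size are illustrative parameters, not values from the paper.

```python
import math

def tie_strength(events, t, lam=0.5, jump=1.0):
    """Tie strength at time t: each past interaction at time te contributes a
    jump that has decayed exponentially for (t - te) time units."""
    return sum(jump * math.exp(-lam * (t - te)) for te in events if te <= t)

# Two interactions at t=0 and t=2, evaluated at t=4: the earlier interaction
# has decayed more than the later one.
s = tie_strength([0.0, 2.0], 4.0)
```

Because the decay is exponential, the strength between interactions satisfies s(t) = s(t0) * exp(-lam * (t - t0)), so it can also be updated incrementally without storing the full event history.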
The cell cycle is a complex biological phenomenon, which plays an important role in many cell biological processes and disease states. Machine learning is emerging as a pivotal technique for the study of the cell cycle, resulting in a number of available tools and models for the analysis of the cell cycle. Most, however, heavily rely on expert annotations, prior knowledge of mechanisms, and imaging with several fluorescent markers to train their models. Many are also limited to processing only the spatial information in the cell images. In this work, we describe a different approach based on representation learning to construct a manifold of the cell life cycle. We trained our model such that the representations are learned without exhaustive annotations or assumptions. Moreover, our model uses microscopy images derived from a single fluorescence channel and utilizes both the spatial and temporal information in these images. We show that even with fewer channels and self-supervision, information relevant to cell cycle analysis, such as staging and estimation of cycle duration, can still be extracted, which demonstrates the potential of our approach to aid future cell cycle studies and discovery cell biology in probing and understanding novel dynamic systems.