Medical imaging is spearheading the AI transformation of healthcare. Performance reporting is key to determining which methods should be translated into clinical practice. Frequently, broad conclusions are simply derive...
ISBN (print): 9781450388894
Outlier detection is one of the main fields in machine learning, and it has been growing rapidly due to its wide range of applications. In the last few years, deep learning-based methods have outperformed classical machine-learning and handcrafted outlier detection techniques, and our method is no different. We present a new twist on generative models that leverages variational autoencoders as a source of uniform distributions which can be used to separate the inliers from the outliers. Both the generative and adversarial parts of the model are used to obtain three main losses (reconstruction loss, KL-divergence, discriminative loss), which in turn are wrapped with a one-class SVM that makes the predictions. We evaluated our method on several datasets, both image and tabular, where it showed strong results on the zero-shot outlier detection problem and generalized easily to supervised outlier detection tasks, on which performance increased further. For comparison, we evaluated our method against several common outlier detection techniques, such as DBSCAN-based outlier detection, GMM, k-means, and a one-class SVM applied directly, and we outperformed all of them on all datasets.
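A minimal sketch of the wrapping step described in this abstract: per-sample reconstruction, KL, and discriminator scores are stacked into a feature vector and fed to a one-class SVM. The model interface (encode/decode/discriminate) and the SVM hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: three per-sample losses from an assumed trained VAE-GAN
# are used as features for a one-class SVM.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM


def loss_features(model, x):
    """Per-sample losses from a hypothetical trained VAE-GAN `model`; x is (n, d) flattened."""
    mu, log_var = model.encode(x)                       # variational posterior parameters
    x_hat = model.decode(mu)                            # use the posterior mean at test time
    recon = np.mean((x - x_hat) ** 2, axis=1)           # reconstruction loss
    kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1)  # KL divergence
    disc = model.discriminate(x_hat).ravel()            # adversarial (discriminator) score
    return np.column_stack([recon, kl, disc])


def fit_outlier_detector(model, x_inliers):
    feats = loss_features(model, x_inliers)
    scaler = StandardScaler().fit(feats)
    svm = OneClassSVM(kernel="rbf", nu=0.05).fit(scaler.transform(feats))
    return scaler, svm


def predict_outliers(model, scaler, svm, x):
    feats = scaler.transform(loss_features(model, x))
    return svm.predict(feats)   # +1 = inlier, -1 = outlier
```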
This paper describes data science history and behavioral trends on Kaggle, the largest platform for learning and competition in analyzing and modeling data. We analyze the history of methods commonly used in linear predictors to predict, classify, cluster, and explore data sets. In addition, we examine the use of the most widely adopted tools and frameworks that help make data modeling easier. The analysis was carried out on forum discussion data from the last ten years, based on the data available in Meta Kaggle. To see the future trend of data science and linear predictor models, we analyzed the abstracts of articles available on the Elsevier search page and extracted information from them using a machine learning method.
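One simple way such forum-trend analysis could be done, purely as an illustration and not the authors' pipeline: count yearly mentions of common tools in Meta Kaggle forum messages. The column names ("Message", "PostDate") and the keyword list are assumptions.

```python
# Hedged sketch: yearly keyword-mention counts from a Meta Kaggle forum export.
import re
import pandas as pd

KEYWORDS = ["xgboost", "lightgbm", "scikit-learn", "tensorflow", "pytorch", "pandas"]


def yearly_keyword_counts(forum_csv: str) -> pd.DataFrame:
    posts = pd.read_csv(forum_csv)
    posts["year"] = pd.to_datetime(posts["PostDate"]).dt.year
    text = posts["Message"].fillna("").str.lower()
    counts = {
        kw: text.str.count(re.escape(kw)).groupby(posts["year"]).sum()
        for kw in KEYWORDS
    }
    return pd.DataFrame(counts)   # rows = year, columns = mention counts per keyword
```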
This paper focuses on generalization performance analysis for distributed algorithms in the framework of learning theory. Taking distributed kernel ridge regression (DKRR) as an example, we succeed in deriving its optim...
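For context, a minimal sketch of the divide-and-conquer estimator that DKRR refers to: each machine fits kernel ridge regression on its local shard and the global prediction averages the local predictors. The RBF kernel and the regularization parameter lam are illustrative assumptions.

```python
# Hedged sketch of divide-and-conquer kernel ridge regression (DKRR).
import numpy as np


def rbf_kernel(A, B, gamma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)


def local_krr_fit(X, y, lam=1e-2, gamma=1.0):
    # Solve (K + lam * n * I) alpha = y on the local shard.
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return X, alpha


def dkrr_predict(models, X_test, gamma=1.0):
    # Average the local predictors f_j(x) = sum_i alpha_i k(x_i, x).
    preds = [rbf_kernel(X_test, Xj, gamma) @ aj for Xj, aj in models]
    return np.mean(preds, axis=0)


# Usage sketch: split the sample into m shards, fit locally, average globally.
# shards = np.array_split(np.arange(len(X)), m)
# models = [local_krr_fit(X[idx], y[idx]) for idx in shards]
# y_hat = dkrr_predict(models, X_test)
```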
Recently, there has been an increasing interest in (supervised) learning with graph data, especially using graph neural networks. However, the development of meaningful benchmark datasets and standardized evaluation p...
We live in momentous times. The science community is empowered with an arsenal of cosmic messengers to study the Universe in unprecedented detail. Gravitational waves, electromagnetic waves, neutrinos and cosmic rays ...
We propose a 3D volume-to-volume Generative Adversarial Network (GAN) for segmentation of brain tumours. The proposed model, called Vox2Vox, generates segmentations from multi-channel 3D MR images. The best results ar...
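To illustrate the volume-to-volume setup, a minimal 3D GAN sketch in PyTorch in the spirit of Vox2Vox, not the authors' architecture: a generator maps multi-channel 3D MR volumes to segmentation logits, and a patch-style discriminator scores (image, segmentation) pairs. Channel counts, depths, and layer choices are assumptions.

```python
# Hedged sketch of a 3D volume-to-volume GAN for segmentation.
import torch
import torch.nn as nn


def conv_block(cin, cout, stride=1):
    return nn.Sequential(
        nn.Conv3d(cin, cout, kernel_size=3, stride=stride, padding=1),
        nn.InstanceNorm3d(cout),
        nn.LeakyReLU(0.2, inplace=True),
    )


class Generator3D(nn.Module):
    """Encoder-decoder mapping (B, 4, D, H, W) MR channels to class logits."""
    def __init__(self, in_ch=4, n_classes=4, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2, stride=2)              # downsample
        self.dec1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)   # upsample
        self.out = nn.Conv3d(base, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2) + e1      # skip connection; assumes even spatial dims
        return self.out(d1)


class Discriminator3D(nn.Module):
    """Patch-style critic on the concatenated (image, one-hot segmentation) pair."""
    def __init__(self, in_ch=4, n_classes=4, base=16):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch + n_classes, base, stride=2),
            conv_block(base, base * 2, stride=2),
            nn.Conv3d(base * 2, 1, kernel_size=3, padding=1),
        )

    def forward(self, image, seg):
        return self.net(torch.cat([image, seg], dim=1))
```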
We propose a method to learn a common bias vector for a growing sequence of low-variance tasks. Unlike state-of-the-art approaches, our method does not require tuning any hyper-parameter. Our approach is presented in ...
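A generic sketch of the underlying idea of a common bias for a task stream, not the paper's tuning-free algorithm: each task is solved by ridge regression shrunk towards a bias vector h, and h is updated as a running average of per-task solutions. The regularization weight lam is exactly the kind of hyper-parameter the abstract says the proposed method avoids, so it is an assumption here.

```python
# Hedged sketch of biased regularization over a stream of related tasks.
import numpy as np


def biased_ridge(X, y, h, lam=1.0):
    """argmin_w ||X w - y||^2 / n + lam * ||w - h||^2."""
    n, d = X.shape
    A = X.T @ X / n + lam * np.eye(d)
    b = X.T @ y / n + lam * h
    return np.linalg.solve(A, b)


def learn_common_bias(task_stream, d, lam=1.0):
    # task_stream yields (X_t, y_t) pairs for a growing sequence of tasks.
    h = np.zeros(d)
    for t, (X, y) in enumerate(task_stream, start=1):
        w_t = biased_ridge(X, y, h, lam)
        h += (w_t - h) / t          # running average of task solutions
    return h
```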
This paper examines the ambiguity of subjective judgments, which are represented by a system of pairwise preferences over a given set of alternatives. Such preferences are valued with respect to a set of reasons, in f...