ISBN (digital): 9781439828700
ISBN (print): 9781439828694
Sparse models are particularly useful in scientific applications, such as biomarker discovery in genetic or neuroimaging data, where the interpretability of a predictive model is essential. Sparsity can also dramatically improve the cost efficiency of signal processing. Sparse Modeling: Theory, Algorithms, and Applications provides an introduction to ...
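As an illustration of the sparsity-for-interpretability idea in this blurb, below is a minimal sketch of an L1-regularized (lasso) regression, assuming scikit-learn and NumPy are available; the synthetic data and variable names are purely illustrative and not taken from the book.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic "biomarker" data: 100 samples, 50 candidate features,
# of which only 3 actually influence the response.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
true_coef = np.zeros(50)
true_coef[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = X @ true_coef + 0.1 * rng.standard_normal(100)

# The L1 penalty drives most coefficients exactly to zero,
# leaving a small, interpretable set of selected features.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("non-zero coefficients at indices:", selected)
```

With the L1 penalty, most of the 50 coefficients come out exactly zero, so the fitted model points directly at the handful of informative features rather than spreading weight across all of them.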
In the interpretability literature, attention is focused on understanding black-box classifiers, but many problems ranging from medicine through agriculture and crisis response in humanitarian aid are tackled by seman...
ISBN (digital): 9783642347139
ISBN (print): 9783642347122
Brain imaging brings together the technology, methodology, research questions and approaches of a wide range of scientific fields including physics, statistics, computer science, neuroscience, biology, and engineering. Thus, methodological and technological advances that enable us to obtain measurements, examine relationships across observations, and link these data to neuroscientific hypotheses happen in a highly interdisciplinary environment. The dynamic field of machine learning, with its modern approach to data mining, provides many relevant approaches for neuroscience and enables the exploration of open questions. This state-of-the-art survey offers a collection of papers from the Workshop on Machine Learning and Interpretation in Neuroimaging, MLINI 2011, held at the 25th Annual Conference on Neural Information Processing Systems, NIPS 2011, in the Sierra Nevada, Spain, in December 2011. Additionally, invited speakers agreed to contribute reviews on various aspects of the field, adding breadth and perspective to the volume. The 32 revised papers were carefully selected from 48 submissions. At the interface between machine learning and neuroimaging, the papers aim at shedding some light on the state of the art in this interdisciplinary field. They are organized in topical sections on coding and decoding, neuroscience, dynamics, connectivity, and probabilistic models and machine learning.
We propose a recurrent neural model that generates natural-language questions from documents, conditioned on answers. We show how to train the model using a combination of supervised and reinforcement learning. After ...
We study two coarser approximations on top of a Kronecker factorization (K-FAC) of the Fisher Information Matrix, to scale up Natural Gradient to deep and wide Convolutional Neural Networks (CNNs). The first considers...
We propose an augmented training procedure for generative adversarial networks designed to address shortcomings of the original by directing the generator towards probable configurations of abstract discriminator feat...
ISBN (print): 9781510860964
Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high-quality generations on CIFAR-10 and LSUN bedrooms.
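For context, here is a minimal sketch of the gradient-penalty term this abstract describes, assuming a PyTorch critic that takes image-shaped inputs (N, C, H, W); the helper name, interpolation details, and default coefficient below are illustrative rather than the authors' released code.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Interpolate randomly between real and generated samples.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = eps * real + (1.0 - eps) * fake
    interp.requires_grad_(True)

    # Gradient of the critic's output with respect to its input.
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]

    # Penalize deviation of the gradient norm from 1
    # (a soft version of the Lipschitz constraint).
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```

In training, a term like this would be added to the critic's loss in place of weight clipping, softly encouraging unit gradient norm on points between the real and generated distributions.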
Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving ...
Automatically evaluating the quality of dialogue responses for unstructured domains is a challenging problem. Unfortunately, existing automatic evaluation metrics are biased and correlate very poorly with human judgem...
This paper discusses how distribution matching losses, such as those used in CycleGAN, when used to synthesize medical images can lead to mis-diagnosis of medical conditions. It seems appealing to use these new image ...