Reconstructing human visual perception is a challenging topic in the field of brain decoding. Owing to the complexity of visual stimuli and the constraints of fMRI data collection, current decoding methods can only reconstruct the basic outline of the perceived natural stimuli or provide similar figures/features. To achieve high-quality, high-resolution reconstruction of natural images from brain activity, this paper presents an end-to-end perceptual reconstruction model called the similarity-conditions generative adversarial network (SC-GAN), which reconstructs visually perceptible images from human visual cortex responses. The SC-GAN extracts high-level semantic features from natural images and the corresponding visual cortical responses, and then introduces these semantic features as conditions of a generative adversarial network (GAN) to realize perceptual reconstruction of visual images. The experimental results show that the semantic features extracted by the SC-GAN play a key role in the reconstruction of natural images: the similarity between the presented and reconstructed images obtained by the SC-GAN is significantly higher than that obtained by a conditional generative adversarial network (C-GAN). The proposed model offers a promising perspective for decoding brain activity evoked by complex natural stimuli.
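The core conditioning step described above, feeding semantic features into the generator alongside a noise vector, can be sketched as follows. This is a minimal, hypothetical NumPy illustration (the dimensions, the linear "generator", and all variable names are our own stand-ins; the actual SC-GAN uses deep adversarially trained networks and features decoded from fMRI):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (our stand-ins, not the paper's): 100-d noise vector,
# 512-d semantic feature vector, 64x64 grayscale output image.
NOISE_DIM, SEM_DIM, IMG_DIM = 100, 512, 64 * 64

def generator(z, c, W, b):
    """Toy linear 'generator': conditioning means the semantic features c are
    concatenated with the noise z before generation. A real GAN generator
    would be a deep deconvolutional network trained adversarially."""
    x = np.concatenate([z, c])
    return np.tanh(W @ x + b)            # pixel values in [-1, 1]

W = 0.01 * rng.standard_normal((IMG_DIM, NOISE_DIM + SEM_DIM))
b = np.zeros(IMG_DIM)

z = rng.standard_normal(NOISE_DIM)       # random noise
c = rng.standard_normal(SEM_DIM)         # stand-in for features decoded from fMRI
image = generator(z, c, W, b)
print(image.shape)                       # (4096,)
```

The point of the sketch is only the conditioning mechanism: the generated image depends on both the noise and the semantic condition vector, so different decoded features steer the generator toward different reconstructions.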
Eye movements bring attended visual inputs to the center of vision for further processing; thus, central and peripheral vision should play different functional roles. Here, we use observations of visual perception under dichoptic stimuli to infer that there is a difference in the top-down feedback from higher brain centers to the primary visual cortex. Visual stimuli to the two eyes were designed such that the sum and the difference of the binocular inputs from the two eyes have the form of two different gratings. These gratings differed in their motion direction, tilt direction, or color, and duly evoked ambiguous percepts for the corresponding feature. Observers were more likely to perceive the feature in the binocular summation channel than in the difference channel. However, this perceptual bias towards the binocular summation signal was weaker or absent in peripheral vision, even when central and peripheral vision showed no difference in contrast sensitivity to the binocular summation signal relative to the binocular difference signal. We propose that this bias can arise from top-down feedback as part of an analysis-by-synthesis computation. The feedback carries the input predicted, using prior information, by the higher-level perceptual hypothesis about the visual scene; the hypothesis is verified by comparing the feedback with the actual visual input. We illustrate this process using a conceptual circuit model. In this framework, a bias towards binocular summation can arise from the prior knowledge that inputs are usually correlated between the two eyes. Accordingly, a weaker bias in the periphery implies that the top-down feedback is weaker there. Testable experimental predictions are presented and discussed. (C) 2017 Elsevier Ltd. All rights reserved.
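The dichoptic construction can be sketched numerically: the two eyes' images are mixtures of two gratings, arranged so that the binocular sum and difference each isolate one of them. A minimal NumPy sketch (the specific gratings are arbitrary stand-ins for the paper's stimuli):

```python
import numpy as np

# A dichoptic stimulus: each eye sees a mixture of two gratings, arranged so
# that the binocular sum and difference channels each isolate one grating.
# (The specific gratings here are arbitrary stand-ins for the paper's stimuli.)
x = np.linspace(0, 2 * np.pi, 256)
grating_a = np.sin(3 * x)                # e.g. one tilt/motion direction
grating_b = np.sin(5 * x)                # the competing tilt/motion direction

left  = grating_a + grating_b            # left-eye image
right = grating_a - grating_b            # right-eye image

summation  = (left + right) / 2          # binocular summation channel
difference = (left - right) / 2          # binocular difference channel

print(np.allclose(summation, grating_a), np.allclose(difference, grating_b))
# True True: the sum channel carries grating_a, the difference carries grating_b
```

Because each channel carries a different grating, a bias in which grating observers perceive directly measures a bias between the summation and difference channels.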
ISBN: (Print) 9781467376822
Visual encoding models and visual decoding models are both used to reveal the relationship between external visual stimuli and brain activity. In terms of their goals, visual encoding and decoding are opposites, but they are also complementary. As research progresses, a simple decoding model can no longer uncover more complex information, so combining visual encoding and decoding models is becoming a trend in visual research. This paper builds a visual encoding model based on a convolutional neural network and proposes a feature-selection approach based on that encoding model. The results show that the predicted voxel responses are similar to those of the selected real voxels, with high correlation coefficients between them. Moreover, the classification results are better than those of the t-value method and the principal feature analysis (PFA) method. Our approach provides a reference for combining visual encoding and decoding models.
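The selection idea can be sketched as follows: use the encoding model's predictions to score each voxel by how well it is predicted, then keep only well-predicted voxels for decoding. A hypothetical NumPy sketch with synthetic data (the threshold, dimensions, and the split into "well predicted" vs. "poorly predicted" voxels are our own stand-ins, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
n_stim, n_vox = 120, 50

# Synthetic data: the first 25 voxels are well predicted by a (hypothetical)
# CNN encoding model; the remaining 25 are not predicted at all.
measured = rng.standard_normal((n_stim, n_vox))
predicted = measured + 0.3 * rng.standard_normal((n_stim, n_vox))
predicted[:, 25:] = rng.standard_normal((n_stim, 25))

# Score each voxel by the correlation between predicted and measured responses,
# then keep the voxels whose responses the encoding model predicts well.
r = np.array([np.corrcoef(predicted[:, v], measured[:, v])[0, 1]
              for v in range(n_vox)])
selected = np.flatnonzero(r > 0.5)       # illustrative threshold
print(selected.size)                     # number of voxels kept for decoding
```

Only the well-predicted voxels survive the threshold, which is the sense in which the encoding model drives feature selection for the subsequent decoder.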
ISBN: (Print) 9789881563804
Determining an effective visual response time window is a prerequisite for studying object cognition and decision-making mechanisms in avian object-oriented tasks. However, it is difficult to determine the effective response time window for freely moving birds. In this study, video data of pigeons' faces in a target-oriented task were collected, and a neural network algorithm based on Faster RCNN was applied to train a pigeon-face prediction model, which was further used to predict the time window during which a pigeon observed the specific image target. The length of the time window was estimated from the statistical results of 177 trials, and the specific time window for each trial was then determined by combining it with the start frame containing the pigeon's frontal face. Finally, the proposed method was verified using data from two pigeons trained on the object-oriented task. Taking the population firing-rate features recorded from the pigeons' ectostriatum as an example, the mean firing rate during the estimated effective time window and the original mean firing rate without the time window were fed into SVM and KNN classifiers, respectively, to decode the observed object category. The comparison showed that the classification accuracy of both classifiers was significantly improved with our method, demonstrating that the proposed method can extract effective responses to specific visual objects in freely moving birds.
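The decoding comparison can be sketched with synthetic spike data: averaging firing rates only inside the estimated window concentrates the object-selective signal, while averaging over the whole trial dilutes it. A minimal NumPy sketch (the window position, the Poisson spike model, and the nearest-centroid classifier are our stand-ins for the paper's data and its SVM/KNN classifiers):

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_bins = 177, 100
window = slice(30, 60)                   # hypothetical effective response window

# Synthetic spike counts: an object-selective rate increase only inside
# the window (stand-in for ectostriatal responses to the target image).
labels = rng.integers(0, 2, n_trials)
rates = rng.poisson(5.0, (n_trials, n_bins)).astype(float)
rates[labels == 1, window] += 1.0        # class-dependent firing in the window

feat_window = rates[:, window].mean(axis=1)  # mean rate inside the window
feat_full = rates.mean(axis=1)               # mean rate over the whole trial

def accuracy(f, y):
    """Nearest-centroid classifier, a stand-in for the paper's SVM/KNN."""
    c0, c1 = f[y == 0].mean(), f[y == 1].mean()
    pred = (np.abs(f - c1) < np.abs(f - c0)).astype(int)
    return (pred == y).mean()

print(accuracy(feat_window, labels), accuracy(feat_full, labels))
```

With the same classifier and trials, the windowed feature separates the two object categories better than the whole-trial mean, mirroring the accuracy improvement the abstract reports.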