Authors:
Bottemanne, Hugo
Univ Paris Saclay, Fac Med Paris Saclay, CESP Ctr Rech Epidemiol & Sante Populat, MOODS Team, INSERM 1018, Le Kremlin Bicetre, France
Paris Saclay Univ, Bicetre Hosp, Assistance Publ Hop Paris (AP-HP), Mood Ctr Paris Saclay, Dept Psychiat, DMU Neurosci, Le Kremlin Bicetre, France
Inst Natl Sante & Rech Med (INSERM) U1127, Paris Brain Inst (Inst Cerveau), Paris, France
Bayesian brain theory, a computational framework grounded in the principles of Predictive Processing (PP), proposes a mechanistic account of how beliefs are formed and updated. This theory assumes that the brain encodes a generative model of its environment, made up of probabilistic beliefs organized in networks, from which it generates predictions about future sensory inputs. The difference between predictions and sensory signals produces prediction errors, which are used to update belief networks. In this article, we introduce the fundamental principles of Bayesian brain theory, and show how the brain dynamics of prediction are associated with the generation and evolution of beliefs.
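As a hedged illustration of the belief-updating scheme this abstract describes, the sketch below implements a single-level Gaussian update in which a prior belief is revised by a precision-weighted prediction error; the function name, parameter names, and numerical values are illustrative assumptions, not taken from the article.

```python
# Minimal sketch: Gaussian belief update driven by a precision-weighted
# prediction error, in the spirit of predictive-processing accounts.
# All names and values are illustrative, not from the cited article.

def update_belief(mu_prior, pi_prior, observation, pi_sensory):
    """Return the posterior mean and precision of a Gaussian belief.

    mu_prior    : prior belief (mean of the generative model's prediction)
    pi_prior    : precision (inverse variance) of the prior belief
    observation : incoming sensory sample
    pi_sensory  : precision of the sensory signal
    """
    prediction_error = observation - mu_prior          # mismatch signal
    pi_post = pi_prior + pi_sensory                    # precisions add
    learning_rate = pi_sensory / pi_post               # relative reliability
    mu_post = mu_prior + learning_rate * prediction_error
    return mu_post, pi_post


if __name__ == "__main__":
    mu, pi = 0.0, 1.0                 # vague initial belief
    for obs in [1.2, 0.8, 1.1, 0.9]:  # noisy sensory inputs
        mu, pi = update_belief(mu, pi, obs, pi_sensory=4.0)
        print(f"belief mean={mu:.3f}, precision={pi:.2f}")
```

The precision-weighted learning rate captures the core intuition: reliable sensory signals move the belief a lot, unreliable ones barely at all.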
Computational neuroscience offers a valuable opportunity to understand the neural mechanisms underlying behavior. However, interpreting individual differences in these mechanisms, such as developmental differences, is less straightforward. We illustrate this challenge through studies that examine individual differences in reinforcement learning. In these studies, a computational model generates an individual-specific prediction error regressor to model activity in a brain region of interest. Individual differences in the resulting regression weight are typically interpreted as individual differences in neural coding. We first demonstrate that the absence of individual differences in neural coding is not problematic, as such differences are already captured in the individual-specific regressor. We then review how the presence of individual differences is typically interpreted as individual differences in the use of brain resources. However, through simulations, we illustrate that these differences could also stem from other factors, such as the standardization of the prediction error, individual differences in brain networks outside the region of interest, individual differences in the duration of the prediction error response, individual differences in outcome valuation, and overlooked individual differences in computational model parameters or the type of computational model. To clarify these interpretations, we provide several recommendations. In this manner we aim to advance the understanding and interpretation of individual differences in computational neuroscience.
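A minimal sketch, using simulated data and assumed parameter values, of the analysis pipeline this abstract describes: a Rescorla-Wagner model generates a subject-specific prediction-error regressor, and the regression weight of a region-of-interest signal on that regressor is estimated. The comment at the end points at how standardization alone can rescale that weight.

```python
# Minimal sketch of the pipeline discussed above: a Rescorla-Wagner model
# produces a subject-specific prediction-error regressor, and the regression
# weight of a region-of-interest signal on that regressor is estimated.
# Data, parameter values, and names are illustrative, not from the paper.
import numpy as np

def prediction_errors(rewards, alpha):
    """Rescorla-Wagner prediction errors for a single-option task."""
    v, errors = 0.0, []
    for r in rewards:
        delta = r - v            # prediction error on this trial
        errors.append(delta)
        v += alpha * delta       # value update with learning rate alpha
    return np.array(errors)

rng = np.random.default_rng(0)
rewards = rng.binomial(1, 0.7, size=100).astype(float)   # simulated outcomes
pe = prediction_errors(rewards, alpha=0.3)                # individual regressor

# Simulated ROI signal: true neural scaling of 0.5 plus noise.
roi = 0.5 * pe + rng.normal(0.0, 1.0, size=pe.size)

# Ordinary least squares: the slope is the "neural coding" weight.
X = np.column_stack([np.ones_like(pe), pe])
beta = np.linalg.lstsq(X, roi, rcond=None)[0]
print(f"estimated regression weight on the PE regressor: {beta[1]:.3f}")

# Note: standardizing pe (e.g. pe / pe.std()) rescales beta[1], illustrating
# one of the alternative explanations for individual differences listed above.
```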
Over the past two decades, considerable strides have been made in advancing neuroscience techniques, yet the translation of these advancements into clinically relevant insights for human mental health remains a challe...
In this paper I criticize a view of functional localization in neuroscience, which I call "computational absolutism" (CA). "Absolutism" in general is the view that each part of the brain should be given a single, univocal function ascription. Traditional varieties of absolutism posit that each part of the brain processes a particular type of information and/or performs a specific task. These function attributions are currently beset by physiological evidence which seems to suggest that brain areas are multifunctional: they process distinct information and perform different tasks depending on context. Many theorists take this contextual variation as inimical to successful localization, and claim that we can avoid it by changing our functional descriptions to computational descriptions. The idea is that we can have highly generalizable and predictive functional theories if we can discover a single computation performed by each area, regardless of the specific context in which it operates. I argue, drawing on computational models of perceptual area MT, that this computational version of absolutism fails to come through on its promises. In MT, the modeling field has not produced a univocal computational description, but instead a plurality of models analyzing different aspects of MT function. Moreover, CA cannot appeal to theoretical unification to solve this problem, since highly general models, on their own, neither explain nor predict what MT does in any particular context. I close by offering a perspective on neural modeling inspired by Nancy Cartwright's and Margaret Morrison's views of modeling in the physical sciences.
In recent years, the application of computational modeling in studies on age-related changes in decision making and learning has gained in popularity. One advantage of computational models is that they provide access to latent variables that cannot be directly observed from behavior. In combination with experimental manipulations, these latent variables can help to test hypotheses about age-related changes in behavioral and neurobiological measures at a level of specificity that is not achievable with descriptive analysis approaches alone. This level of specificity can in turn be beneficial to establish the identity of the corresponding behavioral and neurobiological mechanisms. In this paper, we will illustrate applications of computational methods using examples of lifespan research on risk taking, strategy selection and reinforcement learning. We will elaborate on problems that can occur when computational neuroscience methods are applied to data of different age groups. Finally, we will discuss potential targets for future applications and outline general shortcomings of computational neuroscience methods for research on human lifespan development.
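A minimal sketch, under assumed task structure and parameter values, of what "access to latent variables" can look like in practice: a learning rate that cannot be read directly from choices is recovered by fitting a simple reinforcement-learning model to simulated behavior.

```python
# Minimal sketch of recovering a latent model parameter (here, a learning
# rate) from observed choices, in the spirit of the lifespan studies above.
# Task structure, parameter values, and simulated data are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def simulate_choices(alpha, beta=5.0, n_trials=200, p_reward=(0.8, 0.2)):
    """Two-armed bandit choices from a Rescorla-Wagner + softmax agent."""
    q = np.zeros(2)
    choices, rewards = [], []
    for _ in range(n_trials):
        p_choose_0 = 1.0 / (1.0 + np.exp(-beta * (q[0] - q[1])))
        c = 0 if rng.random() < p_choose_0 else 1
        r = float(rng.random() < p_reward[c])
        q[c] += alpha * (r - q[c])        # value update with learning rate
        choices.append(c)
        rewards.append(r)
    return choices, rewards

def neg_log_likelihood(alpha, choices, rewards, beta=5.0):
    """Negative log-likelihood of the observed choices under a given alpha."""
    q, nll = np.zeros(2), 0.0
    for c, r in zip(choices, rewards):
        p_choose_0 = 1.0 / (1.0 + np.exp(-beta * (q[0] - q[1])))
        nll -= np.log(p_choose_0 if c == 0 else 1.0 - p_choose_0)
        q[c] += alpha * (r - q[c])
    return nll

choices, rewards = simulate_choices(alpha=0.3)
grid = np.linspace(0.01, 0.99, 99)
best = grid[np.argmin([neg_log_likelihood(a, choices, rewards) for a in grid])]
print(f"true learning rate 0.30, recovered {best:.2f}")
```

Comparing such fitted parameters across age groups is one way the latent-variable approach described above is put to work, with the caveats about group differences in model fit that the paper discusses.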
ISBN (Print): 9781509063673
Several interconnected brain circuits, including the cerebellum, cerebral cortex, thalamus and basal ganglia, process motor information in many species, including mammals. Interconnection between the basal ganglia and cerebellum through the thalamus and cortex may influence the pathways involved in basal ganglia processing. Malfunctions in the neural circuitry of the basal ganglia, driven by modifications in the dopaminergic system, are responsible for an array of motor disorders and milder cognitive issues in Parkinson's disease. Both the basal ganglia and cerebellum receive input from and send output to the cerebral cortex, and these structures influence motor and cognitive operations through the cerebellar-thalamo-basal ganglia cortical circuit. This interconnected circuit (basal ganglia-cerebellum) helps to clarify the role of the cerebellum in motor dysfunction during Parkinson's disease. To develop models of unsupervised learning as in brain circuits, we modelled the subthalamic nucleus, the internal and external segments of the globus pallidus, and fast-spiking striatal interneurons and medium spiny neurons in the striatum using the Adaptive Exponential Integrate-and-Fire model. Simulations highlight the correlation between GPe firing and dopamine level, and the changes induced during simulated Parkinson's disease. Such models are crucial for understanding motor processing and for developing spiking-based deep learning algorithms.
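The sketch below is a generic implementation of the Adaptive Exponential Integrate-and-Fire (AdEx) equations named in this abstract, integrated with forward Euler; the parameter values are standard illustrative ones, not those used for the basal ganglia nuclei in the paper.

```python
# Minimal sketch of the Adaptive Exponential Integrate-and-Fire (AdEx) model
# named in the abstract, integrated with forward Euler. Parameter values are
# generic textbook values, not the ones fitted to the basal ganglia nuclei.
import numpy as np

# Standard AdEx parameters (illustrative)
C, g_L, E_L = 281.0, 30.0, -70.6          # pF, nS, mV
V_T, Delta_T = -50.4, 2.0                  # mV
a, tau_w, b = 4.0, 144.0, 80.5             # nS, ms, pA
V_reset, V_peak = -70.6, 20.0              # mV

def simulate_adex(I_ext, dt=0.1, t_max=500.0):
    """Membrane trace and spike times (ms) for a constant input I_ext in pA."""
    steps = int(t_max / dt)
    V, w = E_L, 0.0
    trace, spikes = [], []
    for step in range(steps):
        dV = (-g_L * (V - E_L)
              + g_L * Delta_T * np.exp((V - V_T) / Delta_T)   # spike initiation
              - w + I_ext) / C
        dw = (a * (V - E_L) - w) / tau_w                       # adaptation
        V += dt * dV
        w += dt * dw
        if V >= V_peak:                    # spike: reset and increase adaptation
            V = V_reset
            w += b
            spikes.append(step * dt)
        trace.append(V)
    return np.array(trace), spikes

_, spike_times = simulate_adex(I_ext=800.0)
print(f"{len(spike_times)} spikes in 500 ms")
```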
Objective: Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods: Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results: Building on these standard practices, model-sharing sites and tools have been developed that fit into several categories: 1) standardized neural simulators; 2) shared computational resources; 3) declarative model descriptors, ontologies, and standardized annotations; and 4) model-sharing repositories and sharing standards. Conclusion: A number of complementary innovations have been proposed to enhance sharing, transparency, and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation, and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance: Model management will become increasingly important as multiscale models become larger, more detailed, and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment.
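A minimal sketch of two of the practices listed above (code modularity and provenance via version control); the file layout, names, and recorded fields are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: keep model parameters separate from simulation code, and
# record the exact code version alongside the results. Names are illustrative.
import json
import subprocess

# parameters section (in practice, a separate, documented module or file)
PARAMS = {
    "n_cells": 100,        # number of simulated neurons
    "dt_ms": 0.1,          # integration time step
    "g_leak_nS": 30.0,     # leak conductance
}

def current_commit():
    """Return the git commit hash of the code that produced the results."""
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        return "unknown (not run from a git checkout)"

def save_run(results, path="results.json"):
    """Write results together with parameters and code version for provenance."""
    record = {"commit": current_commit(), "params": PARAMS, "results": results}
    with open(path, "w") as f:
        json.dump(record, f, indent=2)

if __name__ == "__main__":
    save_run({"mean_rate_hz": 12.3})   # placeholder result for illustration
```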
On the basis of a computational and neurodynamical model, we investigate a cognitive impairment in stroke patients termed visual neglect. The model is based on the "biased competition hypothesis" and is structured in several network modules which are related to the dorsal and ventral pathway in the visual cortex. By damaging the model, visual neglect can be simulated and explained as an unbalanced neurodynamical competition. We predict that acquiring knowledge of objects can increase the frequency of saccades to previously ignored object parts. This prediction is confirmed in a single case study by monitoring eye movements of a neglect patient. (C) 2002 Published by Elsevier Science B.V.
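A minimal sketch, not the authors' network, of how "unbalanced neurodynamical competition" can be expressed: two mutually inhibiting rate units compete, and scaling down one unit's input (the assumed "damage" factor) lets the intact side win.

```python
# Minimal sketch of biased competition between two pools representing left
# and right hemifield objects; "damage" scales down one pool's input, so the
# intact pool wins the competition, mimicking neglect. All values illustrative.

def compete(input_left, input_right, damage_left=1.0, steps=2000, dt=0.01):
    """Mutually inhibiting rate units; returns final activities (left, right)."""
    r_left, r_right = 0.0, 0.0
    w_inh = 2.0                                    # strength of mutual inhibition
    for _ in range(steps):
        drive_left = damage_left * input_left - w_inh * r_right
        drive_right = input_right - w_inh * r_left
        r_left += dt * (-r_left + max(drive_left, 0.0))    # rectified rate dynamics
        r_right += dt * (-r_right + max(drive_right, 0.0))
    return r_left, r_right

print("intact: ", compete(1.0, 1.0))                    # balanced competition
print("damaged:", compete(1.0, 1.0, damage_left=0.4))   # damaged side loses
```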