Tinnitus is a common disorder that often complicates hearing loss. Its mechanisms are incompletely understood. Current theories proposing pathophysiology from the ear to the cortex cannot individually - or collectively - explain the range of experimental evidence available. We propose a new framework, based on predictive coding, in which spontaneous activity in the subcortical auditory pathway constitutes a 'tinnitus precursor' which is normally ignored as imprecise evidence against the prevailing percept of 'silence'. Extant models feature within this framework as contributory mechanisms that act to increase either the intensity of the precursor or its precision. If precision (i.e., postsynaptic gain) rises sufficiently then tinnitus is perceived. Perpetuation arises through focused attention, which further increases the precision of the precursor, and resetting of the default prediction to expect tinnitus.
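As an illustration of the precision-weighting idea at the core of this framework, the minimal sketch below combines a default prediction of 'silence' with a fixed-intensity precursor whose precision (gain) is varied; the threshold, intensity values, and function names are assumptions for illustration, not quantities from the paper.

```python
# Minimal sketch of precision-weighted evidence integration (illustrative only).
# All numbers and names are assumptions, not taken from the paper.

def posterior_percept(prior_mean, prior_precision, evidence_mean, evidence_precision):
    """Precision-weighted combination of a prior percept and sensory evidence."""
    total_precision = prior_precision + evidence_precision
    return (prior_precision * prior_mean +
            evidence_precision * evidence_mean) / total_precision

PRIOR_SILENCE = 0.0        # default prediction: 'silence'
PRECURSOR_LEVEL = 1.0      # fixed intensity of the spontaneous 'tinnitus precursor'
PERCEPT_THRESHOLD = 0.5    # arbitrary threshold for a percept to emerge

for gain in (0.1, 0.5, 1.0, 5.0):   # rising precision (postsynaptic gain) of the precursor
    percept = posterior_percept(PRIOR_SILENCE, 1.0, PRECURSOR_LEVEL, gain)
    heard = "tinnitus" if percept > PERCEPT_THRESHOLD else "silence"
    print(f"gain={gain}: percept={percept:.2f} -> {heard}")
```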
Recent research has shown complementary engagement of the hippocampus and medial prefrontal cortex (mPFC) in encoding and retrieving associations based on pre-existing or experimentally-induced schemas, such that the latter supports schema-congruent information whereas the former is more engaged for incongruent or novel associations. Here, we attempted to explore some of the boundary conditions in the relative involvement of those structures in short-term memory for visual associations. The current literature is based primarily on intentional evaluation of schema-target congruence and on study-test paradigms with relatively long delays between learning and retrieval. We used a continuous recognition paradigm to investigate hippocampal and mPFC activation to first and second presentations of scene-object pairs as a function of semantic congruence between the elements (e.g., beach-seashell versus schoolyard-lamp). All items were identical at first and second presentation and the context scene, which was presented 500 ms prior to the appearance of the target object, was incidental to the task, which required a recognition response to the central target only. Very short lags (2-8 intervening stimuli) occurred between presentations. Encoding the targets with congruent contexts was associated with increased activation in visual cortical regions at initial presentation and faster response time at repetition, but we did not find enhanced activation in mPFC relative to incongruent stimuli at either presentation. We did observe enhanced activation in the right anterior hippocampus, as well as regions in visual and lateral temporal and frontal cortical regions, for the repetition of incongruent scene-object pairs. This pattern demonstrates rapid and incidental effects of schema processing in hippocampal, but not mPFC, engagement during continuous recognition. (C) 2016 Elsevier Ltd. All rights reserved.
It is well known that Shannon's rate-distortion function (RDF) in the colored quadratic Gaussian (QG) case can be parametrized via a single Lagrangian variable (the water level in the reverse water filling solution). In this paper, we show that the symmetric colored QG multiple description (MD) RDF in the case of two descriptions can be parametrized in the spectral domain via two Lagrangian variables, which control the tradeoff between the side distortion, the central distortion, and the coding rate. This spectral-domain analysis is complemented by a time-domain scheme-design approach: we show that the symmetric colored QG MD RDF can be achieved by combining ideas of delta-sigma modulation and differential pulse-code modulation. In particular, two source prediction loops, one for each description, are embedded within a common noise-shaping loop, whose parameters are explicitly found from the spectral-domain characterization.
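For context on the single-Lagrangian baseline that the two-variable MD characterization generalizes, here is a minimal sketch of the reverse water-filling parametrization of the colored QG RDF; the AR(1) example spectrum and the frequency grid are assumptions for illustration.

```python
import numpy as np

# Sketch of the classical reverse water-filling parametrization of the colored
# quadratic-Gaussian RDF: D(w) = min(theta, S(w)) and R(w) = max(0, 0.5*log(S(w)/theta)),
# averaged over frequency. The spectrum below is an arbitrary AR(1) example.

def rdf_reverse_waterfill(spectrum, water_level):
    """Return (distortion, rate) for a given water level theta."""
    d_per_bin = np.minimum(water_level, spectrum)
    distortion = d_per_bin.mean()
    rate = 0.5 * np.log2(np.maximum(spectrum / d_per_bin, 1.0)).mean()
    return distortion, rate

omega = np.linspace(0, np.pi, 512, endpoint=False)
spectrum = 1.0 / (1.0 + 0.9 ** 2 - 2 * 0.9 * np.cos(omega))  # AR(1) example spectrum

for theta in (0.05, 0.2, 1.0):
    D, R = rdf_reverse_waterfill(spectrum, theta)
    print(f"theta={theta}: D={D:.3f}, R={R:.3f} bits/sample")
```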
ISBN (Print): 9781467399616
In this paper, we propose a new edge model for edge adaptive graph-based transforms (EA-GBTs) in video compression. In particular, we consider step and ramp edge models to design graphs used for defining transforms, and compare their performance on coding intra and inter predicted residual blocks. In order to reduce the signaling overhead of block-adaptive coding, a new edge coding method is introduced for the ramp model. Our experimental results show that the proposed methods outperform classical DCT-based encoding and that ramp edge models provide better performance than step edge models for intra predicted residuals.
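To make the idea of an edge-adaptive graph-based transform concrete, the sketch below builds a 1-D path-graph Laplacian, weakens the link that crosses a detected edge, and uses the Laplacian eigenvectors as the transform; the weight value, edge position, and example residual are assumptions, not the step or ramp models proposed here.

```python
import numpy as np

# Minimal 1-D sketch of an edge-adaptive graph-based transform (GBT): the link weight
# across a detected edge is weakened, and the Laplacian eigenvectors form the transform.

def path_graph_gbt(n, edge_pos, edge_weight=0.1):
    """Laplacian eigenbasis of an n-node path graph with one weakened link."""
    weights = np.ones(n - 1)
    weights[edge_pos] = edge_weight          # weak correlation across the edge
    L = np.zeros((n, n))
    for i, w in enumerate(weights):
        L[i, i] += w
        L[i + 1, i + 1] += w
        L[i, i + 1] -= w
        L[i + 1, i] -= w
    _, eigvecs = np.linalg.eigh(L)           # columns of eigvecs = GBT basis
    return eigvecs

residual = np.array([1.0, 1.1, 0.9, 1.0, -2.0, -2.1, -1.9, -2.0])  # step-like residual
gbt = path_graph_gbt(len(residual), edge_pos=3)
coeffs = gbt.T @ residual                    # transform coefficients
print(np.round(coeffs, 2))                   # energy compacts into a few coefficients
```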
ISBN (Print): 9781450340694
The objective of technology-assisted review ("TAR") is to find as much relevant information as possible with reasonable effort. Quality is a measure of the extent to which a TAR method achieves this objective, while reliability is a measure of how consistently it achieves an acceptable result. We are concerned with how to define, measure, and achieve high quality and high reliability in TAR. When quality is defined using the traditional goal-post method of specifying a minimum acceptable recall threshold, the quality and reliability of a TAR method are both, by definition, equal to the probability of achieving the threshold. Assuming this definition of quality and reliability, we show how to augment any TAR method to achieve guaranteed reliability, for a quantifiable level of additional review effort. We demonstrate this result by augmenting the TAR method supplied as the baseline model implementation for the TREC 2015 Total Recall Track, measuring reliability and effort for 555 topics from eight test collections. While our empirical results corroborate our claim of guaranteed reliability, we observe that the augmentation strategy may entail disproportionate effort, especially when the number of relevant documents is low. To address this limitation, we propose stopping criteria for the model implementation that may be applied with no additional review effort, while achieving empirical reliability that compares favorably to the provably reliable method. We further argue that optimizing reliability according to the traditional goal-post method is inconsistent with certain subjective aspects of quality, and that optimizing a Taguchi quality loss function may be more apt.
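The contrast drawn between goal-post quality and a Taguchi-style loss can be sketched numerically as below; the quadratic loss form, the 0.75 recall target, and the constant k are assumptions for illustration rather than the specific loss function argued for in the paper.

```python
# Sketch contrasting the goal-post view of quality with a Taguchi-style loss.
# The exact functional form and the recall target are illustrative assumptions.

RECALL_TARGET = 0.75

def goal_post_quality(recall, target=RECALL_TARGET):
    """Quality = 1 if the recall threshold is met, else 0 (pass/fail)."""
    return 1.0 if recall >= target else 0.0

def taguchi_loss(recall, target=RECALL_TARGET, k=1.0):
    """Loss grows quadratically with the shortfall below the target."""
    shortfall = max(0.0, target - recall)
    return k * shortfall ** 2

for r in (0.74, 0.60, 0.90):
    print(f"recall={r:.2f}  goal-post quality={goal_post_quality(r):.0f}  "
          f"Taguchi loss={taguchi_loss(r):.4f}")
```

Under the goal-post view, recall of 0.74 and 0.60 are equally bad failures; the quadratic loss instead penalizes the larger shortfall much more, which is the subjective distinction the paper argues the traditional threshold ignores.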
There is considerable evidence that prediction and attention aid perception. However, little is known about the possible neural mechanisms underlying the impact of prediction and unconscious attention on perception, probably due to the relative neglect of unconscious attention in scholarly literature. Here, we addressed this issue using functional magnetic resonance imaging (fMRI). We adopted a variant of the double-cue paradigm, in which prediction and attention were factorially manipulated by two separate cues (prediction and attention cues). To manipulate consciousness, the attention cues were presented subliminally and supraliminally. Behaviorally, we reported an unconscious-attended effect in the predictable trials and a conscious-attended effect in the unpredictable trials. On the neural level, it was shown that prediction and unconscious attention interacted in the left dorsolateral prefrontal cortex (dlPFC). More specifically, there was a significantly decreased activation in dlPFC for predictable relative to unpredictable stimuli in the unconscious attended trials, but not in the unconscious-unattended trials. This result suggests that prediction and unconscious attention operate synergistically to facilitate stimulus processing. This is further corroborated by the subsequent functional connectivity analysis, which revealed increased functional connectivity between the left dlPFC and the premotor cortex for predictable versus unpredictable stimuli in the unconscious-attended trials. (C) 2016 Elsevier Inc. All rights reserved.
This paper features two studies confirming a lasting impact of first learning on how subsequent experience is weighted in early relevance-filtering processes. In both studies participants were exposed to sequences of sound that contained a regular pattern on two different timescales. Regular patterning in sound is readily detected by the auditory system and used to form “prediction models” that define the most likely properties of sound to be encountered in a given context. The presence and strength of these prediction models is inferred from changes in automatically elicited components of auditory evoked potentials. Both studies employed sound sequences that contained both a local and longer-term pattern. The local pattern was defined by a regular repeating pure tone occasionally interrupted by a rare deviating tone (p = 0.125) that was physically different (a 30 ms vs. 60 ms duration difference in one condition and a 1000 Hz vs. 1500 Hz frequency difference in the other). The longer-term pattern was defined by the rate at which the two tones alternated probabilities (i.e., the tone that was first rare became common and the tone that was first common became rare). There was no task related to the tones and participants were asked to ignore them while focussing attention on a movie with subtitles. Auditory-evoked potentials revealed long lasting modulatory influences based on whether the tone was initially encountered as rare and unpredictable or common and predictable. The results are interpreted as evidence that probability (or indeed predictability) assigns a differential information-value to the two tones that in turn affects the extent to which prediction models are updated and imposed. These effects are exposed for both common and rare occurrences of the tones. The studies contribute to a body of work that reveals that probabilistic information is not faithfully represented in these early evoked potentials and instead exposes that predictability (or conversely
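The two-timescale sequence structure described above can be sketched as below; the block length, random seed, and tone labels are assumptions for illustration and do not reproduce the exact timing of the studies.

```python
import random

# Sketch of the two-timescale structure: within each block one tone is common and the
# other is rare (p = 0.125), and the two tones swap roles between blocks.

def make_sequence(n_blocks=4, tones_per_block=240, p_deviant=0.125,
                  tones=("30ms", "60ms"), seed=0):
    rng = random.Random(seed)
    sequence = []
    for block in range(n_blocks):
        common, rare = tones if block % 2 == 0 else tones[::-1]  # swap roles each block
        for _ in range(tones_per_block):
            sequence.append(rare if rng.random() < p_deviant else common)
    return sequence

seq = make_sequence()
print(seq[:12])                       # start of the first block (mostly the common tone)
print(seq.count("60ms") / len(seq))   # overall the two tones occur about equally often
```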
ISBN (Print): 9781509003044
Smartphones with various embedded sensors and wirelessly connected external sensors will enable new applications across a wide variety of domains. Continuous or long-term sensing, processing, and communication of sensor data will consume a significant amount of energy on resource-constrained smartphones. Compression techniques, including predictive coding (PC) and compressed sensing (CS), are promising ways of minimizing energy consumption. A number of compression-based proposals are available to improve energy efficiency in smartphones. Most of these proposals are sensor- and application-specific, and their sampling rates may not be adaptive. This article proposes an energy-efficient data gathering (DG) framework based on compression techniques, in particular as a component of a middleware for smartphones. The framework adaptively selects the best possible compression technique (CT) for sensor DG from a list of CTs, using the application's requirements and contexts. This work also presents a CS-based PC (CPC) scheme to minimize the learning cost in PC. The framework includes the CPC along with other CTs and exploits a context-aware adaptive sampling rate to improve energy efficiency. An initial evaluation of the framework using two real datasets highlights the potential of both the framework and the CPC.
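As a minimal illustration of the generic predictive coding building block that such a framework would choose among, the sketch below transmits first-order prediction residuals for a sensor stream; the predictor, example readings, and function names are assumptions, and this is not the CPC scheme proposed in the article.

```python
import numpy as np

# Minimal sketch of first-order predictive coding for a sensor stream: transmit the
# residual between each sample and its predicted (previous) value. Residuals are
# typically small and compress better than the raw samples.

def pc_encode(samples):
    """Predict each sample as the previous one; return the residuals."""
    residuals = np.empty_like(samples)
    residuals[0] = samples[0]
    residuals[1:] = samples[1:] - samples[:-1]
    return residuals

def pc_decode(residuals):
    """Reconstruct the stream by accumulating residuals."""
    return np.cumsum(residuals)

accel = np.array([0.02, 0.03, 0.03, 0.05, 0.04, 0.04, 0.06])  # example sensor readings
res = pc_encode(accel)
assert np.allclose(pc_decode(res), accel)    # lossless round trip
print(res)
```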
ISBN (Print): 9781509059669
In many cases, view synthesis prediction (VSP) for 3D video coding suffers from inaccurate depth maps (DMs) caused by non-reliable point correspondences in low-textured areas. In addition, occlusion handling is a major issue if prediction is performed from one reference view. In this paper, we address VSP for non-coplanar 3D video sequences modified by a linear DM enhancement model. In this model, points close to edges in the DM are treated as key points, on which depth estimation is assumed to work well. Other points are assumed to have unreliable depth information, and their depth is modelled by the solution of the Laplace equation. As a further preprocessing stage for DMs, the influence of guided image filtering of the depth maps is investigated. Moreover, we present an inpainting method for occluded areas, which is based on the solution of the Laplace equation and the corresponding depth information. Combining these techniques and applying them to VSP for 3D video sequences captured by an arc-arranged camera array, our approach outperforms state-of-the-art models in coding efficiency. Experimental results show bit rate savings of up to 4.3% compared to HEVC-3D for coding two views and corresponding depth in an All Intra encoder configuration.
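The way unreliable depth values can be filled in by solving the Laplace equation can be sketched with a simple Jacobi iteration, as below; the toy depth map, reliability mask, and iteration count are assumptions for illustration and do not implement the paper's linear enhancement model or its occlusion inpainting.

```python
import numpy as np

# Sketch of depth interpolation by solving the Laplace equation with Jacobi iterations:
# pixels marked reliable (e.g., near depth edges) are kept fixed; all other pixels are
# repeatedly replaced by the average of their four neighbours until convergence.

def laplace_fill(depth, reliable, n_iter=2000):
    d = depth.astype(float)
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
                      np.roll(d, 1, 1) + np.roll(d, -1, 1))
        d = np.where(reliable, depth, avg)    # only unreliable pixels are updated
    return d

depth = np.zeros((8, 8))
reliable = np.zeros((8, 8), dtype=bool)
depth[:, 0], reliable[:, 0] = 1.0, True       # reliable depth on the left border
depth[:, -1], reliable[:, -1] = 4.0, True     # and on the right border
filled = laplace_fill(depth, reliable)
print(np.round(filled[4], 2))                 # smooth ramp between the fixed columns
```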
The problem of whether and how information is integrated across hierarchical brain networks embodies a fundamental tension in contemporary cognitive neuroscience, and by extension, cognitive neuropsychiatry. Indeed, t...