During the stationary portion of a neuron's spiking response to a stimulus, the stimulus could be coded in the average rate and, more elaborately, in the statistics of the sequence of interspike intervals. We use information processing theory to explicitly define when interval coding occurs and to quantify the coding gain beyond rate coding provided by the interval code. We explicitly find the interval distribution commensurate with average rate coding. When we analyzed optomotor neural responses recorded from the crayfish eye, we found little interval coding despite stimulus-induced changes from a unimodal to a bimodal interval distribution. (C) 2004 Elsevier B.V. All rights reserved.
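The comparison between interval coding and pure rate coding can be illustrated with a small sketch. Assuming (as one common simplification, not necessarily the authors' construction) that the interval distribution commensurate with rate coding alone is the exponential distribution of a Poisson process at the same average rate, the extra information carried by the interval statistics can be gauged by how far the empirical interspike-interval histogram departs from that baseline, for example via a Kullback-Leibler divergence. The function name, bin width, and Poisson baseline below are illustrative assumptions.

```python
import numpy as np

def isi_coding_gain(spike_times, bin_width=0.005):
    """Rough sketch: compare the empirical ISI histogram with the
    exponential ISI distribution implied by the average rate alone
    (a Poisson baseline, used here only for illustration)."""
    isis = np.diff(np.sort(spike_times))
    rate = 1.0 / isis.mean()                      # average firing rate (1/s)

    edges = np.arange(0.0, isis.max() + bin_width, bin_width)
    hist, _ = np.histogram(isis, bins=edges)
    p = hist / hist.sum()                         # empirical ISI probabilities

    centers = 0.5 * (edges[:-1] + edges[1:])
    q = np.exp(-rate * centers) * bin_width * rate
    q /= q.sum()                                  # exponential baseline on the same bins

    mask = p > 0                                  # KL divergence D(p || q) in bits
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# Example: a Poisson spike train should give a coding gain near zero bits.
rng = np.random.default_rng(0)
poisson_train = np.cumsum(rng.exponential(0.02, size=5000))
print(isi_coding_gain(poisson_train))
```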
Author: PIPER, J
Medical Research Council, Clinical and Population Cytogenetics Unit, Western General Hospital, Edinburgh EH4 2XU, Scotland
Interval coding of binary images provides a representation in which operations such as AND, OR, XOR, dilation and erosion can be efficiently implemented on a serial computer. Using these procedures, various alternative schemes for efficiently implementing Hilditch's skeletonisation method are investigated.
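Interval coding here means representing each image row as a sorted list of runs of foreground pixels, so Boolean operations reduce to a sweep over two interval lists. The sketch below shows AND and OR for a single row; the function names and the half-open (start, end) run convention are my own choices, not the paper's.

```python
def row_and(a, b):
    """Intersect two sorted lists of half-open foreground runs (start, end)."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        lo = max(a[i][0], b[j][0])
        hi = min(a[i][1], b[j][1])
        if lo < hi:
            out.append((lo, hi))
        # advance whichever run finishes first
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

def row_or(a, b):
    """Union of two sorted lists of half-open foreground runs (start, end)."""
    out = []
    for lo, hi in sorted(a + b):
        if out and lo <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], hi))
        else:
            out.append((lo, hi))
    return out

# One image row expressed as foreground runs in pixel coordinates:
print(row_and([(0, 4), (10, 15)], [(2, 12)]))   # [(2, 4), (10, 12)]
print(row_or([(0, 4), (10, 15)], [(2, 12)]))    # [(0, 15)]
```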
Interval coding of binary images provides a representation in which the mathematical morphology operations of dilation and erosion by an arbitrary structuring element can be naturally and efficiently implemented on a serial computer. Conventional 4- or 8-neighbour dilation and erosion are then only special cases. Euclidean mapping is investigated as an example.
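With a row-wise interval representation, dilation by an arbitrary structuring element can be done by shifting each image run by every run of the structuring element (a row offset plus a column interval) and merging the results; erosion follows by duality. The 2-D sketch below uses a dictionary mapping row index to runs, which is my own representation for illustration, not the paper's implementation.

```python
from collections import defaultdict

def merge_runs(runs):
    """Merge overlapping or touching half-open runs (start, end)."""
    out = []
    for lo, hi in sorted(runs):
        if out and lo <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], hi))
        else:
            out.append((lo, hi))
    return out

def dilate(image, se):
    """Dilate an interval-coded image by an interval-coded structuring element.
    Both are dicts: row index -> list of (start, end) runs; the structuring
    element's rows and columns are offsets from its origin."""
    result = defaultdict(list)
    for r, runs in image.items():
        for dr, se_runs in se.items():
            for lo, hi in runs:
                for slo, shi in se_runs:
                    # Minkowski sum of one image run and one SE run
                    result[r + dr].append((lo + slo, hi + shi - 1))
    return {r: merge_runs(v) for r, v in result.items()}

# A single run dilated by a 3x3 box (offsets -1..1 in both directions):
img = {5: [(10, 13)]}
box = {-1: [(-1, 2)], 0: [(-1, 2)], 1: [(-1, 2)]}
print(dilate(img, box))   # rows 4, 5 and 6 each hold the run (9, 14)
```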
In this article, we use interval mathematics and directed rounding to establish a framework for interval quantization. The function F_1d, which maps x to an interval [x1, x2] such that x1 is the largest floating-point number less than or equal to x and x2 is the smallest floating-point number greater than or equal to x, is used to establish the sampling interval and the levels of interval quantization. We show that the interval quantization levels N_j represent the classical quantization levels n_j, are comparable according to the Kulisch-Miranker order, and are pairwise disjoint. If an interval signal X[n] intersects a quantization interval level N_j, then the quantized signal is X_q[n] = N_j. Moreover, for the interval quantization error E[n] = X_q[n] - X[n], an estimate is given in terms of the quantization step and the number of levels. We also define interval coding, in which the number of required bits depends on the number of quantization levels. Finally, an example shows that the interval quantization levels represent the classical quantization levels and that the interval error represents the classical quantization error.
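The enclosing-interval map described here (x1 the largest float not above x, x2 the smallest float not below x) can be written directly with math.nextafter when the input is exact (e.g. a Fraction). The uniform step, the nearest-level rule, and the names enclose/quantize below are my own minimal reading of the abstract, kept only to show how a sample is assigned an interval quantization level and how an interval error falls out.

```python
import math
from fractions import Fraction

def enclose(x):
    """Smallest machine interval [x1, x2] containing an exact value x:
    x1 is the largest float <= x, x2 the smallest float >= x."""
    f = float(x)                                    # round-to-nearest approximation
    if Fraction(f) == Fraction(x):
        return (f, f)
    if Fraction(f) < Fraction(x):
        return (f, math.nextafter(f, math.inf))
    return (math.nextafter(f, -math.inf), f)

def quantize(sample, step=Fraction(1, 10)):
    """Assign an interval sample (lo, hi) to the interval quantization level
    N_j that encloses the nearest classical level n_j = j * step."""
    mid = Fraction(sample[0]) + (Fraction(sample[1]) - Fraction(sample[0])) / 2
    j = round(mid / step)
    return enclose(j * step)

x = enclose(Fraction(1, 3))                 # exact sample enclosed in floats
xq = quantize(x)                            # interval level around 0.3
# Interval quantization error X_q - X (plain float subtraction, illustration only):
err = (xq[0] - x[1], xq[1] - x[0])
print(xq, err)
```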
This paper describes two algorithms for skeletonization of 2-D binary images, each of which explicitly separates the two major aspects of skeletonization: 1) the identification of points critical to shape representation and 2) the identification of further points necessary to preserve homotopy. Sets of points critical to shape representation are found by eroding the original image I with a nested sequence of structuring elements E(i), where E(0) = {O} and E(i) ⊂ E(i+1), corresponding to a generalized distance transform. In a generalization of the "morphological skeleton," at each iteration the set of shape-related points M(i) is defined to be (I ⊖ E(i-1)) \ ((I ⊖ E(i)) ⊕ D), where ⊖ denotes erosion, ⊕ denotes dilation, and D is a suitable small structuring element. Points needed in addition to those of M(i), in order to preserve homotopy, may be found at each iteration by a search restricted to the "shell" (I ⊖ E(i-1)) \ (I ⊖ E(i)). By choosing appropriate {E(i)} and D, either algorithm is capable of producing a variety of skeletons corresponding to different distance functions. It is proved that if E(i) ⊕ D ⊆ E(i+1), then the original image can be reconstructed from the skeleton. In the case of the first algorithm, there are few restrictions on the set of structuring elements. It uses a simple search strategy to find points whose removal would alter homotopy. Examples of its application illustrate shape, connectivity, thinness, and image reconstruction aspects of skeletonization. The second, faster, algorithm has a more constructional approach to finding points necessary for preserving homotopy, which limits it to a more restricted set of structuring elements than the first algorithm. However, it may still be used with a variety of distance functions, including "city block," "chessboard," and "octagon" distances and a set of approximations to Euclidean distance where precision may be traded against computational cost. "Interval coding" of the binary images is used, resulting in computational efficiency.
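The shape-related sets M(i) = (I ⊖ E(i-1)) \ ((I ⊖ E(i)) ⊕ D) can be reproduced with off-the-shelf morphology routines when E(i) is taken to be i iterations of a 3x3 box (chessboard distance) and D is the same box. The sketch below uses scipy.ndimage rather than the paper's interval-coded implementation, and it deliberately omits the homotopy-preserving points; those choices, and the helper name shape_subsets, are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def shape_subsets(image, n_iter=10):
    """Shape-related subsets M(i): (I eroded by E(i-1)) minus
    ((I eroded by E(i)) dilated by D), with E(i) = i-fold 3x3 box
    erosion (chessboard distance) and D = 3x3 box.
    Homotopy-preserving points are NOT added here."""
    box = np.ones((3, 3), dtype=bool)           # D, and the generator of E(i)
    subsets = []
    eroded_prev = image.astype(bool)            # I eroded by E(0) = {O} is I itself
    for i in range(1, n_iter + 1):
        if not eroded_prev.any():
            break
        eroded_cur = binary_erosion(image, structure=box, iterations=i)
        m_i = eroded_prev & ~binary_dilation(eroded_cur, structure=box)
        subsets.append(m_i)
        eroded_prev = eroded_cur
    return subsets

# The union of the M(i) gives the (not necessarily connected) morphological skeleton:
img = np.zeros((20, 30), dtype=bool)
img[3:17, 4:26] = True                          # a filled rectangle
skeleton = np.logical_or.reduce(shape_subsets(img))
print(int(skeleton.sum()), "skeleton points")
```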
1. Average entropy, percent total entropy, and the variability of entropy (standard deviation, coefficient of variation) were calculated on the basis of inequality test techniques that encode the serial relationships of 2–5 neuronal action potential intervals as a series of symbols. 2. These entropy measures correlated with the number of patterns (symbol groups) whose probability of occurrence changed in a statistically significant manner in response to ethanol injections. 3. The overall 'message,' or at least the change in the 'message' induced by ethanol injections, seems to have been encoded by the relative changes in the probability of occurrence of certain quadgrams (i.e. a grouping of four symbols based on the serial order relationship of five intervals). The ethanol-induced 'message' may be based on the combination of certain quadgrams that increase in incidence and those that decrease in incidence (i.e. the quadgrams whose incidence changes in opposite directions may be 'linked' in some way). This suggests some higher-order pattern analysis or 'byte' processing by neurons in the central nervous system.
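The inequality-test idea, turning the serial relationship between successive interspike intervals into symbols and then examining n-grams of those symbols, can be sketched as follows. The two-symbol alphabet ('<' versus '>'), the quadgram grouping, and the plain Shannon entropy are my own simplifications of the measures listed in the abstract.

```python
import math
import random
from collections import Counter

def inequality_symbols(intervals):
    """Encode the serial relationship of successive intervals:
    '<' if the next interval is longer, '>' otherwise."""
    return ['<' if b > a else '>' for a, b in zip(intervals, intervals[1:])]

def ngram_entropy(symbols, n=4):
    """Shannon entropy (bits) of the n-gram (e.g. quadgram) distribution."""
    grams = Counter(tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1))
    total = sum(grams.values())
    return -sum((c / total) * math.log2(c / total) for c in grams.values())

# Five intervals give four symbols, i.e. one quadgram, as in the abstract's scheme;
# a longer synthetic interval train is used here so the histogram is non-trivial.
random.seed(1)
isis = [random.expovariate(50.0) for _ in range(2000)]
symbols = inequality_symbols(isis)
print(ngram_entropy(symbols, n=4))   # at most 4 bits for a binary alphabet
```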
There are essentially two approaches to using the Ziv-Lempel code for data compression. Improvements to the Ziv-Lempel code that can be regarded as combinations of the two approaches have already been proposed. This paper shows that carrying out the combination not only yields their advantages but also reveals a structure of the Ziv-Lempel code that realizes more efficient coding. Based on the derived structure, a precise data compression method is proposed, corresponding to an improvement of the LZ77 method. In contrast to most previous methods, the parameter corresponding to the length of the reference substring is encoded first as the pointer information for the substrings; the result is then used to selectively adjust the coding of the position of the reference substring. The proposed method has among the highest compression ability of the modifications of the Ziv-Lempel code.
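The length-first idea, emitting the match length before the match position so that the position code can later be adapted to it, can be shown with a toy LZ77-style tokenizer. The greedy search, window size, and token layout below are illustrative choices, not the paper's coder, and no entropy coding of the fields is attempted.

```python
def lz77_tokens(data, window=4096, min_match=3):
    """Toy LZ77 tokenizer emitting (length, distance) pairs with the length
    first, or ('lit', byte) for literals.  Greedy longest-match search;
    the fields are left unencoded in this sketch."""
    tokens, i = [], 0
    while i < len(data):
        best_len, best_dist = 0, 0
        start = max(0, i - window)
        for j in range(start, i):                 # brute-force match search
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_dist = k, i - j
        if best_len >= min_match:
            tokens.append((best_len, best_dist))  # length first, then position
            i += best_len
        else:
            tokens.append(('lit', data[i]))
            i += 1
    return tokens

# Three literals, one overlapping match of length 8 at distance 3, one literal:
print(lz77_tokens(b"abcabcabcabd"))
```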
Studies of dyadic interaction often examine infants' social exchanges with their caregivers in settings that constrain the physical properties of the exchange (e.g., infant posture, fixed seating location for infants and adults). Methodological decisions about the physical arrangements of interaction, however, may limit our ability to understand how posture and position shape interaction. Here we focused on these embodied properties of dyadic interaction in the context of object play. We followed 30 mother-infant dyads across the first year of life (at 3, 6, 9, and 12 months) and observed them during 5 min of play with a standard set of toys. Using an interval-based coding system, we measured developmental change in infant posture, how mothers and infants positioned themselves relative to one another, and how they populated interaction spaces with objects. Results showed that mother-infant dyads co-constructed interaction spaces and that the contributions of each partner changed across development. Dyads progressively adopted a broader spatial co-orientation during play (e.g., positioned at right angles) across the first year. Moreover, advances in infants' postural skills, particularly increases in the use of independent sitting in real time, uniquely predicted change in dyadic co-orientation and infants' actions with objects, independent of age. Taken together, we show that the embodied properties of dyadic object play help determine how interactions are physically organized and unfold, both in real time and across the first year of life.