ISBN: (Print) 1581139144
An automated method for manipulating facial video recordings, and its application to the study of human dynamic face perception, was developed. Known camera parameters, head shape, and rigid head pose for each recorded video frame allowed texels in the texture map of the actor's head model to be registered with corresponding pixels in each video frame. The manipulated texture for each video frame was then reapplied to the actor's head model, and the model was rendered either in isolation, with altered rigid head motion or head shape, or used to manipulate the original facial texture of the video clip. The effect of motion on facial expression was also investigated using a paradigm drawn from face recognition experiments.
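The registration step described above amounts to a pinhole projection: with the camera intrinsics and the per-frame rigid head pose known, each model point (and hence each texel) maps to a pixel in that frame. A minimal sketch follows; the intrinsic matrix and pose values are hypothetical, not the paper's actual calibration.

```python
import numpy as np

def project_texel(point_model, R, t, K):
    """Project a 3D point on the head model into frame pixel coordinates.

    point_model : (3,) point in model coordinates
    R, t        : rigid head pose for this frame (rotation matrix, translation)
    K           : 3x3 camera intrinsic matrix
    """
    p_cam = R @ point_model + t   # model -> camera coordinates
    uv = K @ p_cam                # perspective projection
    return uv[:2] / uv[2]         # homogeneous -> pixel coordinates

# Hypothetical camera and pose values for illustration only
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
print(project_texel(np.array([0.1, 0.0, 0.0]), R, t, K))  # -> [360. 240.]
```

Running this mapping for every texel of the head model's texture map, per frame, yields the texel-to-pixel registration the method relies on.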
Possible methods for facilitating more accurate perception of egocentric distance in immersive virtual environments (IVEs) were developed. Two elements were considered in the experiment: removing the possibility of cognitive dissonance arising from the virtual environment differing from the real environment, and examining whether users should be provided with short-range haptic feedback about the presence, size, and spatial location of a real object in the virtual environment. It was found that people walked more slowly in the IVE. The virtual environment (VE) was rendered in full three dimensions (3D) using photorealistic textures and was colocated with the real-world environment.
Many common materials, including fruit, wax and human skin, are somewhat translucent. What makes an object look translucent or opaque? Here we use a recently developed computer graphics model of subsurface light transport [Jensen, et al., 2001] to study the factors that determine perceived translucency. We discuss how physical factors, such as light-source direction, can alter the apparent translucency of an object, finding that objects are perceived to be more translucent when illuminated from behind than from in front. We also study the role of a range of image cues, including colour, contrast and blur, in the perception of translucency. Although we learn a lot about images of translucent materials, we find that many simple candidate sources of information fail to predict how translucent an object looks. We suggest that the visual system does not rely solely on these simple image statistics to estimate translucency: the relevant stimulus information remains to be discovered.
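The subsurface light transport model cited here [Jensen, et al., 2001] is commonly evaluated through its dipole diffusion approximation. A minimal sketch of the diffuse reflectance term is below; the absorption and scattering parameters are illustrative assumptions, not values from this study.

```python
import numpy as np

def dipole_Rd(r, sigma_a, sigma_s_prime, eta=1.3):
    """Diffuse reflectance R_d(r) of the dipole approximation at radius r.

    sigma_a       : absorption coefficient
    sigma_s_prime : reduced scattering coefficient
    eta           : relative index of refraction
    """
    sigma_t_prime = sigma_a + sigma_s_prime         # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime     # reduced albedo
    sigma_tr = np.sqrt(3.0 * sigma_a * sigma_t_prime)
    # Internal diffuse Fresnel reflectance (empirical fit)
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)
    z_r = 1.0 / sigma_t_prime            # depth of real source
    z_v = z_r * (1.0 + 4.0 * A / 3.0)    # height of virtual source
    d_r = np.sqrt(r**2 + z_r**2)
    d_v = np.sqrt(r**2 + z_v**2)
    term = lambda z, d: z * (sigma_tr * d + 1.0) * np.exp(-sigma_tr * d) / d**3
    return alpha_prime / (4.0 * np.pi) * (term(z_r, d_r) + term(z_v, d_v))
```

Reflectance falls off with distance from the point of illumination; the rate of that falloff, governed by the scattering parameters, is one physical correlate of how translucent the material appears.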
Studies have shown that although shading is an important shape cue, visual perception of surface shape from shading alone is severely limited when the surface is viewed locally without other visual cues such as occluding contours [Mamassian and Kersten 1996; Erens et al. 1993]. Research has shown that when the "right" texture is added to the surface, observers can reliably infer the 3D structure of the underlying shape. In our previous work we found that subjects' shape judgments are significantly better when the shaded surface is textured with a principal-direction-oriented pattern than with other directional textures following either a uniformly constant direction or varying, non-geodesic paths unrelated to the surface geometry. In this paper we report findings from a new study further investigating the effect of anisotropic textures on shape perception when the surface texture is represented as a pattern of luminance variations as well as of surface relief variations. We hypothesized that 1) observers' performance would be better with relief textures than with luminance textures, and that 2) it would be poorer with anisotropic textures that do not follow the principal directions. The results confirmed both hypotheses.
3D graphic scenes are only correctly rendered for one viewpoint. Without laborious calibration, however, observers seldom view the monitor from this viewpoint. Even in visual experiments using headrests, inter-subject variability in head size and eye position results in many subjects viewing the display "off-axis", producing well-known distortions in perceptual judgments. The goal is to correctly render graphic displays for the application or experiment based on a simple set of perceptual judgments made by the user. We have two approaches. Our first approach uses point matches between points on a transparency and 3D haptic points that the user makes with the Phantom device. We use well-known calibration techniques from computer vision to estimate the transformation matrix between the mirror and the monitor, as well as the position of the eye. This method requires the presence of a 3D calibrated object (the Phantom, in our case). Our second method uses the same transparency and user-adjustable points on the monitor to derive a transformation matrix between mirror and monitor, as well as the position of the eye. This method does not require a calibrated object and hence is more generally applicable.
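The second, object-free method amounts to estimating a planar transformation from point correspondences. A standard direct-linear-transform (DLT) fit of the kind used in such calibration might be sketched as follows; the correspondences below are hypothetical, not data from the paper.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via DLT (>= 4 pairs)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, found via SVD
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]   # normalize so H[2,2] == 1

# Hypothetical correspondences: a pure translation by (5, -2)
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(5, -2), (6, -2), (6, -1), (5, -1)]
H = estimate_homography(src, dst)
```

With noisy, user-adjusted points, the same least-squares machinery returns the best-fit transformation rather than an exact one, which is why a handful of extra correspondences helps.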
We describe a new computational approach to stylizing the colors of an image by using a reference image. During processing, we take characteristics of human color perception into account to generate more appealing results. Our system starts by classifying each pixel value into one of a set of basic color categories, derived from our psycho-physiological experiments. The basic color categories are perceptual categories that are universal to everyone, regardless of nationality or cultural background. These categories restrict the color transformations so as to avoid generating unnatural results. Our system then renders a new image by transferring colors from a reference image to the input image, based on this categorization. To avoid artifacts due to the explicit clustering, our system applies fuzzy categorization when pseudo-contours appear in the resulting image. We present a variety of results and show that our color transformation performs a large yet natural color transformation without any sense of incongruity, and that the resulting images automatically capture the characteristics of the color use of the reference image.
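The hard classification step can be sketched as nearest-centroid assignment. The centroid values below are hypothetical placeholders, since the paper derives its categories psychophysically rather than from fixed RGB values.

```python
import numpy as np

# Hypothetical RGB centroids for a few universal basic color categories
# (stand-ins for the psychophysically derived categories in the paper)
CATEGORY_CENTROIDS = {
    "red":    (200, 40, 40),
    "green":  (60, 160, 60),
    "blue":   (50, 70, 190),
    "yellow": (230, 210, 50),
    "black":  (20, 20, 20),
    "white":  (240, 240, 240),
}

def classify_pixel(rgb):
    """Assign a pixel to the nearest category centroid (hard clustering)."""
    rgb = np.asarray(rgb, dtype=float)
    return min(CATEGORY_CENTROIDS,
               key=lambda name: np.sum((rgb - CATEGORY_CENTROIDS[name]) ** 2))

print(classify_pixel((210, 50, 45)))  # -> red
```

Restricting color transfer to pixels in the same category, as the paper does, then reduces to matching reference and input pixels category by category; the fuzzy variant would replace this hard `min` with weighted membership across nearby categories.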
Two experiments were conducted to compare distance perception in real and virtual environments. In Experiment 1, adults estimated how long it would take to walk to targets in real and virtual environments by starting and stopping a stopwatch while looking at a target person standing between 20 and 120 ft away. The real environment was a large grassy lawn in front of a university building. We replicated this scene in our virtual environment using a nonstereoscopic, large screen immersive display system. We found that people underestimated time to walk in both environments for distances of 40-60 ft and beyond. However, time-to-walk estimates were virtually identical across the two environments. In Experiment 2, 10- and 12-year-old children and adults estimated time to walk in real and virtual environments both with and without vision. Adults again underestimated time to walk in both environments for distances of 60 ft and beyond. Again, their estimates were virtually identical in the real and virtual environment both with and without vision. Children's time-to-walk estimates were also very similar across the two environments under both viewing conditions. We conclude that distance perception may be better in virtual environments involving large screen immersive displays than those involving head mounted displays (HMDs).
It is difficult with most current computer interfaces to rotate a virtual object so that it matches the orientation of another virtual object. Times to perform this simple task can exceed 20 seconds, whereas the same kind of rotation can be accomplished with real objects, and with some VR interfaces, in less than two seconds. In many advanced 3D user interfaces, the hand manipulating a virtual object is not in the same place as the object being manipulated. The available evidence suggests that this is not usually a significant problem for manipulations requiring translations of virtual objects, but it is when rotations are required. We hypothesize that the problems may be caused by frame-of-reference effects: mismatches between the visual frame of reference and the haptic frame of reference. Here we report two experiments designed to study interactions between visual and haptic reference frames. In our first study we investigated the effect of rotating the frame of the controller with respect to the frame of the object being rotated. We measured a broad U-shaped relationship. Subjects could tolerate quite large mismatches, but when the orientation mismatch approached 90 degrees, performance deteriorated rapidly, by up to a factor of 5. In our second experiment we manipulated both rotational and translational correspondence between visual and haptic frames of reference. We predicted that the haptic reference frame might rotate in egocentric coordinates when the input device was in a different location than the virtual object. The experimental results showed a change in the direction predicted; they are consistent with a rotation of the haptic frame of reference, although only by about half the magnitude predicted. Implications for the design of control devices are discussed.
Level of detail (LOD) rendering techniques reduce the geometric complexity of 3D models, sacrificing visual rendering quality in order to increase frame rendering rates. Perceptually adaptive LOD rendering techniques take into account the characteristics of the human visual system to minimize visible artifacts attributable to the reduced LOD. While these techniques have previously been examined in the context of high-performance rendering systems, it is not clear whether their benefits overcome the behavioral costs associated with a reduced LOD on ordinary desktop systems. To answer this question, two perceptually adaptive rendering techniques, one velocity-dependent and one gaze-contingent, were implemented in the Unreal™ rendering engine on a standard desktop computer and monitor. These techniques were evaluated in separate experiments in which participants performed a visual search for a target object among distractor objects in a perceptually rendered virtual home interior, using a mouse to rotate the viewport. In the first experiment, objects moving across the observer's field of view were rendered in less detail than stationary objects, taking advantage of the fact that visual sensitivity to the details of moving objects is substantially reduced. Reaction times to detect the target remained constant with decreasing detail, whereas reaction times to localize a target decreased. In the second experiment, an eye tracker was used to render objects at the point of gaze in more detail than objects in the periphery, taking advantage of the fact that visual sensitivity is greatest at that location. Reaction times to detect the target increased with decreasing detail, whereas reaction times to localize a target decreased. The results from these experiments suggest that a reduced LOD can impede target identification; however, the resultant increase in frame rates facilitates virtual interaction. Overall, the behavioral costs associated with perc
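A velocity-dependent LOD policy of the kind described in the first experiment can be sketched as a simple threshold rule on retinal velocity. The threshold and level mapping below are illustrative assumptions, not the study's actual parameters.

```python
def select_lod(angular_velocity_deg_s, lod_levels=4, velocity_threshold=20.0):
    """Pick a discrete LOD level from an object's retinal (angular) velocity.

    Level 0 is full detail; higher levels are progressively coarser.
    Objects moving slower than the threshold keep full detail, exploiting
    reduced visual sensitivity to the details of fast-moving objects.
    """
    if angular_velocity_deg_s <= velocity_threshold:
        return 0
    # One coarser level per additional multiple of the threshold,
    # capped at the coarsest available level
    level = int(angular_velocity_deg_s // velocity_threshold)
    return min(level, lod_levels - 1)

print(select_lod(10.0), select_lod(50.0), select_lod(500.0))  # -> 0 2 3
```

A gaze-contingent variant would apply the same idea with eccentricity from the tracked gaze point in place of angular velocity.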