ISBN:
(Print) 9783030904395; 9783030904388
We describe SORGATE, a procedure for extracting geometry and texture from images of solids of revolution (SORs). It uses multivariate optimization to determine the camera parameters needed to build the viewing transform, and to reconstruct the geometry of the SOR from its silhouette. Beyond analyzing individual images, it can combine data extracted from the same SOR viewed from different directions to produce a single composite texture, which can be merged with the blended geometries to yield a reconstructed 3D model. No prior knowledge other than the object's rotational symmetry is required; camera viewing parameters are derived directly from the image. SORGATE is useful when 3D modeling of SORs is needed but direct measurement of the physical objects is infeasible. Because it does not require camera calibration, it is also fast, inexpensive, and practical. One use case is researchers and curators who wish to display and/or analyze the art on historical vases; metric reconstruction and proper texturing of the objects would allow this without requiring viewing in person.
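The core geometric step, revolving a silhouette-derived profile about the rotation axis, can be sketched as follows. This is a minimal illustration rather than the SORGATE implementation; the function name, profile samples, and mesh layout are hypothetical, and it assumes the camera parameters and a rectified silhouette have already been recovered.

# Minimal sketch: build a solid-of-revolution mesh by revolving a profile
# (radius as a function of height) sampled from a rectified silhouette.
import numpy as np

def revolve_profile(radii, heights, n_segments=64):
    """radii, heights: 1D arrays of equal length; returns (vertices, faces)."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_segments, endpoint=False)
    verts = np.array([(r * np.cos(t), h, r * np.sin(t))
                      for r, h in zip(radii, heights) for t in thetas])
    faces = []
    rows, cols = len(radii), n_segments
    for i in range(rows - 1):              # connect adjacent rings with triangles
        for j in range(cols):
            a, b = i * cols + j, i * cols + (j + 1) % cols
            c, d = (i + 1) * cols + j, (i + 1) * cols + (j + 1) % cols
            faces.extend([(a, b, c), (b, d, c)])
    return verts, np.array(faces)

# Example: a vase-like profile standing in for silhouette samples.
heights = np.linspace(0.0, 1.0, 50)
radii = 0.3 + 0.1 * np.sin(3.0 * np.pi * heights)
vertices, faces = revolve_profile(radii, heights)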
Per-pixel Revolution Mapping is an image-based modeling and rendering (IBMR) technique. It creates virtual 3D objects without polygonal meshes, using a single RGBA texture that stores the data needed to generate the revolved surface. The main limitation of this technique is that the 3D revolved models are rendered without realistic surface wrinkles. In this paper, we present an improvement that enhances the realism of the 3D revolved models by combining revolution mapping with bump mapping. To keep the real-time depth scaling of the microrelief synchronized with the resulting shading, we add a scaling factor that makes a realistic depth animation possible. This new technique creates very convincing 3D models with realistic-looking surface wrinkles and allows rendering at interactive frame rates.
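The role of the scaling factor can be illustrated with a small sketch in which one scalar drives both the microrelief depth and the perturbed normals, so the shading stays synchronized with the depth animation. This is a hedged illustration in ordinary Python rather than the paper's per-pixel shader; the function and parameter names are hypothetical.

# Sketch: bump-mapped Lambertian shading whose relief and shading share one
# depth_scale factor, so animating that factor keeps them consistent.
import numpy as np

def shade_with_bump(height_map, light_dir, depth_scale):
    h = depth_scale * height_map                   # scale the microrelief depth
    dhdx = np.gradient(h, axis=1)                  # finite-difference slopes
    dhdy = np.gradient(h, axis=0)
    normals = np.dstack([-dhdx, -dhdy, np.ones_like(h)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    return np.clip(normals @ light, 0.0, 1.0)      # per-pixel diffuse term

# Example: animate the relief from flat (0.0) to full depth (1.0).
bumps = np.random.default_rng(0).random((64, 64))
frames = [shade_with_bump(bumps, (0.3, 0.3, 0.9), s) for s in (0.0, 0.5, 1.0)]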
We introduce an approach for analyzing Wikipedia and other text, together with online photos, to produce annotated 3D models of famous tourist sites. The approach is completely automated and leverages online text and photo co-occurrences via Google Image Search. It enables a number of new interactions, which we demonstrate in a new 3D visualization tool. Text can be selected to move the camera to the corresponding objects, 3D bounding boxes provide anchors back to the text describing them, and the overall narrative of the text provides a temporal guide for automatically flying through the scene to visualize the world as you read about it. We show compelling results on several major tourist sites.
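The text-to-geometry linking such a tool relies on can be pictured with a small data-structure sketch: each annotated text span maps to a labeled 3D bounding box, so selecting the span drives a camera fly-to and picking the box highlights the span. The names and fields below are hypothetical, not the paper's data model.

# Sketch of an annotation record tying a text span to a 3D bounding box.
from dataclasses import dataclass

@dataclass
class Annotation:
    text_span: tuple          # (start, end) character range in the source article
    label: str                # e.g. "Rose window"
    bbox_min: tuple           # (x, y, z) corner of the 3D bounding box
    bbox_max: tuple

def fly_to_target(a: Annotation):
    """A simple camera look-at target: the bounding-box center."""
    return tuple((lo + hi) / 2.0 for lo, hi in zip(a.bbox_min, a.bbox_max))

annotations = [Annotation((120, 146), "Rose window", (-2.0, 8.0, 0.5), (2.0, 12.0, 1.5))]
camera_target = fly_to_target(annotations[0])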
We introduce a new representation for modeling and rendering plants and trees. The representation is multiresolution and can be generated and rendered automatically and efficiently. For the trunk and branches we procedurally build a multiresolution structure that preserves the visual appearance of a tree when rendered at different levels of detail. For the leaves we build a hierarchy of images by pre-processing the botanical tree structure. Unlike other representations, the visual quality of our representation does not depend on viewing position and direction. Our models can be applied to any computer graphics area that requires modeling outdoor scenes, such as interactive walkthroughs and fly-bys, realistic rendering, simulation, and computer games. (C) 2010 Published by Elsevier Ltd.
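How a multiresolution representation like this gets used at render time can be sketched with a simple level-of-detail selector driven by projected screen size; the thresholds and names below are illustrative assumptions, not the authors' criteria.

# Sketch: pick a level of detail for a tree model from its projected size.
def select_lod(distance_to_camera, tree_radius, focal_length, levels):
    projected_size = focal_length * tree_radius / max(distance_to_camera, 1e-6)
    if projected_size > 200:          # pixels; thresholds are illustrative only
        return 0                      # finest trunk/branch/leaf detail
    if projected_size > 50:
        return min(1, levels - 1)     # intermediate level
    return levels - 1                 # coarsest, image-based level

lod = select_lod(distance_to_camera=40.0, tree_radius=3.0, focal_length=800.0, levels=3)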
1. Background
Virtual reality (VR) technology is now at the frontier of modern information science. VR builds on computer graphics, computer vision, and other cutting-edge areas of today's computer technology. VR technology has now been applied successfully in a variety of fields such as military simulation, industry, medical training and visualization, environmental protection, and entertainment.
We applied an image-based modeling and rendering technique to reduce the data size of a CG model obtained as a visualization result. To date, this technique has been applied to the reconstruction of 3D graphical models from real objects; the size of the models can be reduced effectively because both color information and geometric information are reduced. We applied a silhouette-and-voxel method that does not require design data for geometric simplification. Using our system, we simplified two models; one of them, involving medical data, was reduced by about 85% in file size. An accompanying subjective quality test showed that our approach maintains approximately the same visual quality as the geometric simplification method traditionally used.
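The silhouette-and-voxel step can be pictured with a generic voxel-carving sketch: a voxel survives only if it projects inside the silhouette in every calibrated view. This is our own illustration of the general technique, with hypothetical projection and mask interfaces, not the paper's implementation.

# Sketch: carve a voxel set against binary silhouette masks from calibrated views.
import numpy as np

def carve_voxels(voxel_centers, views):
    """voxel_centers: (N, 3) array; views: list of (project_fn, silhouette_mask)."""
    keep = np.ones(len(voxel_centers), dtype=bool)
    for project, mask in views:
        uv = project(voxel_centers)                    # (N, 2) pixel coordinates
        u = uv[:, 0].round().astype(int)
        v = uv[:, 1].round().astype(int)
        in_frame = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        inside = np.zeros(len(voxel_centers), dtype=bool)
        inside[in_frame] = mask[v[in_frame], u[in_frame]]
        keep &= inside                                 # must lie inside every silhouette
    return voxel_centers[keep]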
We propose an image-based method for real-time modeling of seated humans using upper-body video sprites, which is suitable for applications such as teleconferencing and distance learning. A database of representative video sprite sequences is pre-acquired and pre-uploaded to each remote rendering site. At run-time, for each input sprite, a closely matching sprite is located in the database and the index of the matching sprite is sent to the rendering site, which drastically reduces the data rate. Unlike other data compression methods, our method takes advantage of the limited number of significant body positions a participant assumes during a session. Exploiting redundancy between frames with distant time stamps enables aggressive compression rates with high visual and semantic fidelity.
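The run-time matching step can be sketched as a nearest-neighbour lookup over a pre-shared descriptor table, after which only the matching index crosses the network. The descriptor and distance below are assumptions for illustration, not the paper's choices.

# Sketch: match an input sprite against a pre-shared database and send the index.
import numpy as np

def sprite_descriptor(sprite_rgba):
    """Crude descriptor: a downsampled grayscale thumbnail, flattened."""
    gray = sprite_rgba[..., :3].mean(axis=2)
    return gray[::8, ::8].ravel() / 255.0

def best_match_index(descriptor, database):
    """database: (K, D) descriptors already present at the rendering site."""
    return int(np.argmin(np.linalg.norm(database - descriptor, axis=1)))

# Per frame, only this small integer is transmitted instead of pixel data.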
The appearance of an object changes greatly under different lighting conditions. Even so, previous studies have demonstrated that the appearance of an object under varying illumination can be represented by a linear subspace. A set of basis images spanning such a linear subspace can be obtained by applying principal component analysis (PCA) to a large number of images taken under different lighting conditions. Since little is known about how to sample the appearance of an object in order to obtain its basis images correctly, it has been common practice to use as many input images as possible. In this study, we present a novel method for analytically obtaining a set of basis images of an object under varying illumination from input images taken under a properly chosen set of light sources, such as point light sources or extended light sources. Our method uses the sampling theorem for spherical harmonics to determine a set of lighting directions that efficiently samples the appearance of an object. We further consider the aliasing caused by insufficient sampling of the object's appearance. In particular, we investigate the effectiveness of using extended light sources for modeling the appearance of an object under varying illumination without suffering the aliasing caused by insufficient sampling.
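The conventional PCA route mentioned above can be sketched in a few lines: stack the images taken under different lighting conditions and keep the leading principal components as basis images (nine is a common choice for near-Lambertian objects). This is the baseline the paper improves on, not its analytic spherical-harmonics method; array shapes and names are assumptions.

# Sketch: PCA basis images from images captured under varying lighting.
import numpy as np

def lighting_basis(images, n_basis=9):
    """images: (K, H, W) grayscale images under K lighting conditions.
    Returns (mean_image, basis) with basis of shape (n_basis, H, W)."""
    k, h, w = images.shape
    data = images.reshape(k, h * w).astype(float)
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)  # principal axes
    return mean.reshape(h, w), vt[:n_basis].reshape(-1, h, w)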
ISBN:
(Print) 9781424409051
An effective method of depth image-based rendering is proposed that applies texture mapping onto virtual but pre-defined multiple planes in the scene, called virtual planes. The method allows the viewpoint to be either static or moving around, including crossing the plane of the source images. In this approach, the relief textures from depth images are mapped onto the virtual planes, and through a pre-warping process the virtual planes are converted into standard polygonal textures. After the virtual-plane mosaics are assembled, the resulting image, which supports 3D objects and immersive scenes, can be generated by polygonal texture mapping. In addition, both hardware and software implementations of the method can increase the power of conventional texture mapping in image-based rendering. In particular, the scope of the viewpoint can be extended into the inner space of the depth images; as a result, the method provides a novel solution for constructing real-time walkthrough systems and for panoramic modeling from an arbitrary viewpoint in the depth-image space.
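The per-pixel warping that depth image-based rendering relies on can be sketched generically: back-project each depth pixel to 3D and reproject it into the target (virtual-plane) view. This is a standard forward-warping sketch under assumed intrinsics and pose, not the paper's pre-warping formulation.

# Sketch: forward-warp a depth image from the source camera to a target view.
import numpy as np

def forward_warp(depth, K_src, K_dst, R, t):
    """depth: (H, W) metric depth; R, t map source-camera points to the target frame.
    Returns (H, W, 2) target-view pixel coordinates for every source pixel."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)  # homogeneous pixels
    rays = pix @ np.linalg.inv(K_src).T                             # back-project to rays
    points = rays * depth[..., None]                                # 3D points, source frame
    points_dst = points @ R.T + t                                   # move to target frame
    proj = points_dst @ K_dst.T
    return proj[..., :2] / proj[..., 2:3]                           # perspective divide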